[jira] [Commented] (HBASE-5824) HRegion.incrementColumnValue is not used in trunk

2012-04-20 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13258269#comment-13258269
 ] 

Zhihong Yu commented on HBASE-5824:
---

The implication of this JIRA is that the type of exception thrown when a certain 
constraint isn't satisfied differs depending on whether auto flush is enabled.
When I changed the test case slightly:
{code}
Index: src/test/java/org/apache/hadoop/hbase/constraint/TestConstraint.java
===
--- src/test/java/org/apache/hadoop/hbase/constraint/TestConstraint.java
(revision 1328376)
+++ src/test/java/org/apache/hadoop/hbase/constraint/TestConstraint.java
(working copy)
@@ -105,7 +105,7 @@
 
 util.getHBaseAdmin().createTable(desc);
 HTable table = new HTable(util.getConfiguration(), tableName);
-table.setAutoFlush(true);
+table.setAutoFlush(false);
 
 // test that we do fail on violation
 Put put = new Put(row1);
{code}
I got:
{code}
testConstraintFails(org.apache.hadoop.hbase.constraint.TestConstraint)  Time 
elapsed: 4.144 sec  <<< FAILURE!
java.lang.AssertionError
  at org.junit.Assert.fail(Assert.java:92)
  at org.junit.Assert.assertTrue(Assert.java:43)
  at org.junit.Assert.assertTrue(Assert.java:54)
  at 
org.apache.hadoop.hbase.constraint.TestConstraint.testConstraintFails(TestConstraint.java:118)
{code}
This makes exception handling unnecessarily complicated.

 HRegion.incrementColumnValue is not used in trunk
 -

 Key: HBASE-5824
 URL: https://issues.apache.org/jira/browse/HBASE-5824
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: hbase-5824.patch, hbase-5824_v2.patch, 
 hbase_5824.addendum


 on 0.94 a call to client.HTable#incrementColumnValue will end up in 
 HRegion#incrementColumnValue.  On trunk all calls to 
 HTable.incrementColumnValue go to HRegion#increment.
 My guess is that HTable#incrementColumnValue and HTable#increment serialize 
 to the same thing over the wire so that the remote HRegionServer no longer 
 knows which htable method was called.
 To repro I checked out trunk and put a break point in 
 HRegion#incrementColumnValue and then ran TestFromClientSide.  The breakpoint 
 wasn't hit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5824) HRegion.incrementColumnValue is not used in trunk

2012-04-20 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13258293#comment-13258293
 ] 

Zhihong Yu commented on HBASE-5824:
---

Constraint is in 0.94.
This change causes a regression in the way the client responds to a Constraint 
violation.
Now the client has to deal with two exceptions instead of only one exception (as in 
0.94).

 HRegion.incrementColumnValue is not used in trunk
 -

 Key: HBASE-5824
 URL: https://issues.apache.org/jira/browse/HBASE-5824
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: hbase-5824.patch, hbase-5824_v2.patch, 
 hbase_5824.addendum


 on 0.94 a call to client.HTable#incrementColumnValue will end up in 
 HRegion#incrementColumnValue.  On trunk all calls to 
 HTable.incrementColumnValue go to HRegion#increment.
 My guess is that HTable#incrementColumnValue and HTable#increment serialize 
 to the same thing over the wire so that the remote HRegionServer no longer 
 knows which htable method was called.
 To repro I checked out trunk and put a break point in 
 HRegion#incrementColumnValue and then ran TestFromClientSide.  The breakpoint 
 wasn't hit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5824) HRegion.incrementColumnValue is not used in trunk

2012-04-20 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13258313#comment-13258313
 ] 

Zhihong Yu commented on HBASE-5824:
---

In 0.94, the client only needs to deal with RetriesExhaustedWithDetailsException, 
whose only cause is a ConstraintException.
With Jimmy's patch, the client needs to deal with both 
RetriesExhaustedWithDetailsException (not a subclass of DoNotRetryIOException) 
and ConstraintException (a subclass of DoNotRetryIOException). This is because 
there are two execution paths for Put.
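For illustration, a client that wants uniform behavior under either setting now has to catch both types. A minimal sketch (the handleViolation helper is hypothetical):
{code}
import java.io.IOException;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
import org.apache.hadoop.hbase.constraint.ConstraintException;

// Sketch only: with auto flush on, the violation surfaces directly as a
// ConstraintException (a DoNotRetryIOException); with auto flush off, the
// buffered flush wraps it in RetriesExhaustedWithDetailsException instead.
void putWithConstraintCheck(HTable table, Put put) throws IOException {
  try {
    table.put(put);
    table.flushCommits();
  } catch (ConstraintException ce) {
    handleViolation(ce);                // direct path (auto flush enabled)
  } catch (RetriesExhaustedWithDetailsException re) {
    handleViolation(re.getCause(0));    // buffered path (auto flush disabled)
  }
}
{code}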

I am afraid Benoit would have more to complain about in HBASE-5796 :-)

 HRegion.incrementColumnValue is not used in trunk
 -

 Key: HBASE-5824
 URL: https://issues.apache.org/jira/browse/HBASE-5824
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: hbase-5824.patch, hbase-5824_v2.patch, 
 hbase_5824.addendum


 on 0.94 a call to client.HTable#incrementColumnValue will end up in 
 HRegion#incrementColumnValue.  On trunk all calls to 
 HTable.incrementColumnValue go to HRegion#increment.
 My guess is that HTable#incrementColumnValue and HTable#increment serialize 
 to the same thing over the wire so that the remote HRegionServer no longer 
 knows which htable method was called.
 To repro I checked out trunk and put a break point in 
 HRegion#incrementColumnValue and then ran TestFromClientSide.  The breakpoint 
 wasn't hit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5824) HRegion.incrementColumnValue is not used in trunk

2012-04-20 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13258377#comment-13258377
 ] 

Zhihong Yu commented on HBASE-5824:
---

@Jimmy:
Shall we pursue your proposal in a separate JIRA so that we can keep the 
original intent of this JIRA?

 HRegion.incrementColumnValue is not used in trunk
 -

 Key: HBASE-5824
 URL: https://issues.apache.org/jira/browse/HBASE-5824
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: 5824-addendum-v2.txt, hbase-5824.patch, 
 hbase-5824_v2.patch, hbase_5824.addendum


 on 0.94 a call to client.HTable#incrementColumnValue will end up in 
 HRegion#incrementColumnValue.  On trunk all calls to 
 HTable.incrementColumnValue go to HRegion#increment.
 My guess is that HTable#incrementColumnValue and HTable#increment serialize 
 to the same thing over the wire so that the remote HRegionServer no longer 
 knows which htable method was called.
 To repro I checked out trunk and put a break point in 
 HRegion#incrementColumnValue and then ran TestFromClientSide.  The breakpoint 
 wasn't hit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5824) HRegion.incrementColumnValue is not used in trunk

2012-04-20 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13258390#comment-13258390
 ] 

Zhihong Yu commented on HBASE-5824:
---

I talked with Jimmy offline. We agree that a single Put execution path isn't of 
high priority.
We will leave the discussion and decision making to HBASE-5845.

 HRegion.incrementColumnValue is not used in trunk
 -

 Key: HBASE-5824
 URL: https://issues.apache.org/jira/browse/HBASE-5824
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: 5824-addendum-v2.txt, hbase-5824.patch, 
 hbase-5824_v2.patch, hbase_5824.addendum


 on 0.94 a call to client.HTable#incrementColumnValue will end up in 
 HRegion#incrementColumnValue.  On trunk all calls to 
 HTable.incrementColumnValue go to HRegion#increment.
 My guess is that HTable#incrementColumnValue and HTable#increment serialize 
 to the same thing over the wire so that the remote HRegionServer no longer 
 knows which htable method was called.
 To repro I checked out trunk and put a break point in 
 HRegion#incrementColumnValue and then ran TestFromClientSide.  The breakpoint 
 wasn't hit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5824) HRegion.incrementColumnValue is not used in trunk

2012-04-20 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13258400#comment-13258400
 ] 

Zhihong Yu commented on HBASE-5824:
---

Integrated addendum v2 to trunk.

 HRegion.incrementColumnValue is not used in trunk
 -

 Key: HBASE-5824
 URL: https://issues.apache.org/jira/browse/HBASE-5824
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: 5824-addendum-v2.txt, hbase-5824.patch, 
 hbase-5824_v2.patch, hbase_5824.addendum


 on 0.94 a call to client.HTable#incrementColumnValue will end up in 
 HRegion#incrementColumnValue.  On trunk all calls to 
 HTable.incrementColumnValue go to HRegion#increment.
 My guess is that HTable#incrementColumnValue and HTable#increment serialize 
 to the same thing over the wire so that the remote HRegionServer no longer 
 knows which htable method was called.
 To repro I checked out trunk and put a break point in 
 HRegion#incrementColumnValue and then ran TestFromClientSide.  The breakpoint 
 wasn't hit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5809) Avoid move api to take the destination server same as the source server.

2012-04-20 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13258474#comment-13258474
 ] 

Zhihong Yu commented on HBASE-5809:
---

I looped the test using:
{code}
~/runtest.sh 4 TestMasterObserver#testRegionTransitionOperations
{code}
and got the following:
{code}
Failed tests:   
testRegionTransitionOperations(org.apache.hadoop.hbase.coprocessor.TestMasterObserver):
 Coprocessor should have been called on region move
...
TestMasterObserver#testRegionTransitionOperations failed, iteration: 2
{code}

 Avoid move api to take the destination server same as the source server.
 

 Key: HBASE-5809
 URL: https://issues.apache.org/jira/browse/HBASE-5809
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.92.1
Reporter: ramkrishna.s.vasudevan
Assignee: rajeshbabu
Priority: Minor
  Labels: client
 Fix For: 0.96.0

 Attachments: HBASE-5809.patch, HBASE-5809.patch


 In Move currently we take any destination specified, and if the destination is 
 the same as the source we still do an unassign and assign.  Here we can run into 
 problems due to RegionAlreadyInTransitionException, thus leaving the 
 region in RIT for a long time.  We can avoid this by not allowing the 
 move to happen in that case.
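One way to realize this, shown purely as an illustrative sketch (the helper name is not from the actual patch), is to reject a move whose destination equals the region's current location before any unassign is issued:
{code}
import org.apache.hadoop.hbase.ServerName;

// Illustrative sketch only, not the committed patch: when the requested
// destination equals the region's current server, the move is a no-op, so the
// unassign/assign cycle is skipped and the region never enters RIT for it.
static boolean isNoopMove(ServerName current, ServerName destination) {
  return destination != null && destination.equals(current);
}
{code}
The master would then simply return early when isNoopMove(...) is true instead of issuing the unassign.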

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5821) Incorrect handling of null value in Coprocessor aggregation function min()

2012-04-19 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257578#comment-13257578
 ] 

Zhihong Yu commented on HBASE-5821:
---

Integrated to 0.92, 0.94 and trunk.

Thanks for the patch, Maryann.

Thanks for the review Himanshu.

 Incorrect handling of null value in Coprocessor aggregation function min()
 --

 Key: HBASE-5821
 URL: https://issues.apache.org/jira/browse/HBASE-5821
 Project: HBase
  Issue Type: Bug
  Components: coprocessors
Affects Versions: 0.92.1
Reporter: Maryann Xue
Assignee: Maryann Xue
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-5821.patch


 Both in AggregateImplementation and AggregationClient, the evaluation of the 
 current minimum value is like:
 min = (min == null || ci.compare(result, min) < 0) ? result : min;
 The LongColumnInterpreter treats a null value as the least value, 
 while the above expression treats min as the greater value when it is null. 
 Thus, the real minimum value gets discarded if a null value comes later.
 max() could also be wrong if a different ColumnInterpreter other than 
 LongColumnInterpreter treats null value differently (as the greatest).
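A minimal sketch of the kind of fix the description points at (not necessarily the committed patch): guard against a null candidate so the interpreter's null semantics can no longer discard a real minimum:
{code}
// Sketch only: accept the candidate as the new minimum when it is non-null and
// either no minimum has been seen yet or it compares lower, so a later null
// can no longer discard a previously found minimum.
if (result != null && (min == null || ci.compare(result, min) < 0)) {
  min = result;
}
{code}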

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5741) ImportTsv does not check for table existence

2012-04-19 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257579#comment-13257579
 ] 

Zhihong Yu commented on HBASE-5741:
---

Integrated to 0.94 as well.

 ImportTsv does not check for table existence 
 -

 Key: HBASE-5741
 URL: https://issues.apache.org/jira/browse/HBASE-5741
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.90.4
Reporter: Clint Heath
Assignee: Himanshu Vashishtha
 Fix For: 0.96.0, 0.94.1

 Attachments: 5741-94.txt, 5741-v3.txt, HBase-5741-v2.patch, 
 HBase-5741.patch


 The usage statement for the importtsv command to hbase claims this:
 "Note: if you do not use this option, then the target table must already 
 exist in HBase" (in reference to the importtsv.bulk.output command-line 
 option)
 The truth is, the table must exist no matter what, importtsv cannot and will 
 not create it for you.
 This is the case because the createSubmittableJob method of ImportTsv does 
 not even attempt to check if the table exists already, much less create it:
 (From org.apache.hadoop.hbase.mapreduce.ImportTsv.java)
 305 HTable table = new HTable(conf, tableName);
 The HTable method signature in use there assumes the table exists and runs a 
 meta scan on it:
 (From org.apache.hadoop.hbase.client.HTable.java)
 142 * Creates an object to access a HBase table.
 ...
 151 public HTable(Configuration conf, final String tableName)
 What we should do inside of createSubmittableJob is something similar to what 
 the completebulkloads command would do:
 (Taken from org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.java)
 690 boolean tableExists = this.doesTableExist(tableName);
 691 if (!tableExists) this.createTable(tableName,dirPath);
 Currently the docs are misleading, the table in fact must exist prior to 
 running importtsv. We should check if it exists rather than assume it's 
 already there and throw the below exception:
 12/03/14 17:15:42 WARN client.HConnectionManager$HConnectionImplementation: 
 Encountered problems when prefetch META table: 
 org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
 table: myTable2, row=myTable2,,99
   at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:150)
 ...
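 A hedged sketch of the suggested pre-check (the helper name and error message are illustrative, not the committed patch):
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

// Illustrative sketch: fail fast with a clear message instead of letting the
// meta scan in the HTable constructor throw TableNotFoundException later.
static void verifyTableExists(Configuration conf, String tableName) throws IOException {
  HBaseAdmin admin = new HBaseAdmin(conf);
  try {
    if (!admin.tableExists(tableName)) {
      throw new IOException("Table '" + tableName + "' does not exist. "
          + "Create it before running importtsv, or use importtsv.bulk.output.");
    }
  } finally {
    admin.close();
  }
}
{code}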

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5831) hadoopqa builds not completing

2012-04-19 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257583#comment-13257583
 ] 

Zhihong Yu commented on HBASE-5831:
---

Interesting: TestLoadIncrementalHFilesSplitRecovery wasn't listed in the trunk 
Jenkins build output.

@Jonathan H:
This test was introduced in HBASE-4552 and modified in HBASE-4740. Maybe you 
have some insight.

 hadoopqa builds not completing
 --

 Key: HBASE-5831
 URL: https://issues.apache.org/jira/browse/HBASE-5831
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: stack
Assignee: stack
Priority: Blocker
 Attachments: 5831.remove.TestLoadIncrementalHFilesSplitRecovery.txt


 No test failures, but the build complains it has failed.  The trunk build seems to 
 have the same affliction:
 {code}
 Results :
 Tests run: 909, Failures: 0, Errors: 0, Skipped: 9
 [INFO] 
 
 [INFO] BUILD FAILURE
 [INFO] 
 
 [INFO] Total time: 41:19.273s
 [INFO] Finished at: Wed Apr 18 21:54:31 UTC 2012
 [INFO] Final Memory: 59M/451M
 [INFO] 
 
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-surefire-plugin:2.12-TRUNK-HBASE-2:test 
 (secondPartTestsExecution) on project hbase: Failure or timeout - [Help 1]
 [ERROR] 
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR] 
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
 -1 overall.  Here are the results of testing the latest attachment 
   http://issues.apache.org/jira/secure/attachment/12523250/5811+%281%29.txt
   against trunk revision .
 +1 @author.  The patch does not contain any @author tags.
 +1 tests included.  The patch appears to include 3 new or modified tests.
 +1 javadoc.  The javadoc tool did not generate any warning messages.
 +1 javac.  The applied patch does not increase the total number of javac 
 compiler warnings.
 -1 findbugs.  The patch appears to introduce 6 new Findbugs (version 
 1.3.9) warnings.
 +1 release audit.  The applied patch does not increase the total number 
 of release audit warnings.
  -1 core tests.  The patch failed these unit tests:
 {code}
 It's not apparent that any particular test is not finishing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5831) hadoopqa builds not completing

2012-04-19 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257587#comment-13257587
 ] 

Zhihong Yu commented on HBASE-5831:
---

I captured the stack trace twice while the test was running and got similar results:
{code}
"main" prio=5 tid=102801000 nid=0x100601000 waiting on condition [1005fe000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <7854b0c00> (a 
java.util.concurrent.FutureTask$Sync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:156)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:811)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:969)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1281)
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
at java.util.concurrent.FutureTask.get(FutureTask.java:83)
at 
org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.bulkLoadPhase(LoadIncrementalHFiles.java:296)
at 
org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:248)
at 
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesSplitRecovery.testBulkLoadPhaseFailure(TestLoadIncrementalHFilesSplitRecovery.java:256)
{code}
Maybe disabling testBulkLoadPhaseFailure is enough for Jenkins to get back to 
normal.

 hadoopqa builds not completing
 --

 Key: HBASE-5831
 URL: https://issues.apache.org/jira/browse/HBASE-5831
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: stack
Assignee: stack
Priority: Blocker
 Attachments: 5831.remove.TestLoadIncrementalHFilesSplitRecovery.txt, 
 5831.remove.TestLoadIncrementalHFilesSplitRecovery.txt


 No test failures, but the build complains it has failed.  The trunk build seems to 
 have the same affliction:
 {code}
 Results :
 Tests run: 909, Failures: 0, Errors: 0, Skipped: 9
 [INFO] 
 
 [INFO] BUILD FAILURE
 [INFO] 
 
 [INFO] Total time: 41:19.273s
 [INFO] Finished at: Wed Apr 18 21:54:31 UTC 2012
 [INFO] Final Memory: 59M/451M
 [INFO] 
 
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-surefire-plugin:2.12-TRUNK-HBASE-2:test 
 (secondPartTestsExecution) on project hbase: Failure or timeout - [Help 1]
 [ERROR] 
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR] 
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
 -1 overall.  Here are the results of testing the latest attachment 
   http://issues.apache.org/jira/secure/attachment/12523250/5811+%281%29.txt
   against trunk revision .
 +1 @author.  The patch does not contain any @author tags.
 +1 tests included.  The patch appears to include 3 new or modified tests.
 +1 javadoc.  The javadoc tool did not generate any warning messages.
 +1 javac.  The applied patch does not increase the total number of javac 
 compiler warnings.
 -1 findbugs.  The patch appears to introduce 6 new Findbugs (version 
 1.3.9) warnings.
 +1 release audit.  The applied patch does not increase the total number 
 of release audit warnings.
  -1 core tests.  The patch failed these unit tests:
 {code}
 It's not apparent that any particular test is not finishing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5831) hadoopqa builds not completing

2012-04-19 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257595#comment-13257595
 ] 

Zhihong Yu commented on HBASE-5831:
---

With the following change, I was able to run the test 6 times without hanging 
the JVM:
{code}
Index: 
src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesSplitRecovery.java
===
--- 
src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesSplitRecovery.java
 (revision 1327779)
+++ 
src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesSplitRecovery.java
 (working copy)
@@ -220,7 +220,7 @@
* Test that shows that exception thrown from the RS side will result in an
* exception on the LIHFile client.
*/
-  @Test(expected=IOException.class)
+  // @Test(expected=IOException.class)
   public void testBulkLoadPhaseFailure() throws Exception {
 String table = "bulkLoadPhaseFailure";
 setupTable(table, 10);
{code}

 hadoopqa builds not completing
 --

 Key: HBASE-5831
 URL: https://issues.apache.org/jira/browse/HBASE-5831
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: stack
Assignee: stack
Priority: Blocker
 Attachments: 5831.remove.TestLoadIncrementalHFilesSplitRecovery.txt, 
 5831.remove.TestLoadIncrementalHFilesSplitRecovery.txt


 No test failures, but the build complains it has failed.  The trunk build seems to 
 have the same affliction:
 {code}
 Results :
 Tests run: 909, Failures: 0, Errors: 0, Skipped: 9
 [INFO] 
 
 [INFO] BUILD FAILURE
 [INFO] 
 
 [INFO] Total time: 41:19.273s
 [INFO] Finished at: Wed Apr 18 21:54:31 UTC 2012
 [INFO] Final Memory: 59M/451M
 [INFO] 
 
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-surefire-plugin:2.12-TRUNK-HBASE-2:test 
 (secondPartTestsExecution) on project hbase: Failure or timeout - [Help 1]
 [ERROR] 
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR] 
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
 -1 overall.  Here are the results of testing the latest attachment 
   http://issues.apache.org/jira/secure/attachment/12523250/5811+%281%29.txt
   against trunk revision .
 +1 @author.  The patch does not contain any @author tags.
 +1 tests included.  The patch appears to include 3 new or modified tests.
 +1 javadoc.  The javadoc tool did not generate any warning messages.
 +1 javac.  The applied patch does not increase the total number of javac 
 compiler warnings.
 -1 findbugs.  The patch appears to introduce 6 new Findbugs (version 
 1.3.9) warnings.
 +1 release audit.  The applied patch does not increase the total number 
 of release audit warnings.
  -1 core tests.  The patch failed these unit tests:
 {code}
 It's not apparent that any particular test is not finishing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5809) Avoid move api to take the destination server same as the source server.

2012-04-19 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257638#comment-13257638
 ] 

Zhihong Yu commented on HBASE-5809:
---

+1 on patch.

 Avoid move api to take the destination server same as the source server.
 

 Key: HBASE-5809
 URL: https://issues.apache.org/jira/browse/HBASE-5809
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
Reporter: ramkrishna.s.vasudevan
Priority: Minor
  Labels: patch
 Attachments: HBASE-5809.patch


 In Move currently we take any destination specified, and if the destination is 
 the same as the source we still do an unassign and assign.  Here we can run into 
 problems due to RegionAlreadyInTransitionException, thus leaving the 
 region in RIT for a long time.  We can avoid this by not allowing the 
 move to happen in that case.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5809) Avoid move api to take the destination server same as the source server.

2012-04-19 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257677#comment-13257677
 ] 

Zhihong Yu commented on HBASE-5809:
---

You meant 'unless objection', I assume :-)

 Avoid move api to take the destination server same as the source server.
 

 Key: HBASE-5809
 URL: https://issues.apache.org/jira/browse/HBASE-5809
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.1
Reporter: ramkrishna.s.vasudevan
Priority: Minor
  Labels: patch
 Fix For: 0.96.0, 0.94.1

 Attachments: HBASE-5809.patch


 In Move currently we take any destination specified, and if the destination is 
 the same as the source we still do an unassign and assign.  Here we can run into 
 problems due to RegionAlreadyInTransitionException, thus leaving the 
 region in RIT for a long time.  We can avoid this by not allowing the 
 move to happen in that case.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5824) HRegion.incrementColumnValue is not used in trunk

2012-04-19 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257859#comment-13257859
 ] 

Zhihong Yu commented on HBASE-5824:
---

{code}
+if (autoFlush && puts.size() == 1) {
{code}
How is autoFlush related to a single put?

 HRegion.incrementColumnValue is not used in trunk
 -

 Key: HBASE-5824
 URL: https://issues.apache.org/jira/browse/HBASE-5824
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Jimmy Xiang
 Attachments: hbase-5824.patch, hbase-5824_v2.patch


 on 0.94 a call to client.HTable#incrementColumnValue will end up in 
 HRegion#incrementColumnValue.  On trunk all calls to 
 HTable.incrementColumnValue go to HRegion#increment.
 My guess is that HTable#incrementColumnValue and HTable#increment serialize 
 to the same thing over the wire so that the remote HRegionServer no longer 
 knows which htable method was called.
 To repro I checked out trunk and put a break point in 
 HRegion#incrementColumnValue and then ran TestFromClientSide.  The breakpoint 
 wasn't hit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5635) If getTaskList() returns null splitlogWorker is down. It wont serve any requests.

2012-04-19 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257869#comment-13257869
 ] 

Zhihong Yu commented on HBASE-5635:
---

@Anoop:
hbase.splitlog.zk.retries is used by SplitLogManager.

Does anyone have comments about the latest patch?

 If getTaskList() returns null splitlogWorker is down. It wont serve any 
 requests. 
 --

 Key: HBASE-5635
 URL: https://issues.apache.org/jira/browse/HBASE-5635
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.92.1
Reporter: Kristam Subba Swathi
 Attachments: HBASE-5635.1.patch, HBASE-5635.2.patch, HBASE-5635.patch


 During the hlog split operation, if all the zookeepers are down, then the 
 paths will be returned as null and the splitworker thread will exit.
 Now this regionserver will not be able to acquire any other tasks since the 
 splitworker thread has exited.
 Please find the attached code for more details
 {code}
 private List<String> getTaskList() {
   for (int i = 0; i < zkretries; i++) {
     try {
       return (ZKUtil.listChildrenAndWatchForNewChildren(this.watcher,
           this.watcher.splitLogZNode));
     } catch (KeeperException e) {
       LOG.warn("Could not get children of znode " +
           this.watcher.splitLogZNode, e);
       try {
         Thread.sleep(1000);
       } catch (InterruptedException e1) {
         LOG.warn("Interrupted while trying to get task list ...", e1);
         Thread.currentThread().interrupt();
         return null;
       }
     }
   }
 {code}
 in the org.apache.hadoop.hbase.regionserver.SplitLogWorker 
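 For context, one way to keep the worker alive is sketched below; the exitWorker flag stands in for however the worker tracks shutdown, and this is an assumption about a possible fix, not the committed patch:
{code}
// Assumption, not the committed patch: instead of giving up after zkretries and
// returning null (which ends the worker), keep retrying until the worker is
// asked to stop, so a transient ZooKeeper outage does not kill the thread.
private List<String> getTaskList() {
  while (!exitWorker) {
    try {
      return ZKUtil.listChildrenAndWatchForNewChildren(this.watcher,
          this.watcher.splitLogZNode);
    } catch (KeeperException e) {
      LOG.warn("Could not get children of znode " + this.watcher.splitLogZNode, e);
      try {
        Thread.sleep(1000);
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        return null;
      }
    }
  }
  return null;
}
{code}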
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5824) HRegion.incrementColumnValue is not used in trunk

2012-04-19 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257874#comment-13257874
 ] 

Zhihong Yu commented on HBASE-5824:
---

Minor comment:
The line below exceeds 100 characters.
{code}
+// we need to periodically see if the writebuffer is full instead of 
waiting until the end of the List
{code}

 HRegion.incrementColumnValue is not used in trunk
 -

 Key: HBASE-5824
 URL: https://issues.apache.org/jira/browse/HBASE-5824
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Jimmy Xiang
 Attachments: hbase-5824.patch, hbase-5824_v2.patch


 on 0.94 a call to client.HTable#incrementColumnValue will end up in 
 HRegion#incrementColumnValue.  On trunk all calls to 
 HTable.incrementColumnValue go to HRegion#increment.
 My guess is that HTable#incrementColumnValue and HTable#increment serialize 
 to the same thing over the wire so that the remote HRegionServer no longer 
 knows which htable method was called.
 To repro I checked out trunk and put a break point in 
 HRegion#incrementColumnValue and then ran TestFromClientSide.  The breakpoint 
 wasn't hit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5824) HRegion.incrementColumnValue is not used in trunk

2012-04-19 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257961#comment-13257961
 ] 

Zhihong Yu commented on HBASE-5824:
---

+1 on Addendum.
The two tests pass with addendum:
{code}
  542  mt -Dtest=TestConstraint
  543  mt -Dtest=TestRegionServerCoprocessorExceptionWithRemove
{code}

 HRegion.incrementColumnValue is not used in trunk
 -

 Key: HBASE-5824
 URL: https://issues.apache.org/jira/browse/HBASE-5824
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: hbase-5824.patch, hbase-5824_v2.patch, 
 hbase_5824.addendum


 on 0.94 a call to client.HTable#incrementColumnValue will end up in 
 HRegion#incrementColumnValue.  On trunk all calls to 
 HTable.incrementColumnValue go to HRegion#increment.
 My guess is that HTable#incrementColumnValue and HTable#increment serialize 
 to the same thing over the wire so that the remote HRegionServer no longer 
 knows which htable method was called.
 To repro I checked out trunk and put a break point in 
 HRegion#incrementColumnValue and then ran TestFromClientSide.  The breakpoint 
 wasn't hit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5824) HRegion.incrementColumnValue is not used in trunk

2012-04-19 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257963#comment-13257963
 ] 

Zhihong Yu commented on HBASE-5824:
---

Integrated addendum to TRUNK.

Thanks for the quick turn around, Jimmy.

 HRegion.incrementColumnValue is not used in trunk
 -

 Key: HBASE-5824
 URL: https://issues.apache.org/jira/browse/HBASE-5824
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: hbase-5824.patch, hbase-5824_v2.patch, 
 hbase_5824.addendum


 on 0.94 a call to client.HTable#incrementColumnValue will end up in 
 HRegion#incrementColumnValue.  On trunk all calls to 
 HTable.incrementColumnValue go to HRegion#increment.
 My guess is that HTable#incrementColumnValue and HTable#increment serialize 
 to the same thing over the wire so that the remote HRegionServer no longer 
 knows which htable method was called.
 To repro I checked out trunk and put a break point in 
 HRegion#incrementColumnValue and then ran TestFromClientSide.  The breakpoint 
 wasn't hit.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5833) 0.92 build has been failing pretty consistently on TestMasterFailover....

2012-04-19 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257971#comment-13257971
 ] 

Zhihong Yu commented on HBASE-5833:
---

In build #380, TestMasterFailover hung again:
{code}
Running org.apache.hadoop.hbase.master.TestMasterFailover
Running org.apache.hadoop.hbase.master.TestClockSkewDetection
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.241 sec
{code}

 0.92 build has been failing pretty consistently on TestMasterFailover
 -

 Key: HBASE-5833
 URL: https://issues.apache.org/jira/browse/HBASE-5833
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.92.2

 Attachments: 5833.txt


 Trunk seems fine but 0.92 fails on this test pretty regularly.  Running it 
 locally, it seems to hang for me.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5790) ZKUtil deleteRecursively should be a recoverable operation

2012-04-18 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13256628#comment-13256628
 ] 

Zhihong Yu commented on HBASE-5790:
---

Our codebase uses NIOServerCnxnFactory, which is not in ZooKeeper 3.3, 
meaning ZooKeeper 3.4 is required.

 ZKUtil deleteRecursively should be a recoverable operation
 --

 Key: HBASE-5790
 URL: https://issues.apache.org/jira/browse/HBASE-5790
 Project: HBase
  Issue Type: Improvement
Reporter: Jesse Yates
Assignee: Jesse Yates
  Labels: zookeeper
 Fix For: 0.96.0, 0.94.1

 Attachments: java_HBASE-5790-v1.patch, java_HBASE-5790.patch


 As of 3.4.3, ZooKeeper now has full, multi-operation transactions. This means 
 we can wholesale delete chunks of the zk tree and ensure that we don't have 
 any pesky recursive delete issues where we delete the children of a node, but 
 then a child joins before deletion of the parent. Even without transactions, 
 this should be the behavior, but it is possible to make it much cleaner now 
 that we have this new feature in zk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5737) Minor Improvements related to balancer.

2012-04-18 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13256653#comment-13256653
 ] 

Zhihong Yu commented on HBASE-5737:
---

Latest patch is good.
Please update Fix Version/s.

 Minor Improvements related to balancer.
 ---

 Key: HBASE-5737
 URL: https://issues.apache.org/jira/browse/HBASE-5737
 Project: HBase
  Issue Type: Improvement
  Components: master
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Attachments: HBASE-5737.patch, HBASE-5737_1.patch, 
 HBASE-5737_2.patch, HBASE-5737_3.patch


 Currently in Am.getAssignmentByTable() we use a result map which is currently 
 a HashMap.  It would be better to use a TreeMap.  
 MetaReader.fullScan already uses a TreeMap so that the naming order is 
 maintained. I felt this change could be very useful in cases where we are 
 extending the DefaultLoadBalancer.  
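 A minimal sketch of the suggested change (the map's key and value types are assumed here for illustration):
{code}
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.ServerName;

// Illustrative sketch only: a TreeMap iterates in sorted key order, matching
// what MetaReader.fullScan already provides, whereas a HashMap's iteration
// order is unspecified.
Map<ServerName, List<HRegionInfo>> assignments =
    new TreeMap<ServerName, List<HRegionInfo>>();
{code}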

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5821) Incorrect handling of null value in Coprocessor aggregation function min()

2012-04-18 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13256658#comment-13256658
 ] 

Zhihong Yu commented on HBASE-5821:
---

Patch makes sense.

 Incorrect handling of null value in Coprocessor aggregation function min()
 --

 Key: HBASE-5821
 URL: https://issues.apache.org/jira/browse/HBASE-5821
 Project: HBase
  Issue Type: Bug
  Components: coprocessors
Affects Versions: 0.92.1
Reporter: Maryann Xue
Assignee: Maryann Xue
 Attachments: HBASE-5821.patch


 Both in AggregateImplementation and AggregationClient, the evaluation of the 
 current minimum value is like:
 min = (min == null || ci.compare(result, min) < 0) ? result : min;
 The LongColumnInterpreter treats a null value as the least value, 
 while the above expression treats min as the greater value when it is null. 
 Thus, the real minimum value gets discarded if a null value comes later.
 max() could also be wrong if a different ColumnInterpreter other than 
 LongColumnInterpreter treats null value differently (as the greatest).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5787) Table owner can't disable/delete its own table

2012-04-18 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13256777#comment-13256777
 ] 

Zhihong Yu commented on HBASE-5787:
---

Integrated to trunk.

I think this should go to 0.94.0 as well.
What do you think, Lars?

 Table owner can't disable/delete its own table
 --

 Key: HBASE-5787
 URL: https://issues.apache.org/jira/browse/HBASE-5787
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
  Labels: acl, security
 Attachments: HBASE-5787-tests-wrong-names.patch, HBASE-5787-v0.patch, 
 HBASE-5787-v1.patch


 A user with CREATE privileges can create a table, but cannot disable it, 
 because the disable operation requires ADMIN privileges. Also, if a table is 
 already disabled, anyone can remove it.
 {code}
 public void preDeleteTable(ObserverContext<MasterCoprocessorEnvironment> c,
 byte[] tableName) throws IOException {
   requirePermission(Permission.Action.CREATE);
 }
 public void preDisableTable(ObserverContext<MasterCoprocessorEnvironment> c,
 byte[] tableName) throws IOException {
   /* TODO: Allow for users with global CREATE permission and the table owner 
 */
   requirePermission(Permission.Action.ADMIN);
 }
 {code}
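 For illustration only, an owner-aware variant might look like the sketch below; isTableOwner and getActiveUser are placeholders, not necessarily what the patch uses:
{code}
public void preDisableTable(ObserverContext<MasterCoprocessorEnvironment> c,
    byte[] tableName) throws IOException {
  // Sketch only: let the table owner (or a user with global CREATE) disable
  // the table; everyone else still needs ADMIN. Helper names are placeholders.
  if (isTableOwner(getActiveUser(), tableName)) {
    return;
  }
  requirePermission(Permission.Action.ADMIN);
}
{code}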

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5787) Table owner can't disable/delete its own table

2012-04-18 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13256862#comment-13256862
 ] 

Zhihong Yu commented on HBASE-5787:
---

I applied latest patch to 0.92 and was able to run the test:
{code}
Running org.apache.hadoop.hbase.security.access.TestAccessController
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.115 sec

Results :

Tests run: 21, Failures: 0, Errors: 0, Skipped: 0
{code}

 Table owner can't disable/delete its own table
 --

 Key: HBASE-5787
 URL: https://issues.apache.org/jira/browse/HBASE-5787
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
  Labels: acl, security
 Fix For: 0.92.2, 0.94.0, 0.96.0

 Attachments: HBASE-5787-tests-wrong-names.patch, HBASE-5787-v0.patch, 
 HBASE-5787-v1.patch


 A user with CREATE privileges can create a table, but cannot disable it, 
 because the disable operation requires ADMIN privileges. Also, if a table is 
 already disabled, anyone can remove it.
 {code}
 public void preDeleteTable(ObserverContext<MasterCoprocessorEnvironment> c,
 byte[] tableName) throws IOException {
   requirePermission(Permission.Action.CREATE);
 }
 public void preDisableTable(ObserverContext<MasterCoprocessorEnvironment> c,
 byte[] tableName) throws IOException {
   /* TODO: Allow for users with global CREATE permission and the table owner 
 */
   requirePermission(Permission.Action.ADMIN);
 }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5547) Don't delete HFiles when in backup mode

2012-04-18 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13256887#comment-13256887
 ] 

Zhihong Yu commented on HBASE-5547:
---

Currently we have the following in enable():
{code}
// then add the table to the list of znodes to archive
String tableNode = getTableNode(zooKeeper, table);
ZKUtil.createSetData(zooKeeper, tableNode, archive);
{code}
If we encode the backup mode in the data associated with the table node, the 
region servers would be able to register watchers for the table nodes (under 
zooKeeper.archiveHFileZNode). This way region servers would get notifications 
from ZooKeeper about whether backup mode is in effect.
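A sketch of how a region server might read and watch that flag, assuming the backup state is encoded in the table node's data (the node path and the flag encoding are assumptions):
{code}
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.zookeeper.ZKUtil;
import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
import org.apache.zookeeper.KeeperException;

// Sketch only: getDataAndWatch both reads the current value and registers a
// watcher, so the region server is notified when the master toggles the flag.
// The "backup" encoding is an assumption for illustration.
static boolean isBackupMode(ZooKeeperWatcher zk, String tableNode)
    throws KeeperException {
  byte[] data = ZKUtil.getDataAndWatch(zk, tableNode);
  return data != null && Bytes.toString(data).equals("backup");
}
{code}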

 Don't delete HFiles when in backup mode
 -

 Key: HBASE-5547
 URL: https://issues.apache.org/jira/browse/HBASE-5547
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
Assignee: Jesse Yates

 This came up in a discussion I had with Stack.
 It would be nice if HBase could be notified that a backup is in progress (via 
 a znode for example) and in that case either:
 1. rename HFiles to be deleted to file.bck
 2. rename the HFiles into a special directory
 3. rename them to a general trash directory (which would not need to be tied 
 to backup mode).
 That way it should be able to get a consistent backup based on HFiles (HDFS 
 snapshots or hard links would be better options here, but we do not have 
 those).
 #1 makes cleanup a bit harder.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5732) Remove the SecureRPCEngine and merge the security-related logic in the core engine

2012-04-18 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257053#comment-13257053
 ] 

Zhihong Yu commented on HBASE-5732:
---

We haven't made ZooKeeper 3.4.x a requirement for 0.96 yet.
In testing this work, please make sure a ZooKeeper 3.3.x ensemble can be used for 
the insecure RPC.

 Remove the SecureRPCEngine and merge the security-related logic in the core 
 engine
 --

 Key: HBASE-5732
 URL: https://issues.apache.org/jira/browse/HBASE-5732
 Project: HBase
  Issue Type: Improvement
Reporter: Devaraj Das
 Attachments: rpcengine-merge.patch


 Remove the SecureRPCEngine and merge the security-related logic in the core 
 engine. Follow up to HBASE-5727.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5625) Avoid byte buffer allocations when reading a value from a Result object

2012-04-18 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257086#comment-13257086
 ] 

Zhihong Yu commented on HBASE-5625:
---

@Stack:
Can you give Tudor some advice?

 Avoid byte buffer allocations when reading a value from a Result object
 ---

 Key: HBASE-5625
 URL: https://issues.apache.org/jira/browse/HBASE-5625
 Project: HBase
  Issue Type: Improvement
  Components: client
Affects Versions: 0.92.1
Reporter: Tudor Scurtu
Assignee: Tudor Scurtu
  Labels: patch
 Fix For: 0.96.0

 Attachments: 5625.txt, 5625v2.txt, 5625v3.txt, 5625v4.txt, 
 5625v5.txt, 5625v6.txt


 When calling Result.getValue(), an extra dummy KeyValue and its associated 
 underlying byte array are allocated, as well as a persistent buffer that will 
 contain the returned value.
 These can be avoided by reusing a static array for the dummy object and by 
 passing a ByteBuffer object as a value destination buffer to the read method.
 The current functionality is maintained, and we have added a separate method 
 call stack that employs the described changes. I will provide more details 
 with the patch.
 Running tests with a profiler, the reduction in read time appears to be up 
 to 40%.
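 A hedged sketch of the call pattern the description implies; the loadValue name and its ByteBuffer-destination signature are assumptions drawn from the description, not the final API:
{code}
import java.nio.ByteBuffer;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch only: the caller supplies a reusable ByteBuffer so Result does not
// allocate a fresh dummy KeyValue and value array on every read.
ByteBuffer dest = ByteBuffer.allocate(1024);   // reused across reads
for (Result r : results) {                     // 'results' is illustrative
  dest.clear();
  if (r.loadValue(Bytes.toBytes("cf"), Bytes.toBytes("qual"), dest)) {
    process(dest);                             // value sits between position 0 and the limit
  }
}
{code}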

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5782) Edits can be appended out of seqid order since HBASE-4487

2012-04-18 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257100#comment-13257100
 ] 

Zhihong Yu commented on HBASE-5782:
---

Do we need to log another issue for trunk that would finish Todd's work?

 Edits can be appended out of seqid order since HBASE-4487
 -

 Key: HBASE-5782
 URL: https://issues.apache.org/jira/browse/HBASE-5782
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: Lars Hofhansl
Priority: Blocker
 Fix For: 0.94.0

 Attachments: 5782-lars-v2.txt, 5782-sketch.txt, 5782-v3.txt, 
 5782.txt, 5782.unfinished-stack.txt, 5782.unittest.txt, HBASE-5782.patch, 
 hbase-5782.txt


 Create a table with 1000 splits; after the region assignment, kill the 
 regionserver which contains the META table.
 Here a few regions are missing after the log splitting and region assignment. 
 The HBCK report shows multiple region holes were created.
 The same scenario was verified multiple times in 0.92.1 with no issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5349) Automagically tweak global memstore and block cache sizes based on workload

2012-04-18 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257103#comment-13257103
 ] 

Zhihong Yu commented on HBASE-5349:
---

This is a plan worth pursuing.

 Automagically tweak global memstore and block cache sizes based on workload
 ---

 Key: HBASE-5349
 URL: https://issues.apache.org/jira/browse/HBASE-5349
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.92.0
Reporter: Jean-Daniel Cryans
 Fix For: 0.96.0


 Hypertable does a neat thing where it changes the size given to the CellCache 
 (our MemStores) and Block Cache based on the workload. If you need an image, 
 scroll down at the bottom of this link: 
 http://www.hypertable.com/documentation/architecture/
 That'd be one less thing to configure.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5787) Table owner can't disable/delete its own table

2012-04-18 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257119#comment-13257119
 ] 

Zhihong Yu commented on HBASE-5787:
---

Thanks for the information, Andy.

Integrated to 0.92 and 0.94 as well.

Thanks for the patch, Matteo.

 Table owner can't disable/delete its own table
 --

 Key: HBASE-5787
 URL: https://issues.apache.org/jira/browse/HBASE-5787
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
  Labels: acl, security
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-5787-tests-wrong-names.patch, HBASE-5787-v0.patch, 
 HBASE-5787-v1.patch


 A user with CREATE privileges can create a table, but cannot disable it, 
 because the disable operation requires ADMIN privileges. Also, if a table is 
 already disabled, anyone can remove it.
 {code}
 public void preDeleteTable(ObserverContext<MasterCoprocessorEnvironment> c,
 byte[] tableName) throws IOException {
   requirePermission(Permission.Action.CREATE);
 }
 public void preDisableTable(ObserverContext<MasterCoprocessorEnvironment> c,
 byte[] tableName) throws IOException {
   /* TODO: Allow for users with global CREATE permission and the table owner 
 */
   requirePermission(Permission.Action.ADMIN);
 }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5816) Balancer and ServerShutdownHandler concurrently reassigning the same region

2012-04-18 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257217#comment-13257217
 ] 

Zhihong Yu commented on HBASE-5816:
---

HBASE-5396 was only integrated to 0.90.
So it shouldn't be the cause of the problem in trunk.

 Balancer and ServerShutdownHandler concurrently reassigning the same region
 ---

 Key: HBASE-5816
 URL: https://issues.apache.org/jira/browse/HBASE-5816
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.90.6
Reporter: Maryann Xue
Priority: Critical
 Attachments: HBASE-5816.patch


 The first assign thread exits with success after updating the RegionState to 
 PENDING_OPEN, while the second assign follows immediately into assign() and 
 fails the RegionState check in setOfflineInZooKeeper(). This causes the 
 master to abort.
 In the case below, the two concurrent assigns occurred when the AM tried to 
 assign a region to a dying/dead RS, and meanwhile the ServerShutdownHandler 
 tried to assign this region (from the region plan) spontaneously.
 2012-04-17 05:44:57,648 INFO org.apache.hadoop.hbase.master.HMaster: balance 
 hri=TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b., 
 src=hadoop05.sh.intel.com,60020,1334544902186, 
 dest=xmlqa-clv16.sh.intel.com,60020,1334612497253
 2012-04-17 05:44:57,648 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Starting unassignment of 
 region TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b. 
 (offlining)
 2012-04-17 05:44:57,648 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Sent CLOSE to 
 serverName=hadoop05.sh.intel.com,60020,1334544902186, load=(requests=0, 
 regions=0, usedHeap=0, maxHeap=0) for region 
 TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b.
 2012-04-17 05:44:57,666 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Handling new unassigned 
 node: /hbase/unassigned/fe38fe31caf40b6e607a3e6bbed6404b 
 (region=TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b.,
  server=hadoop05.sh.intel.com,60020,1334544902186, state=RS_ZK_REGION_CLOSING)
 2012-04-17 05:52:58,984 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Forcing OFFLINE; 
 was=TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b. 
 state=CLOSED, ts=1334612697672, 
 server=hadoop05.sh.intel.com,60020,1334544902186
 2012-04-17 05:52:58,984 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: 
 master:6-0x236b912e9b3000e Creating (or updating) unassigned node for 
 fe38fe31caf40b6e607a3e6bbed6404b with OFFLINE state
 2012-04-17 05:52:59,096 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Using pre-existing plan for 
 region TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b.; 
 plan=hri=TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b.,
  src=hadoop05.sh.intel.com,60020,1334544902186, 
 dest=xmlqa-clv16.sh.intel.com,60020,1334612497253
 2012-04-17 05:52:59,096 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Assigning region 
 TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b. to 
 xmlqa-clv16.sh.intel.com,60020,1334612497253
 2012-04-17 05:54:19,159 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Forcing OFFLINE; 
 was=TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b. 
 state=PENDING_OPEN, ts=1334613179096, 
 server=xmlqa-clv16.sh.intel.com,60020,1334612497253
 2012-04-17 05:54:59,033 WARN 
 org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of 
 TABLE_ORDER_CUSTOMER,,1334017820846.fe38fe31caf40b6e607a3e6bbed6404b. to 
 serverName=xmlqa-clv16.sh.intel.com,60020,1334612497253, load=(requests=0, 
 regions=0, usedHeap=0, maxHeap=0), trying to assign elsewhere instead; retry=0
 java.net.SocketTimeoutException: Call to /10.239.47.87:60020 failed on socket 
 timeout exception: java.net.SocketTimeoutException: 12 millis timeout 
 while waiting for channel to be ready for read. ch : 
 java.nio.channels.SocketChannel[connected local=/10.239.47.89:41302 
 remote=/10.239.47.87:60020]
 at 
 org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:805)
 at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:778)
 at 
 org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:283)
 at $Proxy7.openRegion(Unknown Source)
 at 
 org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:573)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1127)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:912)
 at 
 

[jira] [Commented] (HBASE-5821) Incorrect handling of null value in Coprocessor aggregation function min()

2012-04-18 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13257235#comment-13257235
 ] 

Zhihong Yu commented on HBASE-5821:
---

@Himanshu:
The LCM above refers to LongColumnInterpreter, right ?
Are you fine with Maryann's patch ?

 Incorrect handling of null value in Coprocessor aggregation function min()
 --

 Key: HBASE-5821
 URL: https://issues.apache.org/jira/browse/HBASE-5821
 Project: HBase
  Issue Type: Bug
  Components: coprocessors
Affects Versions: 0.92.1
Reporter: Maryann Xue
Assignee: Maryann Xue
 Attachments: HBASE-5821.patch


 Both in AggregateImplementation and AggregationClient, the evaluation of the 
 current minimum value is like:
 min = (min == null || ci.compare(result, min) < 0) ? result : min;
 The LongColumnInterpreter treats a null value as the least value, 
 while the above expression takes min as the greater value when it is null. 
 Thus, the real minimum value gets discarded if a null value comes later.
 max() could also be wrong if a ColumnInterpreter other than 
 LongColumnInterpreter treats a null value differently (as the greatest).
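
A null-safe evaluation along the lines suggested above could look like the following; this is a sketch against the quoted expression, not the exact AggregateImplementation code.
{code}
// Only replace the running minimum when the candidate is non-null and smaller,
// so a null result can never displace a real minimum.
if (result != null && (min == null || ci.compare(result, min) < 0)) {
  min = result;
}
{code}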

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5620) Convert the client protocol of HRegionInterface to PB

2012-04-17 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1324#comment-1324
 ] 

Zhihong Yu commented on HBASE-5620:
---

@Stack:
Do you think the addendum is ready to be integrated ?

 Convert the client protocol of HRegionInterface to PB
 -

 Key: HBASE-5620
 URL: https://issues.apache.org/jira/browse/HBASE-5620
 Project: HBase
  Issue Type: Sub-task
  Components: ipc, master, migration, regionserver
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: hbase-5620-sec.patch, hbase-5620_v3.patch, 
 hbase-5620_v4.patch, hbase-5620_v4.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4435) Add Group By functionality using Coprocessors

2012-04-17 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1327#comment-1327
 ] 

Zhihong Yu commented on HBASE-4435:
---

@Nicole:
The attached patch is half a year old. Do you have a newer version ?

 Add Group By functionality using Coprocessors
 -

 Key: HBASE-4435
 URL: https://issues.apache.org/jira/browse/HBASE-4435
 Project: HBase
  Issue Type: Improvement
  Components: coprocessors
Reporter: Nichole Treadway
Priority: Minor
 Attachments: HBase-4435.patch


 Adds a "Group By"-like functionality to HBase, using the Coprocessor 
 framework. 
 It provides the ability to group the result set on one or more columns 
 (groupBy families). It computes statistics (max, min, sum, count, sum of 
 squares, number missing) for a second column, called the stats column. 
 To use, I've provided two implementations.
 1. In the first, you specify a single group-by column and a stats field:
   statsMap = gbc.getStats(tableName, scan, groupByFamily, 
 groupByQualifier, statsFamily, statsQualifier, statsFieldColumnInterpreter);
 The result is a map with the Group By column value (as a String) to a 
 GroupByStatsValues object. The GroupByStatsValues object has max,min,sum etc. 
 of the stats column for that group.
 2. The second implementation allows you to specify a list of group-by columns 
 and a stats field. The List of group-by columns is expected to contain lists 
 of {column family, qualifier} pairs. 
   statsMap = gbc.getStats(tableName, scan, listOfGroupByColumns, 
 statsFamily, statsQualifier, statsFieldColumnInterpreter);
 The GroupByStatsValues code is adapted from the Solr Stats component.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5545) region can't be opened for a long time. Because the creating File failed.

2012-04-17 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13255780#comment-13255780
 ] 

Zhihong Yu commented on HBASE-5545:
---

{code}
+// in the .tmp directory then on next time creation we will be getting
{code}
Should read 'then the next creation ...'
{code}
+  fs.delete(tmpPath, true);
{code}
Please check the return value from fs.delete().
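
A small sketch of what checking the return value could look like, assuming Hadoop's FileSystem.delete(Path, boolean) semantics (false means the path was not removed):
{code}
// Fail loudly if the stale tmp file cannot be removed, instead of silently
// continuing and hitting AlreadyBeingCreatedException on the next create.
if (fs.exists(tmpPath) && !fs.delete(tmpPath, true)) {
  throw new IOException("Failed to delete existing tmp file " + tmpPath);
}
{code}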

 region can't be opened for a long time. Because the creating File failed.
 -

 Key: HBASE-5545
 URL: https://issues.apache.org/jira/browse/HBASE-5545
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.90.6
Reporter: gaojinchao
Assignee: gaojinchao
 Fix For: 0.90.7, 0.92.2, 0.94.0

 Attachments: HBASE-5545.patch


 Scenario:
 
 1. A file is created. 
 2. But while writing data, all datanodes might have crashed, so writing data 
 will fail.
 3. Now even if close is called in the finally block, close will also fail and 
 throw the exception because writing data failed.
 4. After this, if the RS tries to create the same file again, 
 AlreadyBeingCreatedException will be thrown.
 Suggestion to handle this scenario:
 ---
 1. Check for the existence of the file; if it exists, delete the file and create 
 a new file. 
 Here the delete call for the file will not check whether the file is open or 
 closed.
 Overwrite Option:
 
 1. The overwrite option is applicable only if you are trying to overwrite a 
 closed file.
 2. If the file is not closed, then even with the overwrite option the same 
 AlreadyBeingCreatedException will be thrown.
 This is the expected behaviour, to avoid multiple clients writing to the same 
 file.
 Region server logs:
 org.apache.hadoop.ipc.RemoteException: 
 org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to 
 create file /hbase/test1/12c01902324218d14b17a5880f24f64b/.tmp/.regioninfo 
 for 
 DFSClient_hb_rs_158-1-131-48,20020,1331107668635_1331107669061_-252463556_25 
 on client 158.1.132.19 because current leaseholder is trying to recreate file.
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:1570)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1440)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1382)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:658)
 at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:547)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1137)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1133)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1131)
 at org.apache.hadoop.ipc.Client.call(Client.java:961)
 at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:245)
 at $Proxy6.create(Unknown Source)
 at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at $Proxy6.create(Unknown Source)
 at 
 org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:3643)
 at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:778)
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:364)
 at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:630)
 at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:611)
 at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:518)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.checkRegioninfoOnFilesystem(HRegion.java:424)
 at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:340)
 at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2672)
 at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2658)
 at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:330)
 at 
 org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:116)
 at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 [2012-03-07 20:51:45,858] [WARN ] 
 

[jira] [Commented] (HBASE-5782) Edits can be appended out of seqid order since HBASE-4487

2012-04-17 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13255834#comment-13255834
 ] 

Zhihong Yu commented on HBASE-5782:
---

HLogPerformanceEvaluation verification is needed as well.

 Edits can be appended out of seqid order since HBASE-4487
 -

 Key: HBASE-5782
 URL: https://issues.apache.org/jira/browse/HBASE-5782
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.94.0

 Attachments: 5782-lars-v2.txt, 5782-sketch.txt, 5782.txt, 
 5782.unfinished-stack.txt, HBASE-5782.patch, hbase-5782.txt


 Create a table with 1000 splits; after the region assignment, kill the 
 regionserver which contains the META table.
 Here a few regions are missing after the log splitting and region assignment. 
 The HBCK report shows that multiple region holes got created.
 The same scenario was verified multiple times in 0.92.1 with no issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5741) ImportTsv does not check for table existence

2012-04-17 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13256107#comment-13256107
 ] 

Zhihong Yu commented on HBASE-5741:
---

Integrated to trunk.

Thanks for the patch, Himanshu.

Will wait for 0.94.0 to come out before integrating to 0.94.

 ImportTsv does not check for table existence 
 -

 Key: HBASE-5741
 URL: https://issues.apache.org/jira/browse/HBASE-5741
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.90.4
Reporter: Clint Heath
Assignee: Himanshu Vashishtha
 Fix For: 0.96.0, 0.94.1

 Attachments: 5741-94.txt, 5741-v3.txt, HBase-5741-v2.patch, 
 HBase-5741.patch


 The usage statement for the importtsv command to hbase claims this:
 "Note: if you do not use this option, then the target table must already 
 exist in HBase" (in reference to the importtsv.bulk.output command-line 
 option).
 The truth is, the table must exist no matter what; importtsv cannot and will 
 not create it for you.
 This is the case because the createSubmittableJob method of ImportTsv does 
 not even attempt to check if the table exists already, much less create it:
 (From org.apache.hadoop.hbase.mapreduce.ImportTsv.java)
 305 HTable table = new HTable(conf, tableName);
 The HTable method signature in use there assumes the table exists and runs a 
 meta scan on it:
 (From org.apache.hadoop.hbase.client.HTable.java)
 142 * Creates an object to access a HBase table.
 ...
 151 public HTable(Configuration conf, final String tableName)
 What we should do inside of createSubmittableJob is something similar to what 
 the completebulkloads command would do:
 (Taken from org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.java)
 690 boolean tableExists = this.doesTableExist(tableName);
 691 if (!tableExists) this.createTable(tableName,dirPath);
 Currently the docs are misleading; the table in fact must exist prior to 
 running importtsv. We should check if it exists rather than assume it's 
 already there and throw the exception below:
 12/03/14 17:15:42 WARN client.HConnectionManager$HConnectionImplementation: 
 Encountered problems when prefetch META table: 
 org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
 table: myTable2, row=myTable2,,99
   at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:150)
 ...
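
A hedged sketch of the existence check the description asks for, modeled on the LoadIncrementalHFiles snippet quoted above; HBaseAdmin.tableExists() is a real client call, but the surrounding wiring is illustrative only.
{code}
// Inside createSubmittableJob, before constructing the HTable:
HBaseAdmin admin = new HBaseAdmin(conf);
if (!admin.tableExists(tableName)) {
  // Either fail fast with a clear message, or create the table here,
  // mirroring what completebulkload does.
  throw new TableNotFoundException("Table " + tableName + " does not exist.");
}
HTable table = new HTable(conf, tableName);
{code}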

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5733) AssignmentManager#processDeadServersAndRegionsInTransition can fail with NPE.

2012-04-17 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13256165#comment-13256165
 ] 

Zhihong Yu commented on HBASE-5733:
---

From Hadoop QA test output, I didn't find the hanging test.

Integrated to trunk.

Thanks for the patch Uma.

Thanks for the review, Stack.

 AssignmentManager#processDeadServersAndRegionsInTransition can fail with NPE.
 -

 Key: HBASE-5733
 URL: https://issues.apache.org/jira/browse/HBASE-5733
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.96.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Attachments: HBASE-5733.patch, HBASE-5733.patch, HBASE-5733.patch


 Found while going through the code...
 AssignmentManager#processDeadServersAndRegionsInTransition can fail with NPE 
 as it directly iterates the nodes from 
 listChildrenAndWatchForNewChildren without checking for null.
 Here we also need to handle it with a null check, like in other places.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5782) Edits can be appended out of seqid order since HBASE-4487

2012-04-17 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13256187#comment-13256187
 ] 

Zhihong Yu commented on HBASE-5782:
---

+1 on such a test which would prevent regression.

 Edits can be appended out of seqid order since HBASE-4487
 -

 Key: HBASE-5782
 URL: https://issues.apache.org/jira/browse/HBASE-5782
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.94.0

 Attachments: 5782-lars-v2.txt, 5782-sketch.txt, 5782.txt, 
 5782.unfinished-stack.txt, HBASE-5782.patch, hbase-5782.txt


 Create a table with 1000 splits; after the region assignment, kill the 
 regionserver which contains the META table.
 Here a few regions are missing after the log splitting and region assignment. 
 The HBCK report shows that multiple region holes got created.
 The same scenario was verified multiple times in 0.92.1 with no issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5795) hbase-3927 breaks 0.92-0.94 compatibility

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254718#comment-13254718
 ] 

Zhihong Yu commented on HBASE-5795:
---

VersionedWritable.readFields() would detect the version mismatch and throw an 
exception.

 hbase-3927 breaks 0.92-0.94 compatibility
 ---

 Key: HBASE-5795
 URL: https://issues.apache.org/jira/browse/HBASE-5795
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.94.0

 Attachments: 5795-v2.txt, 5795.unittest.txt


 This commit broke our 0.92/0.94 compatibility:
 {code}
 
 r1136686 | stack | 2011-06-16 14:18:08 -0700 (Thu, 16 Jun 2011) | 1 line
 HBASE-3927 display total uncompressed byte size of a region in web UI
 {code}
 I just tried the new RC for 0.94.  I brought up a 0.94 master on a 0.92 
 cluster and rather than just digest version 1 of the HServerLoad, I get this:
 {code}
 2012-04-14 22:47:59,752 WARN org.apache.hadoop.ipc.HBaseServer: Unable to 
 read call parameters for client 10.4.14.38
 java.io.IOException: Error in readFields
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:684)
 at 
 org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:125)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1269)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1184)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:722)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:513)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: A record version mismatch occured. Expecting v2, found v1
 at 
 org.apache.hadoop.io.VersionedWritable.readFields(VersionedWritable.java:46)
 at 
 org.apache.hadoop.hbase.HServerLoad$RegionLoad.readFields(HServerLoad.java:379)
 at 
 org.apache.hadoop.hbase.HServerLoad.readFields(HServerLoad.java:686)
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:681)
 ... 9 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5733) AssignmentManager#processDeadServersAndRegionsInTransition can fail with NPE.

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254757#comment-13254757
 ] 

Zhihong Yu commented on HBASE-5733:
---

@Uma:
Can you generate a patch for trunk ?
I got the following when I tried to apply your patch to trunk:
{code}
[ERROR] 
/Users/zhihyu/trunk-hbase/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManager.java:[495,75]
 unreported exception com.google.protobuf.ServiceException; must be caught or 
declared to be thrown
{code}

 AssignmentManager#processDeadServersAndRegionsInTransition can fail with NPE.
 -

 Key: HBASE-5733
 URL: https://issues.apache.org/jira/browse/HBASE-5733
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.96.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Attachments: HBASE-5733.patch


 Found while going through the code...
 AssignmentManager#processDeadServersAndRegionsInTransition can fail with NPE 
 as it directly iterates the nodes from 
 listChildrenAndWatchForNewChildren without checking for null.
 Here we also need to handle it with a null check, like in other places.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5104) Provide a reliable intra-row pagination mechanism

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254820#comment-13254820
 ] 

Zhihong Yu commented on HBASE-5104:
---

Patch didn't apply cleanly:
{code}
/usr/bin/patch:  malformed patch at line 285: Index: 
src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
{code}

 Provide a reliable intra-row pagination mechanism
 -

 Key: HBASE-5104
 URL: https://issues.apache.org/jira/browse/HBASE-5104
 Project: HBase
  Issue Type: Bug
Reporter: Kannan Muthukkaruppan
Assignee: Madhuwanti Vaidya
 Attachments: D2799.1.patch, D2799.2.patch, D2799.3.patch, 
 testFilterList.rb


 Addendum:
 Doing pagination (retrieving at most limit number of KVs at a particular 
 offset) is currently supported via the ColumnPaginationFilter. However, it 
 is not a very clean way of supporting pagination.  Some of the problems with 
 it are:
 * Normally, one would expect a query with (Filter(A) AND Filter(B)) to have 
 same results as (query with Filter(A)) INTERSECT (query with Filter(B)). This 
 is not the case for ColumnPaginationFilter as its internal state gets updated 
 depending on whether or not Filter(A) returns TRUE/FALSE for a particular 
 cell.
 * When this Filter is used in combination with other filters (e.g., doing AND 
 with another filter using FilterList), the behavior of the query depends on 
 the order of filters in the FilterList. This is not ideal.
 * ColumnPaginationFilter is a stateful filter which ends up counting multiple 
 versions of the cell as separate values even if another filter upstream or 
 the ScanQueryMatcher is going to reject the value for other reasons.
 Seems like we need a reliable way to do pagination. The particular use case 
 that prompted this JIRA is pagination within the same rowKey. For example, 
 for a given row key R, get columns with prefix P, starting at offset X (among 
 columns which have prefix P) and limit Y. Some possible fixes might be:
 1) enhance ColumnPrefixFilter to support another constructor which supports 
 limit/offset.
 2) Support pagination (limit/offset) at the Scan/Get API level (rather than 
 as a filter) [Like SQL].
 Original Post:
 Thanks Jiakai Liu for reporting this issue and doing the initial 
 investigation. Email from Jiakai below:
 Assuming that we have an index column family with the following entries:
 tag0:001:thread1
 ...
 tag1:001:thread1
 tag1:002:thread2
 ...
 tag1:010:thread10
 ...
 tag2:001:thread1
 tag2:005:thread5
 ...
 To get threads with tag1 in range [5, 10), I tried the following code:
  ColumnPrefixFilter filter1 = new 
  ColumnPrefixFilter(Bytes.toBytes("tag1"));
 ColumnPaginationFilter filter2 = new ColumnPaginationFilter(5 /* limit 
 */, 5 /* offset */);
 FilterList filters = new FilterList(Operator.MUST_PASS_ALL);
 filters.addFilter(filter1);
 filters.addFilter(filter2);
 Get get = new Get(USER);
 get.addFamily(COLUMN_FAMILY);
 get.setMaxVersions(1);
 get.setFilter(filters);
 Somehow it didn't work as expected. It returned the entries as if the filter1 
 were not set.
 Turns out the ColumnPrefixFilter returns SEEK_NEXT_USING_HINT in some cases. 
 The FilterList filter does not handle this return code properly (it treats it as 
 INCLUDE).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5795) hbase-3927 breaks 0.92-0.94 compatibility

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254826#comment-13254826
 ] 

Zhihong Yu commented on HBASE-5795:
---

Will integrate patch v2 in 4 hours if there is no objection.

 hbase-3927 breaks 0.92-0.94 compatibility
 ---

 Key: HBASE-5795
 URL: https://issues.apache.org/jira/browse/HBASE-5795
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.94.0, 0.96.0

 Attachments: 5795-v2.txt, 5795.unittest.txt


 This commit broke our 0.92/0.94 compatibility:
 {code}
 
 r1136686 | stack | 2011-06-16 14:18:08 -0700 (Thu, 16 Jun 2011) | 1 line
 HBASE-3927 display total uncompressed byte size of a region in web UI
 {code}
 I just tried the new RC for 0.94.  I brought up a 0.94 master on a 0.92 
 cluster and rather than just digest version 1 of the HServerLoad, I get this:
 {code}
 2012-04-14 22:47:59,752 WARN org.apache.hadoop.ipc.HBaseServer: Unable to 
 read call parameters for client 10.4.14.38
 java.io.IOException: Error in readFields
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:684)
 at 
 org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:125)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1269)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1184)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:722)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:513)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: A record version mismatch occured. Expecting v2, found v1
 at 
 org.apache.hadoop.io.VersionedWritable.readFields(VersionedWritable.java:46)
 at 
 org.apache.hadoop.hbase.HServerLoad$RegionLoad.readFields(HServerLoad.java:379)
 at 
 org.apache.hadoop.hbase.HServerLoad.readFields(HServerLoad.java:686)
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:681)
 ... 9 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5733) AssignmentManager#processDeadServersAndRegionsInTransition can fail with NPE.

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254841#comment-13254841
 ] 

Zhihong Yu commented on HBASE-5733:
---

Minor comment:
A similar sentence appears 3 times below:
{code}
+  LOG.fatal("Problem in getting the children from ZK. Going to abort");
+  master.abort("Problem in getting the children from ZK", new IOException(
+  "Failed to get the children from ZK"));
+  return;
{code}
Can "Failed to get the children from ZK" be shared ?
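
One way to share the message, sketched as a private constant (the constant name is illustrative):
{code}
private static final String ZK_CHILDREN_ERROR = "Failed to get the children from ZK";

// at each of the three call sites:
LOG.fatal(ZK_CHILDREN_ERROR + ". Going to abort");
master.abort(ZK_CHILDREN_ERROR, new IOException(ZK_CHILDREN_ERROR));
return;
{code}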

 AssignmentManager#processDeadServersAndRegionsInTransition can fail with NPE.
 -

 Key: HBASE-5733
 URL: https://issues.apache.org/jira/browse/HBASE-5733
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.96.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Attachments: HBASE-5733.patch


 Found while going through the code...
 AssignmentManager#processDeadServersAndRegionsInTransition can fail with NPE 
 as it directly iterates the nodes from 
 listChildrenAndWatchForNewChildren without checking for null.
 Here we also need to handle it with a null check, like in other places.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5795) hbase-3927 breaks 0.92-0.94 compatibility

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254873#comment-13254873
 ] 

Zhihong Yu commented on HBASE-5795:
---

I am open in this regard.
Since the 0.92 deserialization code would be stable (RegionLoad format in 0.92 
shouldn't change), I wonder if manual verification is enough.

 hbase-3927 breaks 0.92-0.94 compatibility
 ---

 Key: HBASE-5795
 URL: https://issues.apache.org/jira/browse/HBASE-5795
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Fix For: 0.94.0, 0.96.0

 Attachments: 5795-v2.txt, 5795.unittest.txt


 This commit broke our 0.92/0.94 compatibility:
 {code}
 
 r1136686 | stack | 2011-06-16 14:18:08 -0700 (Thu, 16 Jun 2011) | 1 line
 HBASE-3927 display total uncompressed byte size of a region in web UI
 {code}
 I just tried the new RC for 0.94.  I brought up a 0.94 master on a 0.92 
 cluster and rather than just digest version 1 of the HServerLoad, I get this:
 {code}
 2012-04-14 22:47:59,752 WARN org.apache.hadoop.ipc.HBaseServer: Unable to 
 read call parameters for client 10.4.14.38
 java.io.IOException: Error in readFields
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:684)
 at 
 org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:125)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1269)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1184)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:722)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:513)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: A record version mismatch occured. Expecting v2, found v1
 at 
 org.apache.hadoop.io.VersionedWritable.readFields(VersionedWritable.java:46)
 at 
 org.apache.hadoop.hbase.HServerLoad$RegionLoad.readFields(HServerLoad.java:379)
 at 
 org.apache.hadoop.hbase.HServerLoad.readFields(HServerLoad.java:686)
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:681)
 ... 9 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5747) Forward port hbase-5708 [89-fb] Make MiniMapRedCluster directory a subdirectory of target/test

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254907#comment-13254907
 ] 

Zhihong Yu commented on HBASE-5747:
---

Recent Hadoop QA results, 
https://builds.apache.org/job/PreCommit-HBASE-Build/1538/console as an example, 
show the following:
{code}
 -1 core tests.  The patch failed these unit tests:
{code}
I tried to use my script to find the hanging test but wasn't able to.

 Forward port hbase-5708 [89-fb] Make MiniMapRedCluster directory a 
 subdirectory of target/test
 

 Key: HBASE-5747
 URL: https://issues.apache.org/jira/browse/HBASE-5747
 Project: HBase
  Issue Type: Task
Reporter: stack
Assignee: stack
Priority: Blocker
 Fix For: 0.96.0

 Attachments: 5474.txt, 5474v2.txt, 5474v3 (1).txt, 5474v3.txt, 
 5708v4.txt, 5708v4.txt


 Forward port as much as we can of Mikhail's hard-won test cleanups over on 
 the 0.89 branch. Will improve our being able to run unit tests in parallel. He also 
 found a few bugs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5782) Not all the regions are getting assigned after the log splitting.

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254917#comment-13254917
 ] 

Zhihong Yu commented on HBASE-5782:
---

bq. If this is right, then we should pull back HBASE-4487 or add more locks
Adding more locks would take more time to validate / test.

In order to get 0.94.0 out the door, can we pull back HBASE-4487 in 0.94 and 
pursue the locking approach in trunk (or separate branch) ?

 Not all the regions are getting assigned after the log splitting.
 -

 Key: HBASE-5782
 URL: https://issues.apache.org/jira/browse/HBASE-5782
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.94.0

 Attachments: HBASE-5782.patch


 Create a table with 1000 splits; after the region assignment, kill the 
 regionserver which contains the META table.
 Here a few regions are missing after the log splitting and region assignment. 
 The HBCK report shows that multiple region holes got created.
 The same scenario was verified multiple times in 0.92.1 with no issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5782) Not all the regions are getting assigned after the log splitting.

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254959#comment-13254959
 ] 

Zhihong Yu commented on HBASE-5782:
---

bq. I'd prefer solution that doesn't add a lock to patch something that's 
broken.
I agree.

I suggest the following actions:
1. pull back HBASE-4487 in 0.94 and trunk
2. a) spend major effort on HBASE-5699 (multiple WALs per region server)
2. b) make HBASE-4487 semantically correct

2.a and 2.b can proceed in parallel. But I think HBASE-5699 is the long-term 
solution.

 Not all the regions are getting assigned after the log splitting.
 -

 Key: HBASE-5782
 URL: https://issues.apache.org/jira/browse/HBASE-5782
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.94.0

 Attachments: HBASE-5782.patch


 Create a table with 1000 splits; after the region assignment, kill the 
 regionserver which contains the META table.
 Here a few regions are missing after the log splitting and region assignment. 
 The HBCK report shows that multiple region holes got created.
 The same scenario was verified multiple times in 0.92.1 with no issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5782) Not all the regions are getting assigned after the log splitting.

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254994#comment-13254994
 ] 

Zhihong Yu commented on HBASE-5782:
---

HLog.appendNoSync() is used by 
HRegion.{append|doMiniBatchPut|mutateRowsWithLocks}.
Those methods would be affected when HLog.appendNoSync() is removed.

 Not all the regions are getting assigned after the log splitting.
 -

 Key: HBASE-5782
 URL: https://issues.apache.org/jira/browse/HBASE-5782
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.94.0

 Attachments: HBASE-5782.patch


 Create a table with 1000 splits; after the region assignment, kill the 
 regionserver which contains the META table.
 Here a few regions are missing after the log splitting and region assignment. 
 The HBCK report shows that multiple region holes got created.
 The same scenario was verified multiple times in 0.92.1 with no issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5795) HServerLoad$RegionLoad breaks 0.92-0.94 compatibility

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13255032#comment-13255032
 ] 

Zhihong Yu commented on HBASE-5795:
---

TestReplication failure isn't related to the patch.

Integrated patch v3 to 0.94 and trunk.

Thanks for finding the bug and providing the test, Stack.

Thanks for the review Stack and Lars.

 HServerLoad$RegionLoad breaks 0.92-0.94 compatibility
 ---

 Key: HBASE-5795
 URL: https://issues.apache.org/jira/browse/HBASE-5795
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: Zhihong Yu
 Fix For: 0.94.0, 0.96.0

 Attachments: 5795-v2.txt, 5795-v3.txt, 5795.unittest.txt


 This commit broke our 0.92/0.94 compatibility:
 {code}
 
 r1136686 | stack | 2011-06-16 14:18:08 -0700 (Thu, 16 Jun 2011) | 1 line
 HBASE-3927 display total uncompressed byte size of a region in web UI
 {code}
 I just tried the new RC for 0.94.  I brought up a 0.94 master on a 0.92 
 cluster and rather than just digest version 1 of the HServerLoad, I get this:
 {code}
 2012-04-14 22:47:59,752 WARN org.apache.hadoop.ipc.HBaseServer: Unable to 
 read call parameters for client 10.4.14.38
 java.io.IOException: Error in readFields
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:684)
 at 
 org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:125)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1269)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1184)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:722)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:513)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: A record version mismatch occured. Expecting v2, found v1
 at 
 org.apache.hadoop.io.VersionedWritable.readFields(VersionedWritable.java:46)
 at 
 org.apache.hadoop.hbase.HServerLoad$RegionLoad.readFields(HServerLoad.java:379)
 at 
 org.apache.hadoop.hbase.HServerLoad.readFields(HServerLoad.java:686)
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:681)
 ... 9 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5780) Fix race in HBase regionserver startup vs ZK SASL authentication

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13255100#comment-13255100
 ] 

Zhihong Yu commented on HBASE-5780:
---

I couldn't reproduce the test failure on MacBook.

Integrated to 0.94 again.

 Fix race in HBase regionserver startup vs ZK SASL authentication
 

 Key: HBASE-5780
 URL: https://issues.apache.org/jira/browse/HBASE-5780
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0
Reporter: Shaneal Manek
Assignee: Shaneal Manek
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-5780-v2.patch, HBASE-5780.patch, 
 TestReplicationPeer-Security-output.log, TestReplicationPeer-output.log, 
 testoutput.tar.gz


 Secure RegionServers sometimes fail to start with the following backtrace:
 2012-03-22 17:20:16,737 FATAL 
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 
 centos60-20.ent.cloudera.com,60020,1332462015929: Unexpected exception during 
 initialization, aborting
 org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
 NoAuth for /hbase/shutdown
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1131)
 at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:295)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:518)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch(ZKUtil.java:494)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:569)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:532)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:634)
 at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5780) Fix race in HBase regionserver startup vs ZK SASL authentication

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13255109#comment-13255109
 ] 

Zhihong Yu commented on HBASE-5780:
---

Integrated to 0.92 and trunk as well.

Thanks for the patch, Shaneal.

 Fix race in HBase regionserver startup vs ZK SASL authentication
 

 Key: HBASE-5780
 URL: https://issues.apache.org/jira/browse/HBASE-5780
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0
Reporter: Shaneal Manek
Assignee: Shaneal Manek
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-5780-v2.patch, HBASE-5780.patch, 
 TestReplicationPeer-Security-output.log, TestReplicationPeer-output.log, 
 testoutput.tar.gz


 Secure RegionServers sometimes fail to start with the following backtrace:
 2012-03-22 17:20:16,737 FATAL 
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 
 centos60-20.ent.cloudera.com,60020,1332462015929: Unexpected exception during 
 initialization, aborting
 org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
 NoAuth for /hbase/shutdown
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1131)
 at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:295)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:518)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch(ZKUtil.java:494)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:569)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:532)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:634)
 at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5782) Not all the regions are getting assigned after the log splitting.

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13255121#comment-13255121
 ] 

Zhihong Yu commented on HBASE-5782:
---

Interesting patch.
How do we measure / compare the following combinations:
1. HLog.appendNoSync() used with one sync thread doing flush
2. HLog.appendNoSync() not used, multiple sync threads doing flush in parallel

 Not all the regions are getting assigned after the log splitting.
 -

 Key: HBASE-5782
 URL: https://issues.apache.org/jira/browse/HBASE-5782
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.94.0

 Attachments: 5782.txt, HBASE-5782.patch


 Create a table with 1000 splits; after the region assignment, kill the 
 regionserver which contains the META table.
 Here a few regions are missing after the log splitting and region assignment. 
 The HBCK report shows that multiple region holes got created.
 The same scenario was verified multiple times in 0.92.1 with no issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5780) Fix race in HBase regionserver startup vs ZK SASL authentication

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13255181#comment-13255181
 ] 

Zhihong Yu commented on HBASE-5780:
---

In build #122 (https://builds.apache.org/job/HBase-0.94/122/console), the test 
passed:
{code}
Running org.apache.hadoop.hbase.replication.TestReplicationPeer
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.67 sec
{code}

 Fix race in HBase regionserver startup vs ZK SASL authentication
 

 Key: HBASE-5780
 URL: https://issues.apache.org/jira/browse/HBASE-5780
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0
Reporter: Shaneal Manek
Assignee: Shaneal Manek
 Fix For: 0.92.2, 0.94.0, 0.96.0

 Attachments: HBASE-5780-v2.patch, HBASE-5780.patch, 
 TestReplicationPeer-Security-output.log, TestReplicationPeer-output.log, 
 testoutput.tar.gz


 Secure RegionServers sometimes fail to start with the following backtrace:
 2012-03-22 17:20:16,737 FATAL 
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 
 centos60-20.ent.cloudera.com,60020,1332462015929: Unexpected exception during 
 initialization, aborting
 org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
 NoAuth for /hbase/shutdown
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1131)
 at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:295)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:518)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch(ZKUtil.java:494)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:569)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:532)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:634)
 at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5782) Edits can be appended out of seqid order since HBASE-4487

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13255211#comment-13255211
 ] 

Zhihong Yu commented on HBASE-5782:
---

{code}
+  synchronized (flushLock) {
+List<Entry> pending;

-  // write out all accumulated Entries to hdfs.
-  for (Entry e : pending) {
-writer.append(e);
+synchronized (this) {
{code}
Is the second synchronized needed ? 
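
For reference, the pattern the snippet appears to implement, sketched here with an assumed pendingEntries field (this is not the actual HLog code): flushLock serializes writers doing the flush, while the short synchronized(this) block only swaps out the accumulated entries so appenders are blocked briefly.
{code}
synchronized (flushLock) {
  List<Entry> pending;
  synchronized (this) {
    // Swap the buffer under the object lock so concurrent appenders see a
    // consistent list; keep this critical section as short as possible.
    pending = pendingEntries;
    pendingEntries = new ArrayList<Entry>();
  }
  // Write out all accumulated entries to HDFS outside the object lock.
  for (Entry e : pending) {
    writer.append(e);
  }
}
{code}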

 Edits can be appended out of seqid order since HBASE-4487
 -

 Key: HBASE-5782
 URL: https://issues.apache.org/jira/browse/HBASE-5782
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.94.0
Reporter: Gopinathan A
Assignee: ramkrishna.s.vasudevan
Priority: Blocker
 Fix For: 0.94.0

 Attachments: 5782-sketch.txt, 5782.txt, 5782.unfinished-stack.txt, 
 HBASE-5782.patch


 Create a table with 1000 splits; after the region assignment, kill the 
 regionserver which contains the META table.
 Here a few regions are missing after the log splitting and region assignment. 
 The HBCK report shows that multiple region holes got created.
 The same scenario was verified multiple times in 0.92.1 with no issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5732) Remove the SecureRPCEngine and merge the security-related logic in the core engine

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13255267#comment-13255267
 ] 

Zhihong Yu commented on HBASE-5732:
---

I got some compilation errors:
{code}
[ERROR] 
/Users/zhihyu/trunk-hbase/src/main/java/org/apache/hadoop/hbase/ipc/HBaseClient.java:[66,45]
 package org.apache.hadoop.hbase.security.token does not exist
[ERROR] 
[ERROR] 
/Users/zhihyu/trunk-hbase/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java:[227,8]
 cannot find symbol
[ERROR] symbol  : variable TokenUtil
[ERROR] location: class org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil
[ERROR] 
[ERROR] 
/Users/zhihyu/trunk-hbase/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java:[175,8]
 cannot find symbol
[ERROR] symbol  : variable TokenUtil
[ERROR] location: class org.apache.hadoop.hbase.mapred.TableMapReduceUtil
[ERROR] 
[ERROR] 
/Users/zhihyu/trunk-hbase/src/main/java/org/apache/hadoop/hbase/ipc/HBaseClient.java:[240,51]
 package AuthenticationTokenIdentifier does not exist
[ERROR] 
[ERROR] 
/Users/zhihyu/trunk-hbase/src/main/java/org/apache/hadoop/hbase/ipc/HBaseClient.java:[241,12]
 cannot find symbol
[ERROR] symbol  : class AuthenticationTokenSelector
[ERROR] location: class org.apache.hadoop.hbase.ipc.HBaseClient
{code}
Still going through the big patch.

 Remove the SecureRPCEngine and merge the security-related logic in the core 
 engine
 --

 Key: HBASE-5732
 URL: https://issues.apache.org/jira/browse/HBASE-5732
 Project: HBase
  Issue Type: Improvement
Reporter: Devaraj Das
 Attachments: rpcengine-merge.patch


 Remove the SecureRPCEngine and merge the security-related logic in the core 
 engine. Follow up to HBASE-5727.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5732) Remove the SecureRPCEngine and merge the security-related logic in the core engine

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13255281#comment-13255281
 ] 

Zhihong Yu commented on HBASE-5732:
---

I put the patch on review board, below are comments for the first page.
For disposeSasl():
{code}
+} catch (IOException ioe) {
+  LOG.info("Error disposing of SASL client", ioe);
+}
{code}
The above log should be at error level.
{code}
+private synchronized boolean setupSaslConnection(final InputStream in2,
+final OutputStream out2)
+throws IOException {
{code}
The 'throws' should be on the same line as parameter 'out2'.
For handleSaslConnectionFailure():
{code}
+  if (ex instanceof RemoteException)
+throw (RemoteException)ex;
+  throw new IOException(ex);
{code}
I think the if statement should check for IOException so that we don't create 
IOException wrapping ex, another IOException.
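
A minimal sketch of the rethrow logic being suggested (the method and class names here are illustrative, not the patch's code):
{code}
import java.io.IOException;

// Sketch: rethrow IOExceptions (including RemoteException, which extends
// IOException) as-is and only wrap non-IOException causes, so an
// IOException is never wrapped inside another IOException.
final class SaslFailureRethrowSketch {
  static void rethrow(Exception ex) throws IOException {
    if (ex instanceof IOException) {
      throw (IOException) ex;
    }
    throw new IOException(ex);
  }
}
{code}
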
{code}
+private void writeHeader() throws IOException {
+  // Write out the ConnectionHeader
+  out.writeInt(header.getSerializedSize());
+  header.writeTo(out);
+}
{code}
Do we need to call out.flush() at the end of the above method ?
{code}
+  if (closeException == null) {
+if (!calls.isEmpty()) {
+  LOG.warn(
+      "A connection is closed for no cause and calls are not empty");
+
+  // clean up calls anyway
+  closeException = new IOException("Unexpected closed connection");
{code}
Should we record the size of calls in the above warning and IOE ?
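
A small sketch of how the warning and exception could carry the number of outstanding calls (the field names are assumptions):
{code}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: include calls.size() in both the log message and the IOException
// so an unexpected connection close is easier to diagnose.
class ConnectionCloseSketch {
  private final Map<Integer, Object> calls = new ConcurrentHashMap<>();
  private IOException closeException;

  void markClosedUnexpectedly() {
    if (closeException == null && !calls.isEmpty()) {
      String msg = "Connection closed for no cause with " + calls.size()
          + " outstanding call(s)";
      System.err.println("WARN: " + msg);   // stand-in for LOG.warn
      closeException = new IOException(msg);
    }
  }
}
{code}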

 Remove the SecureRPCEngine and merge the security-related logic in the core 
 engine
 --

 Key: HBASE-5732
 URL: https://issues.apache.org/jira/browse/HBASE-5732
 Project: HBase
  Issue Type: Improvement
Reporter: Devaraj Das
 Attachments: rpcengine-merge.patch


 Remove the SecureRPCEngine and merge the security-related logic in the core 
 engine. Follow up to HBASE-5727.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5732) Remove the SecureRPCEngine and merge the security-related logic in the core engine

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13255285#comment-13255285
 ] 

Zhihong Yu commented on HBASE-5732:
---

Thanks for the hint about compilation, Devaraj.
Would it make sense to change the security profile to the default profile (the insecure 
build doesn't compile) ?

For HBaseServer.setResponse():
{code}
+long hint = ohint.getWritableSize() + Bytes.SIZEOF_INT + 
Bytes.SIZEOF_INT;
{code}
The two Bytes.SIZEOF_INT can be written as Bytes.SIZEOF_INT*2.
{code}
-builder.setError(error != null);
+//builder.setStatus(
{code}
The above comment can be removed.
{code}
-  ByteBuffer bb = buf.getByteBuffer();
-  bb.position(0);
-  this.response = bb;
+  this.response = buf.getByteBuffer();
{code}
Why was the position(0) call removed ?
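
The concern can be illustrated with plain NIO semantics: after writing, a ByteBuffer's position sits at the end of the data, so a reader that starts at the current position sees nothing. A hedged sketch (this is not the HBaseServer code):
{code}
import java.nio.ByteBuffer;

// Sketch: why an explicit rewind (position(0) or flip()) can matter before a
// freshly written buffer is handed off to a reader.
class ByteBufferRewindSketch {
  public static void main(String[] args) {
    ByteBuffer buf = ByteBuffer.allocate(16);
    buf.put("response".getBytes());

    System.out.println("position before rewind: " + buf.position()); // 8
    buf.flip();   // or buf.position(0) with an explicit limit
    System.out.println("position after flip: " + buf.position());    // 0
    System.out.println("readable bytes: " + buf.remaining());        // 8
  }
}
{code}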

 Remove the SecureRPCEngine and merge the security-related logic in the core 
 engine
 --

 Key: HBASE-5732
 URL: https://issues.apache.org/jira/browse/HBASE-5732
 Project: HBase
  Issue Type: Improvement
Reporter: Devaraj Das
 Attachments: rpcengine-merge.patch


 Remove the SecureRPCEngine and merge the security-related logic in the core 
 engine. Follow up to HBASE-5727.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5732) Remove the SecureRPCEngine and merge the security-related logic in the core engine

2012-04-16 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13255320#comment-13255320
 ] 

Zhihong Yu commented on HBASE-5732:
---

On review board, it is obvious where white spaces are introduced.

{code}
+private void wrapWithSasl(ByteBufferOutputStream response)
+throws IOException {
+  if (connection.useSasl) {
{code}
I suggest checking !connection.useSasl so that we can return early - this is 
minor.
{code}
+private void saslReadAndProcess(byte[] saslToken) throws IOException,
+InterruptedException {
+  if (!saslContextEstablished) {
{code}
The else branch starting at line 1313 is much shorter than the if branch. 
Consider handling the saslContextEstablished case first and return. This would 
save indentation for the !saslContextEstablished case.
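
A minimal sketch of the early-return shape being suggested (all names are illustrative stand-ins, not the patch's code):
{code}
import java.io.IOException;

// Sketch: handle the short "context already established" case first and
// return, so the longer negotiation branch avoids an extra nesting level.
class SaslReadSketch {
  private boolean saslContextEstablished;

  void saslReadAndProcess(byte[] saslToken) throws IOException {
    if (saslContextEstablished) {
      processUnwrappedData(saslToken);
      return;
    }
    continueNegotiation(saslToken);
  }

  private void processUnwrappedData(byte[] data) throws IOException { /* ... */ }
  private void continueNegotiation(byte[] token) throws IOException { /* ... */ }
}
{code}
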
{code}
+private void disposeSasl() {
+  if (saslServer != null) {
+try {
+  saslServer.dispose();
{code}
Please assign null to saslServer after the dispose() call.
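
A sketch of that suggestion (field and method names follow the snippet above, but this is not the patch itself):
{code}
import javax.security.sasl.SaslException;
import javax.security.sasl.SaslServer;

// Sketch: null out the reference after dispose() so a disposed server
// cannot be reused accidentally and can be garbage collected.
class DisposeSaslSketch {
  private SaslServer saslServer;

  void disposeSasl() {
    if (saslServer != null) {
      try {
        saslServer.dispose();
      } catch (SaslException ignored) {
        // best-effort cleanup; nothing useful to do here
      } finally {
        saslServer = null;
      }
    }
  }
}
{code}
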
In readAndProcess():
{code}
+  if (dataLength < 0) {
+    LOG.warn("Unexpected data length " + dataLength + "!! from " +
+      getHostAddress());
+  }
   data = ByteBuffer.allocate(dataLength);
{code}
When dataLength is negative, the allocate() call would throw 
IllegalArgumentException. It would be nice to change the above LOG.warn() into 
IllegalArgumentException.
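
A sketch of the suggested change (helper and parameter names are illustrative):
{code}
import java.nio.ByteBuffer;

// Sketch: fail fast with a descriptive IllegalArgumentException instead of
// only logging a warning and letting ByteBuffer.allocate() throw a bare one.
class DataLengthCheckSketch {
  static ByteBuffer allocateForCall(int dataLength, String hostAddress) {
    if (dataLength < 0) {
      throw new IllegalArgumentException(
          "Unexpected data length " + dataLength + " from " + hostAddress);
    }
    return ByteBuffer.allocate(dataLength);
  }
}
{code}
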
{code}
+TokenUtil.obtainTokenForJob(job,UserGroupInformation.getCurrentUser());
{code}
Please add a space after comma.

 Remove the SecureRPCEngine and merge the security-related logic in the core 
 engine
 --

 Key: HBASE-5732
 URL: https://issues.apache.org/jira/browse/HBASE-5732
 Project: HBase
  Issue Type: Improvement
Reporter: Devaraj Das
 Attachments: rpcengine-merge.patch


 Remove the SecureRPCEngine and merge the security-related logic in the core 
 engine. Follow up to HBASE-5727.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-2214) Do HBASE-1996 -- setting size to return in scan rather than count of rows -- properly

2012-04-15 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254334#comment-13254334
 ] 

Zhihong Yu commented on HBASE-2214:
---

The 'hbase' group should be included in each review request.
Normally a patch for trunk is generated first and gets reviewed on review board. 
Hadoop QA checks the patch out against trunk.

 Do HBASE-1996 -- setting size to return in scan rather than count of rows -- 
 properly
 -

 Key: HBASE-2214
 URL: https://issues.apache.org/jira/browse/HBASE-2214
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Ferdy Galema
 Fix For: 0.94.1

 Attachments: HBASE-2214-0.94.txt, HBASE-2214_with_broken_TestShell.txt


 The notion that you set size rather than row count specifying how many rows a 
 scanner should return in each cycle was raised over in hbase-1966.  Its a 
 good one making hbase regular though the data under it may vary.  
 HBase-1966 was committed but the patch was constrained by the fact that it 
 needed to not change RPC interface.  This issue is about doing hbase-1966 for 
 0.21 in a clean, unconstrained way.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-2214) Do HBASE-1996 -- setting size to return in scan rather than count of rows -- properly

2012-04-15 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254335#comment-13254335
 ] 

Zhihong Yu commented on HBASE-2214:
---

I went over https://reviews.apache.org/r/4726 which looks clean.

If adding a unit test is difficult, can you perform verification on a real cluster 
and let us know the results ?

 Do HBASE-1996 -- setting size to return in scan rather than count of rows -- 
 properly
 -

 Key: HBASE-2214
 URL: https://issues.apache.org/jira/browse/HBASE-2214
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Ferdy Galema
 Fix For: 0.94.1

 Attachments: HBASE-2214-0.94.txt, HBASE-2214_with_broken_TestShell.txt


 The notion that you set size rather than row count specifying how many rows a 
 scanner should return in each cycle was raised over in hbase-1966.  Its a 
 good one making hbase regular though the data under it may vary.  
 HBase-1966 was committed but the patch was constrained by the fact that it 
 needed to not change RPC interface.  This issue is about doing hbase-1966 for 
 0.21 in a clean, unconstrained way.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5787) Table owner can't disable/delete its own table

2012-04-15 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254336#comment-13254336
 ] 

Zhihong Yu commented on HBASE-5787:
---

Thanks for the update.
Can you combine the two patches and run through tests again, including the 
following two ?
{code}
security/src/test//java/org/apache/hadoop/hbase/security/token/TestTokenAuthentication.java
security/src/test//java/org/apache/hadoop/hbase/security/token/TestZKSecretWatcher.java
{code}

 Table owner can't disable/delete its own table
 --

 Key: HBASE-5787
 URL: https://issues.apache.org/jira/browse/HBASE-5787
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
  Labels: acl, security
 Attachments: HBASE-5787-tests-wrong-names.patch, HBASE-5787-v0.patch


 An user with CREATE privileges can create a table, but can not disable it, 
 because disable operation require ADMIN privileges. Also if a table is 
 already disabled, anyone can remove it.
 {code}
 public void preDeleteTable(ObserverContext<MasterCoprocessorEnvironment> c,
 byte[] tableName) throws IOException {
   requirePermission(Permission.Action.CREATE);
 }
 public void preDisableTable(ObserverContext<MasterCoprocessorEnvironment> c,
 byte[] tableName) throws IOException {
   /* TODO: Allow for users with global CREATE permission and the table owner 
 */
   requirePermission(Permission.Action.ADMIN);
 }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5793) TestHBaseFsck#TestNoHdfsTable test hangs after HBASE-5747

2012-04-15 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254369#comment-13254369
 ] 

Zhihong Yu commented on HBASE-5793:
---

TestHBaseFsck passes in PreCommit build #1530.
But there still was some other hanging test(s).

 TestHBaseFsck#TestNoHdfsTable test hangs after HBASE-5747
 -

 Key: HBASE-5793
 URL: https://issues.apache.org/jira/browse/HBASE-5793
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.96.0
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Attachments: hbase-5793.patch


 After the HBASE-5747 modification, this one particular case hangs.
 {code}
 mvn test -PlocalTests -Dtest=TestHBaseFsck
 {code}
 It was hanging on a scan of a table that the test deleted. It expected a call 
 to throw an exception after a timeout.  HBASE-5747 changed the timeout to a 
 larger number of retries, which caused mvn to fail the test.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5792) HLog Performance Evaluation Tool

2012-04-15 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254371#comment-13254371
 ] 

Zhihong Yu commented on HBASE-5792:
---

Interesting tool.
{code}
+region = HRegion.createHRegion(regionInfo, regionRootDir, getConf(), htd);
{code}
I don't see where the region is closed.
There should be some cleanup method that runs after the benchmarking is completed.
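
A hedged sketch of the cleanup shape being asked for; RegionLike is a stand-in interface, not the HRegion API:
{code}
// Sketch: whatever the benchmark opens should be closed when it finishes,
// even if a run fails part-way through.
class BenchmarkCleanupSketch {
  interface RegionLike extends AutoCloseable { }

  void runBenchmark(RegionLike region) throws Exception {
    try {
      // ... append edits to the log and time the runs ...
    } finally {
      region.close();   // cleanup happens regardless of benchmark outcome
    }
  }
}
{code}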

 HLog Performance Evaluation Tool
 

 Key: HBASE-5792
 URL: https://issues.apache.org/jira/browse/HBASE-5792
 Project: HBase
  Issue Type: Test
  Components: wal
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
  Labels: performance, wal
 Attachments: HBASE-5792-v0.patch


 Related to HDFS-3280 and the HBase WAL slowdown on 0.23+
 It would be nice to have a simple tool like HFilePerformanceEvaluation, ...
 to be able to check easily the HLog performance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5792) HLog Performance Evaluation Tool

2012-04-15 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254426#comment-13254426
 ] 

Zhihong Yu commented on HBASE-5792:
---

Minor comments:
{code}
import org.apache.hadoop.conf.Configuration;
{code}
The above import was not used.
{code}
for (Thread t : threads) t.join();
{code}
join() may throw InterruptedException. Shall we catch it and proceed to the 
next Thread to be joined ?
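
A minimal sketch of that suggestion (not the tool's code):
{code}
import java.util.List;

// Sketch: catch InterruptedException per thread, restore the interrupt flag,
// and keep joining the remaining threads rather than aborting the loop.
class JoinAllSketch {
  static void joinAll(List<Thread> threads) {
    for (Thread t : threads) {
      try {
        t.join();
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();  // preserve interrupt status
        // proceed to the next thread instead of propagating
      }
    }
  }
}
{code}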

 HLog Performance Evaluation Tool
 

 Key: HBASE-5792
 URL: https://issues.apache.org/jira/browse/HBASE-5792
 Project: HBase
  Issue Type: Test
  Components: wal
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
  Labels: performance, wal
 Attachments: HBASE-5792-v0.patch, HBASE-5792-v1.patch


 Related to HDFS-3280 and the HBase WAL slowdown on 0.23+
 It would be nice to have a simple tool like HFilePerformanceEvaluation, ...
 to be able to check easily the HLog performance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5792) HLog Performance Evaluation Tool

2012-04-15 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254436#comment-13254436
 ] 

Zhihong Yu commented on HBASE-5792:
---

I wonder if it makes sense to persist benchmark results to HBase.
That may show us some trend w.r.t. HLog performance.

 HLog Performance Evaluation Tool
 

 Key: HBASE-5792
 URL: https://issues.apache.org/jira/browse/HBASE-5792
 Project: HBase
  Issue Type: Test
  Components: wal
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
  Labels: performance, wal
 Attachments: HBASE-5792-v0.patch, HBASE-5792-v1.patch


 Related to HDFS-3280 and the HBase WAL slowdown on 0.23+
 It would be nice to have a simple tool like HFilePerformanceEvaluation, ...
 to be able to check easily the HLog performance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5796) Fix our abuse of IOE: see http://blog.tsunanet.net/2012/04/apache-hadoop-abuse-ioexception.html

2012-04-15 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254476#comment-13254476
 ] 

Zhihong Yu commented on HBASE-5796:
---

bq. all in a {{HadoopIOException}}, which would inherit from {{HBaseException}}.
I am afraid that some people would not feel comfortable with the above 
inheritance.

 Fix our abuse of IOE: see 
 http://blog.tsunanet.net/2012/04/apache-hadoop-abuse-ioexception.html
 ---

 Key: HBASE-5796
 URL: https://issues.apache.org/jira/browse/HBASE-5796
 Project: HBase
  Issue Type: Task
Reporter: stack

 Lets make more context particular exceptions rather than throw IOEs 
 everywhere.  See Benoît's rant: 
 http://blog.tsunanet.net/2012/04/apache-hadoop-abuse-ioexception.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5780) Fix race in HBase regionserver startup vs ZK SASL authentication

2012-04-14 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254114#comment-13254114
 ] 

Zhihong Yu commented on HBASE-5780:
---

Integrated to 0.94 branch.

Waiting for 0.92 and trunk builds to pass before further integration.

Thanks for the patch, Shaneal.

 Fix race in HBase regionserver startup vs ZK SASL authentication
 

 Key: HBASE-5780
 URL: https://issues.apache.org/jira/browse/HBASE-5780
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0
Reporter: Shaneal Manek
Assignee: Shaneal Manek
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-5780-v2.patch, HBASE-5780.patch, testoutput.tar.gz


 Secure RegionServers sometimes fail to start with the following backtrace:
 2012-03-22 17:20:16,737 FATAL 
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 
 centos60-20.ent.cloudera.com,60020,1332462015929: Unexpected exception during 
 initialization, aborting
 org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
 NoAuth for /hbase/shutdown
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1131)
 at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:295)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:518)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch(ZKUtil.java:494)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:569)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:532)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:634)
 at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-2214) Do HBASE-1996 -- setting size to return in scan rather than count of rows -- properly

2012-04-14 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254116#comment-13254116
 ] 

Zhihong Yu commented on HBASE-2214:
---

{code}
+   * @return the maximum buffer size in bytes. Also see 
+   * {@link #setMaxBufferSize(int)}
{code}
'Also see' -> 'See also'. The parameter type for setMaxBufferSize() should be 
long.
Since line length is 100 now, you can combine the two lines together.

On review board, it is clear to see the white spaces in the patch.

Trunk build is currently broken, please submit patch for Hadoop QA when trunk 
build passes.

 Do HBASE-1996 -- setting size to return in scan rather than count of rows -- 
 properly
 -

 Key: HBASE-2214
 URL: https://issues.apache.org/jira/browse/HBASE-2214
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Ferdy Galema
 Fix For: 0.94.0

 Attachments: HBASE-2214-0.94.txt, HBASE-2214_with_broken_TestShell.txt


 The notion that you set size rather than row count specifying how many rows a 
 scanner should return in each cycle was raised over in hbase-1966.  Its a 
 good one making hbase regular though the data under it may vary.  
 HBase-1966 was committed but the patch was constrained by the fact that it 
 needed to not change RPC interface.  This issue is about doing hbase-1966 for 
 0.21 in a clean, unconstrained way.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5780) Fix race in HBase regionserver startup vs ZK SASL authentication

2012-04-14 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254132#comment-13254132
 ] 

Zhihong Yu commented on HBASE-5780:
---

TestReplicationPeer#testResetZooKeeperSession failed on Jenkins and locally (on 
MacBook).
I reverted the patch from 0.94 for further investigation.

 Fix race in HBase regionserver startup vs ZK SASL authentication
 

 Key: HBASE-5780
 URL: https://issues.apache.org/jira/browse/HBASE-5780
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0
Reporter: Shaneal Manek
Assignee: Shaneal Manek
 Fix For: 0.92.2, 0.96.0, 0.94.1

 Attachments: HBASE-5780-v2.patch, HBASE-5780.patch, testoutput.tar.gz


 Secure RegionServers sometimes fail to start with the following backtrace:
 2012-03-22 17:20:16,737 FATAL 
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 
 centos60-20.ent.cloudera.com,60020,1332462015929: Unexpected exception during 
 initialization, aborting
 org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
 NoAuth for /hbase/shutdown
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1131)
 at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:295)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:518)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch(ZKUtil.java:494)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:569)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:532)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:634)
 at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5778) Turn on WAL compression by default

2012-04-14 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254136#comment-13254136
 ] 

Zhihong Yu commented on HBASE-5778:
---

I think ReplicationSource now has the additional responsibility of shipping 
dictionaries to replication sink.
We just need to find a clean way of exposing 
SequenceFileLogWriter.compressionContext to ReplicationSource.

 Turn on WAL compression by default
 --

 Key: HBASE-5778
 URL: https://issues.apache.org/jira/browse/HBASE-5778
 Project: HBase
  Issue Type: Improvement
Reporter: Jean-Daniel Cryans
Assignee: Lars Hofhansl
Priority: Blocker
 Fix For: 0.96.0, 0.94.1

 Attachments: 5778-addendum.txt, 5778.addendum, HBASE-5778.patch


 I ran some tests to verify if WAL compression should be turned on by default.
 For a use case where it's not very useful (values two orders of magnitude 
 bigger than the keys), the insert time wasn't different and the CPU usage was 
 15% higher (150% CPU usage VS 130% when not compressing the WAL).
 When values are smaller than the keys, I saw a 38% improvement for the insert 
 run time and CPU usage was 33% higher (600% CPU usage VS 450%). I'm not sure 
 WAL compression accounts for all the additional CPU usage, it might just be 
 that we're able to insert faster and we spend more time in the MemStore per 
 second (because our MemStores are bad when they contain tens of thousands of 
 values).
 Those are two extremes, but it shows that for the price of some CPU we can 
 save a lot. My machines have 2 quads with HT, so I still had a lot of idle 
 CPUs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5747) Forward port hbase-5708 [89-fb] Make MiniMapRedCluster directory a subdirectory of target/test

2012-04-14 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254188#comment-13254188
 ] 

Zhihong Yu commented on HBASE-5747:
---

In build 2757, TestSchemaMetrics hung:
{code}
Running org.apache.hadoop.hbase.regionserver.metrics.TestSchemaMetrics
Running org.apache.hadoop.hbase.regionserver.TestParallelPut
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.674 sec
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.674 sec
{code}

 Forward port hbase-5708 [89-fb] Make MiniMapRedCluster directory a 
 subdirectory of target/test
 

 Key: HBASE-5747
 URL: https://issues.apache.org/jira/browse/HBASE-5747
 Project: HBase
  Issue Type: Task
Reporter: stack
Assignee: stack
Priority: Blocker
 Fix For: 0.96.0

 Attachments: 5474.txt, 5474v2.txt, 5474v3 (1).txt, 5474v3.txt, 
 5708v4.txt, 5708v4.txt


 Forward port as much as we can of Mikhail's hard-won test cleanups over on 
 0.89 branch  Will improve our being able to run unit tests in //.  He also 
 found a few bugs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5256) Use WritableUtils.readVInt() in RegionLoad.readFields()

2012-04-14 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254228#comment-13254228
 ] 

Zhihong Yu commented on HBASE-5256:
---

I agree this is not needed for 0.94

 Use WritableUtils.readVInt() in RegionLoad.readFields()
 ---

 Key: HBASE-5256
 URL: https://issues.apache.org/jira/browse/HBASE-5256
 Project: HBase
  Issue Type: Task
Reporter: Zhihong Yu
Assignee: Mubarak Seyed
 Fix For: 0.94.0

 Attachments: HBASE-5256.trunk.v1.patch


 Currently in.readInt() is used in RegionLoad.readFields()
 More metrics would be added to RegionLoad in the future, we should utilize 
 WritableUtils.readVInt() to reduce the amount of data exchanged between 
 Master and region servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5795) hbase-3927 breaks 0.92-0.94 compatibility

2012-04-14 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254243#comment-13254243
 ] 

Zhihong Yu commented on HBASE-5795:
---

+1 on introducing HSL92 for backward compatibility. 

 hbase-3927 breaks 0.92-0.94 compatibility
 ---

 Key: HBASE-5795
 URL: https://issues.apache.org/jira/browse/HBASE-5795
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: stack
 Attachments: 5794.txt


 This commit broke our 0.92/0.94 compatibility:
 {code}
 
 r1136686 | stack | 2011-06-16 14:18:08 -0700 (Thu, 16 Jun 2011) | 1 line
 HBASE-3927 display total uncompressed byte size of a region in web UI
 {code}
 I just tried the new RC for 0.94.  I brought up a 0.94 master on a 0.92 
 cluster and rather than just digest version 1 of the HServerLoad, I get this:
 {code}
 2012-04-14 22:47:59,752 WARN org.apache.hadoop.ipc.HBaseServer: Unable to 
 read call parameters for client 10.4.14.38
 java.io.IOException: Error in readFields
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:684)
 at 
 org.apache.hadoop.hbase.ipc.Invocation.readFields(Invocation.java:125)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:1269)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:1184)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:722)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:513)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:488)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: A record version mismatch occured. Expecting v2, found v1
 at 
 org.apache.hadoop.io.VersionedWritable.readFields(VersionedWritable.java:46)
 at 
 org.apache.hadoop.hbase.HServerLoad$RegionLoad.readFields(HServerLoad.java:379)
 at 
 org.apache.hadoop.hbase.HServerLoad.readFields(HServerLoad.java:686)
 at 
 org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:681)
 ... 9 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5620) Convert the client protocol of HRegionInterface to PB

2012-04-14 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254258#comment-13254258
 ] 

Zhihong Yu commented on HBASE-5620:
---

Addendum makes sense.

TestAccessControlFilter passes with the addendum.
Whole security test suite should be run.

 Convert the client protocol of HRegionInterface to PB
 -

 Key: HBASE-5620
 URL: https://issues.apache.org/jira/browse/HBASE-5620
 Project: HBase
  Issue Type: Sub-task
  Components: ipc, master, migration, regionserver
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: hbase-5620-sec.patch, hbase-5620_v3.patch, 
 hbase-5620_v4.patch, hbase-5620_v4.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5256) Use WritableUtils.readVInt() in RegionLoad.readFields()

2012-04-14 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13254267#comment-13254267
 ] 

Zhihong Yu commented on HBASE-5256:
---

See discussion on HBASE-5795.

 Use WritableUtils.readVInt() in RegionLoad.readFields()
 ---

 Key: HBASE-5256
 URL: https://issues.apache.org/jira/browse/HBASE-5256
 Project: HBase
  Issue Type: Task
Reporter: Zhihong Yu
Assignee: Mubarak Seyed
 Fix For: 0.94.0

 Attachments: HBASE-5256.trunk.v1.patch


 Currently in.readInt() is used in RegionLoad.readFields()
 More metrics would be added to RegionLoad in the future, we should utilize 
 WritableUtils.readVInt() to reduce the amount of data exchanged between 
 Master and region servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2012-04-13 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253564#comment-13253564
 ] 

Zhihong Yu commented on HBASE-1936:
---

The patch is of decent size. Please upload to review board.

Why does callIsATriggeringClass() use reflection to call isATriggeringClass() ?

Can you add some tests ?

Thanks

 ClassLoader that loads from hdfs; useful adding filters to classpath without 
 having to restart services
 ---

 Key: HBASE-1936
 URL: https://issues.apache.org/jira/browse/HBASE-1936
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Jieshan Bean
  Labels: noob
 Attachments: HBASE-1936-trunk(forReview).patch, cp_from_hdfs.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5604) M/R tool to replay WAL files

2012-04-13 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253607#comment-13253607
 ] 

Zhihong Yu commented on HBASE-5604:
---

Looking at test failure reported by Hadoop QA: 
https://builds.apache.org/job/PreCommit-HBASE-Build/1514//testReport/org.apache.hadoop.hbase.mapreduce/TestWALPlayer/testTimeFormat/
{code}
java.lang.AssertionError: expected:<1334092861001> but was:<1334067661001>
{code}
I wonder if timezone could be an issue here - the difference is 7 hours.

If you don't want to involve a call such as 
setTimeZone(TimeZone.getTimeZone("America/Los_Angeles")), please comment out:
{code}
assertEquals(1334092861001L, conf.getLong(HLogInputFormat.END_TIME_KEY, 0));
{code}
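
One way to make such an assertion deterministic is to pin the formatter's time zone before parsing the expected boundary; a hedged sketch (not the actual TestWALPlayer code, and the date below is just an example input):
{code}
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.TimeZone;

// Sketch: parse the expected time with an explicit time zone so the result
// does not depend on the zone of the machine running the test.
class FixedZoneParseSketch {
  public static void main(String[] args) throws ParseException {
    SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
    sdf.setTimeZone(TimeZone.getTimeZone("America/Los_Angeles"));
    long endTime = sdf.parse("2012-04-10T14:21:01").getTime();
    System.out.println(endTime);  // stable regardless of the local zone
  }
}
{code}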

 M/R tool to replay WAL files
 

 Key: HBASE-5604
 URL: https://issues.apache.org/jira/browse/HBASE-5604
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.0, 0.96.0

 Attachments: 5604-v10.txt, 5604-v11.txt, 5604-v4.txt, 5604-v6.txt, 
 5604-v7.txt, 5604-v8.txt, 5604-v9.txt, HLog-5604-v3.txt


 Just an idea I had. Might be useful for restore of a backup using the HLogs.
 This could an M/R (with a mapper per HLog file).
 The tool would get a timerange and a (set of) table(s). We'd pick the right 
 HLogs based on time before the M/R job is started and then have a mapper per 
 HLog file.
 The mapper would then go through the HLog, filter all WALEdits that didn't 
 fit into the time range or are not any of the tables and then uses 
 HFileOutputFormat to generate HFiles.
 Would need to indicate the splits we want, probably from a live table.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5620) Convert the client protocol of HRegionInterface to PB

2012-04-13 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253781#comment-13253781
 ] 

Zhihong Yu commented on HBASE-5620:
---

Looks like trunk build failed with:
{code}
[ERROR] Failed to execute goal org.apache.rat:apache-rat-plugin:0.8:check 
(default) on project hbase: Too many unapproved licenses: 1 - [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.rat:apache-rat-plugin:0.8:check (default) on project hbase: Too many 
unapproved licenses: 1
{code}


 Convert the client protocol of HRegionInterface to PB
 -

 Key: HBASE-5620
 URL: https://issues.apache.org/jira/browse/HBASE-5620
 Project: HBase
  Issue Type: Sub-task
  Components: ipc, master, migration, regionserver
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: hbase-5620_v3.patch, hbase-5620_v4.patch, 
 hbase-5620_v4.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5604) M/R tool to replay WAL files

2012-04-13 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253797#comment-13253797
 ] 

Zhihong Yu commented on HBASE-5604:
---

See the following code from 
http://stackoverflow.com/questions/2375222/java-simpledateformat-for-time-zone-with-a-colon-seperator:
{code}
String dateString = "2010-03-01T00:00:00-08:00";
String pattern = "yyyy-MM-dd'T'HH:mm:ss";
SimpleDateFormat sdf = new SimpleDateFormat(pattern);
Date date = sdf.parse(dateString);
{code}

 M/R tool to replay WAL files
 

 Key: HBASE-5604
 URL: https://issues.apache.org/jira/browse/HBASE-5604
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.94.0, 0.96.0

 Attachments: 5604-v10.txt, 5604-v11.txt, 5604-v4.txt, 5604-v6.txt, 
 5604-v7.txt, 5604-v8.txt, 5604-v9.txt, HLog-5604-v3.txt


 Just an idea I had. Might be useful for restore of a backup using the HLogs.
 This could an M/R (with a mapper per HLog file).
 The tool would get a timerange and a (set of) table(s). We'd pick the right 
 HLogs based on time before the M/R job is started and then have a mapper per 
 HLog file.
 The mapper would then go through the HLog, filter all WALEdits that didn't 
 fit into the time range or are not any of the tables and then uses 
 HFileOutputFormat to generate HFiles.
 Would need to indicate the splits we want, probably from a live table.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5620) Convert the client protocol of HRegionInterface to PB

2012-04-13 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253802#comment-13253802
 ] 

Zhihong Yu commented on HBASE-5620:
---

Here is the command Jenkins used:
{code}
[trunk] $ /home/hudson/tools/maven/latest3/bin/mvn -e -X clean 
-Dmaven.test.redirectTestOutputToFile=true site install assembly:single 
-DskipITs -Prelease
{code}

 Convert the client protocol of HRegionInterface to PB
 -

 Key: HBASE-5620
 URL: https://issues.apache.org/jira/browse/HBASE-5620
 Project: HBase
  Issue Type: Sub-task
  Components: ipc, master, migration, regionserver
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: hbase-5620_v3.patch, hbase-5620_v4.patch, 
 hbase-5620_v4.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5780) Fix race in HBase regionserver startup vs ZK SASL authentication

2012-04-13 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253914#comment-13253914
 ] 

Zhihong Yu commented on HBASE-5780:
---

I don't see attachment, for now.

 Fix race in HBase regionserver startup vs ZK SASL authentication
 

 Key: HBASE-5780
 URL: https://issues.apache.org/jira/browse/HBASE-5780
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0
Reporter: Shaneal Manek
Assignee: Shaneal Manek
 Attachments: HBASE-5780-v2.patch, HBASE-5780.patch, testoutput.tar.gz


 Secure RegionServers sometimes fail to start with the following backtrace:
 2012-03-22 17:20:16,737 FATAL 
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 
 centos60-20.ent.cloudera.com,60020,1332462015929: Unexpected exception during 
 initialization, aborting
 org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
 NoAuth for /hbase/shutdown
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1131)
 at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:295)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:518)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch(ZKUtil.java:494)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:569)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:532)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:634)
 at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5780) Fix race in HBase regionserver startup vs ZK SASL authentication

2012-04-13 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253916#comment-13253916
 ] 

Zhihong Yu commented on HBASE-5780:
---

Test results look good.
Will integrate tomorrow if there is no objection.

 Fix race in HBase regionserver startup vs ZK SASL authentication
 

 Key: HBASE-5780
 URL: https://issues.apache.org/jira/browse/HBASE-5780
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.92.1, 0.94.0
Reporter: Shaneal Manek
Assignee: Shaneal Manek
 Fix For: 0.96.0, 0.94.1

 Attachments: HBASE-5780-v2.patch, HBASE-5780.patch, testoutput.tar.gz


 Secure RegionServers sometimes fail to start with the following backtrace:
 2012-03-22 17:20:16,737 FATAL 
 org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server 
 centos60-20.ent.cloudera.com,60020,1332462015929: Unexpected exception during 
 initialization, aborting
 org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = 
 NoAuth for /hbase/shutdown
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
 at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1131)
 at 
 org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:295)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataInternal(ZKUtil.java:518)
 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getDataAndWatch(ZKUtil.java:494)
 at 
 org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:569)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:532)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:634)
 at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5776) HTableMultiplexer

2012-04-13 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253976#comment-13253976
 ] 

Zhihong Yu commented on HBASE-5776:
---

Thanks for the explanation, Kannan.

Since the Multiplexer isn't tied to any single table and it may support gets 
in the future, shall we remove the 'Table' from the class name ?

 HTableMultiplexer 
 --

 Key: HBASE-5776
 URL: https://issues.apache.org/jira/browse/HBASE-5776
 Project: HBase
  Issue Type: Improvement
Reporter: Liyin Tang
Assignee: Liyin Tang
 Attachments: D2775.1.patch, D2775.1.patch, D2775.2.patch, 
 D2775.2.patch


 There is a known issue in HBase client that single slow/dead region server 
 could slow down the multiput operations across all the region servers. So the 
 HBase client will be as slow as the slowest region server in the cluster. 
  
 To solve this problem, HTableMultiplexer will separate the multiput 
 submitting threads with the flush threads, which means the multiput operation 
 will be a nonblocking operation. 
 The submitting thread will shard all the puts into different queues based on 
 its destination region server and return immediately. The flush threads will 
 flush these puts from each queue to its destination region server. 
 Currently the HTableMultiplexer only supports the put operation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5790) ZKUtil deleteRecurisively should be a recoverable operation

2012-04-13 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13253984#comment-13253984
 ] 

Zhihong Yu commented on HBASE-5790:
---

{code}
+   * Recursively delete the path all children on that path.
{code}
'the path all' -> 'all the'.
{code}
+  retryOrThrow(retryCounter, e, "delete");
{code}
delete should be deleteRecursively.
{code}
+  private void addAllChildrenToDeleteTransaction(Transaction trans, String 
path, int version)
{code}
I don't see how version is used to filter child nodes in the above method.
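
For context, a hedged sketch of how a subtree delete can be composed with a ZooKeeper 3.4+ multi-op transaction (this is not the patch; version checks, retries and error handling are omitted):
{code}
import java.util.List;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Transaction;
import org.apache.zookeeper.ZooKeeper;

// Sketch: collect every node depth-first into one Transaction, then delete
// the whole subtree in a single commit. Version -1 means "any version"; the
// real patch may pass checked versions instead.
class RecursiveDeleteSketch {
  static void deleteRecursively(ZooKeeper zk, String path)
      throws KeeperException, InterruptedException {
    Transaction trans = zk.transaction();
    addSubtree(zk, trans, path);
    trans.commit();
  }

  private static void addSubtree(ZooKeeper zk, Transaction trans, String path)
      throws KeeperException, InterruptedException {
    for (String child : zk.getChildren(path, false)) {
      addSubtree(zk, trans, path + "/" + child);  // children first
    }
    trans.delete(path, -1);                        // then the node itself
  }
}
{code}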


 ZKUtil deleteRecurisively should be a recoverable operation
 ---

 Key: HBASE-5790
 URL: https://issues.apache.org/jira/browse/HBASE-5790
 Project: HBase
  Issue Type: Improvement
Reporter: Jesse Yates
Assignee: Jesse Yates
  Labels: zookeeper
 Fix For: 0.96.0, 0.94.1

 Attachments: java_HBASE-5790.patch


 As of 3.4.3 Zookeeper now has full, multi-operation transaction. This means 
 we can wholesale delete chunks of the zk tree and ensure that we don't have 
 any pesky recursive delete issues where we delete the children of a node, but 
 then a child joins before deletion of the parent. Even without transactions, 
 this should be the behavior, but it is possible to make it much cleaner now 
 that we have this new feature in zk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5767) Add the hbase shell table_att for any attribute

2012-04-11 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13251650#comment-13251650
 ] 

Zhihong Yu commented on HBASE-5767:
---

@Xing:
Can you add a unit test for your patch ?

 Add the hbase shell table_att for any attribute
 ---

 Key: HBASE-5767
 URL: https://issues.apache.org/jira/browse/HBASE-5767
 Project: HBase
  Issue Type: Improvement
  Components: shell
Reporter: Xing Shi
Priority: Minor
 Attachments: HBASE-5767.patch


 Now the HTableDescriptor supports the setValue(String key, String value) method, 
 but the hbase shell does not support it.
 Maybe like this:
 {quote}
 hbase(main):003:0> alter 'test', METHOD => 'table_att', 'key1' => 'value1'
 Updating all regions with the new schema...
 1/1 regions updated.
 Done.
 0 row(s) in 1.0820 seconds
 hbase(main):005:0> describe 'test'
 DESCRIPTION   
 ENABLED  
  {NAME => 'test', key1 => 'value1', FAMILIES => [{NAME => 'f1', BLOOMFILTER 
 => 'NONE', RE true 
  PLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS 
 => '0', TTL  
   => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 
 'true'}]} 
 1 row(s) in 0.0300 seconds
 hbase(main):007:0> alter 'test', METHOD => 'table_att_unset', NAME => 'key1'
 Updating all regions with the new schema...
 1/1 regions updated.
 Done.
 0 row(s) in 1.0860 seconds
 hbase(main):008:0> describe 'test'
 DESCRIPTION   
 ENABLED  
  {NAME => 'test', FAMILIES => [{NAME => 'f1', BLOOMFILTER => 'NONE', 
 REPLICATION_SCOPE => false
   '0', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 
 '2147483647',   
  BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}  
  
 1 row(s) in 0.0280 seconds
 {quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5547) Don't delete HFiles when in backup mode

2012-04-11 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13251735#comment-13251735
 ] 

Zhihong Yu commented on HBASE-5547:
---

bq. However, I would be ok if we added a method that cleaned up the filesystem 
for files created in the backup after the archiving started.
I think we should do the cleanup if archiving cannot complete.
If a new method in master interface is needed, we can accommodate that.

 Don't delete HFiles when in backup mode
 -

 Key: HBASE-5547
 URL: https://issues.apache.org/jira/browse/HBASE-5547
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
Assignee: Jesse Yates

 This came up in a discussion I had with Stack.
 It would be nice if HBase could be notified that a backup is in progress (via 
 a znode for example) and in that case either:
 1. rename HFiles to be deleted to file.bck
 2. rename the HFiles into a special directory
 3. rename them to a general trash directory (which would not need to be tied 
 to backup mode).
 That way it should be able to get a consistent backup based on HFiles (HDFS 
 snapshots or hard links would be better options here, but we do not have 
 those).
 #1 makes cleanup a bit harder.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-5547) Don't delete HFiles when in backup mode

2012-04-11 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13251769#comment-13251769
 ] 

Zhihong Yu commented on HBASE-5547:
---

For the recovery cleanup, can we remember the start time of archiving and only 
delete archive files which are written after this start time ?
I like the /hbase/.archive/[table] structure.

 Don't delete HFiles when in backup mode
 -

 Key: HBASE-5547
 URL: https://issues.apache.org/jira/browse/HBASE-5547
 Project: HBase
  Issue Type: New Feature
Reporter: Lars Hofhansl
Assignee: Jesse Yates

 This came up in a discussion I had with Stack.
 It would be nice if HBase could be notified that a backup is in progress (via 
 a znode for example) and in that case either:
 1. rename HFiles to be deleted to file.bck
 2. rename the HFiles into a special directory
 3. rename them to a general trash directory (which would not need to be tied 
 to backup mode).
 That way it should be able to get a consistent backup based on HFiles (HDFS 
 snapshots or hard links would be better options here, but we do not have 
 those).
 #1 makes cleanup a bit harder.





[jira] [Commented] (HBASE-5677) The master never does balance because duplicate openhandled the one region

2012-04-11 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13252007#comment-13252007
 ] 

Zhihong Yu commented on HBASE-5677:
---

I saw the following in org.apache.hadoop.hbase.mapreduce.TestImportTsv.txt when 
I tested HBASE-5741 in 0.94, with the proposed patch in place:
{code}
Caused by: java.lang.RuntimeException: Master not initialized after 200 seconds
  at 
org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:206)
  at 
org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:422)
  at org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:196)
{code}

 The master never does balance because duplicate openhandled the one region
 --

 Key: HBASE-5677
 URL: https://issues.apache.org/jira/browse/HBASE-5677
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.6
 Environment: 0.90
Reporter: xufeng
Assignee: xufeng
 Fix For: 0.90.7, 0.92.2, 0.94.0, 0.96.0

 Attachments: 5677-proposal.txt, HBASE-5677-90-v1.patch, 
 surefire-report_no_patched_v1.html, surefire-report_patched_v1.html


 If a region is assigned while the master is doing initialization (before 
 processFailover runs), the region will be open-handled twice, because the 
 unassigned node in ZooKeeper is handled again in 
 AssignmentManager#processFailover().
 This leaves the region in RIT, so the master never balances.





[jira] [Commented] (HBASE-5741) ImportTsv does not check for table existence

2012-04-11 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13252038#comment-13252038
 ] 

Zhihong Yu commented on HBASE-5741:
---

I will integrate the patches to trunk and 0.94 tomorrow morning if there is no 
objection.

 ImportTsv does not check for table existence 
 -

 Key: HBASE-5741
 URL: https://issues.apache.org/jira/browse/HBASE-5741
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.90.4
Reporter: Clint Heath
Assignee: Himanshu Vashishtha
 Fix For: 0.94.0, 0.96.0

 Attachments: 5741-94.txt, 5741-v3.txt, HBase-5741-v2.patch, 
 HBase-5741.patch


 The usage statement for the importtsv command to hbase claims this:
 Note: if you do not use this option, then the target table must already 
 exist in HBase (in reference to the importtsv.bulk.output command-line 
 option)
 The truth is the table must exist no matter what; importtsv cannot and will 
 not create it for you.
 This is the case because the createSubmittableJob method of ImportTsv does 
 not even attempt to check if the table exists already, much less create it:
 (From org.apache.hadoop.hbase.mapreduce.ImportTsv.java)
 305 HTable table = new HTable(conf, tableName);
 The HTable method signature in use there assumes the table exists and runs a 
 meta scan on it:
 (From org.apache.hadoop.hbase.client.HTable.java)
 142 * Creates an object to access a HBase table.
 ...
 151 public HTable(Configuration conf, final String tableName)
 What we should do inside of createSubmittableJob is something similar to what 
 the completebulkloads command would do:
 (Taken from org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.java)
 690 boolean tableExists = this.doesTableExist(tableName);
 691 if (!tableExists) this.createTable(tableName,dirPath);
 Currently the docs are misleading: the table in fact must exist prior to 
 running importtsv. We should check whether it exists rather than assume it's 
 already there and throw the exception below:
 12/03/14 17:15:42 WARN client.HConnectionManager$HConnectionImplementation: 
 Encountered problems when prefetch META table: 
 org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
 table: myTable2, row=myTable2,,99
   at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:150)
 ...
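 A minimal sketch of that check, assuming plain HBaseAdmin is acceptable inside createSubmittableJob (the helper name is made up; only tableExists() is the real call being relied on):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableNotFoundException;
import org.apache.hadoop.hbase.client.HBaseAdmin;

// Hypothetical helper: fail fast with a clear error before the job is built,
// instead of letting the later meta scan surface the problem.
public final class ImportTsvTableCheck {
  public static void verifyTableExists(Configuration conf, String tableName)
      throws Exception {
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      if (!admin.tableExists(tableName)) {
        throw new TableNotFoundException("Table '" + tableName
            + "' does not exist; create it before running importtsv");
      }
    } finally {
      admin.close();
    }
  }
}
{code}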





[jira] [Commented] (HBASE-5741) ImportTsv does not check for table existence

2012-04-11 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13252071#comment-13252071
 ] 

Zhihong Yu commented on HBASE-5741:
---

How about adding an option to ImportTsv, defaulting to false, that allows the 
user to auto-create the table?
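A minimal sketch of such an option (the importtsv.create.table key and the single-family layout are assumptions for illustration, not an agreed design):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

// Hypothetical sketch: create the target table only when the user opted in.
public final class ImportTsvAutoCreateSketch {
  public static void ensureTable(Configuration conf, String tableName,
      String family) throws Exception {
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      if (admin.tableExists(tableName)) {
        return;
      }
      // Hypothetical option name; defaults to false so current behaviour is kept.
      if (!conf.getBoolean("importtsv.create.table", false)) {
        throw new IllegalStateException("Table '" + tableName + "' does not exist");
      }
      HTableDescriptor desc = new HTableDescriptor(tableName);
      desc.addFamily(new HColumnDescriptor(family));
      admin.createTable(desc);
    } finally {
      admin.close();
    }
  }
}
{code}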

 ImportTsv does not check for table existence 
 -

 Key: HBASE-5741
 URL: https://issues.apache.org/jira/browse/HBASE-5741
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.90.4
Reporter: Clint Heath
Assignee: Himanshu Vashishtha
 Fix For: 0.94.0, 0.96.0

 Attachments: 5741-94.txt, 5741-v3.txt, HBase-5741-v2.patch, 
 HBase-5741.patch


 The usage statement for the importtsv command to hbase claims this:
 Note: if you do not use this option, then the target table must already 
 exist in HBase (in reference to the importtsv.bulk.output command-line 
 option)
 The truth is the table must exist no matter what; importtsv cannot and will 
 not create it for you.
 This is the case because the createSubmittableJob method of ImportTsv does 
 not even attempt to check if the table exists already, much less create it:
 (From org.apache.hadoop.hbase.mapreduce.ImportTsv.java)
 305 HTable table = new HTable(conf, tableName);
 The HTable method signature in use there assumes the table exists and runs a 
 meta scan on it:
 (From org.apache.hadoop.hbase.client.HTable.java)
 142 * Creates an object to access a HBase table.
 ...
 151 public HTable(Configuration conf, final String tableName)
 What we should do inside of createSubmittableJob is something similar to what 
 the completebulkloads command would do:
 (Taken from org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.java)
 690 boolean tableExists = this.doesTableExist(tableName);
 691 if (!tableExists) this.createTable(tableName,dirPath);
 Currently the docs are misleading: the table in fact must exist prior to 
 running importtsv. We should check whether it exists rather than assume it's 
 already there and throw the exception below:
 12/03/14 17:15:42 WARN client.HConnectionManager$HConnectionImplementation: 
 Encountered problems when prefetch META table: 
 org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
 table: myTable2, row=myTable2,,99
   at 
 org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:150)
 ...





[jira] [Commented] (HBASE-5759) HBaseClient throws NullPointerException when EOFException should be used.

2012-04-10 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13250817#comment-13250817
 ] 

Zhihong Yu commented on HBASE-5759:
---

Patch makes sense.

 HBaseClient throws NullPointerException when EOFException should be used.
 -

 Key: HBASE-5759
 URL: https://issues.apache.org/jira/browse/HBASE-5759
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Trivial
 Attachments: hbase-5759.patch


 When an RPC data input stream is closed, protobuf doesn't raise an 
 EOFException; it returns a null RpcResponse object.
 We need to check whether the response is null before trying to access it.
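 A minimal sketch of that check, assuming a protobuf parseDelimitedFrom-style read (in the protobuf Java API, Parser#parseDelimitedFrom returns null at end of stream rather than throwing); the reader class itself is hypothetical:
{code}
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

import com.google.protobuf.Message;
import com.google.protobuf.Parser;

// Illustration: translate protobuf's "null at end of stream" into the
// EOFException callers expect, rather than letting a null slip through.
final class DelimitedResponseReader {
  static <T extends Message> T readDelimited(Parser<T> parser, InputStream in)
      throws IOException {
    T response = parser.parseDelimitedFrom(in);
    if (response == null) {
      throw new EOFException("Stream closed before a full response was read");
    }
    return response;
  }
}
{code}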





[jira] [Commented] (HBASE-5733) AssignmentManager#processDeadServersAndRegionsInTransition can fail with NPE.

2012-04-10 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13250848#comment-13250848
 ] 

Zhihong Yu commented on HBASE-5733:
---

We should retry in this scenario.

 AssignmentManager#processDeadServersAndRegionsInTransition can fail with NPE.
 -

 Key: HBASE-5733
 URL: https://issues.apache.org/jira/browse/HBASE-5733
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.96.0
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G

 Found while going through the code...
 AssignmentManager#processDeadServersAndRegionsInTransition can fail with an NPE 
 because it iterates directly over the nodes returned by 
 listChildrenAndWatchForNewChildren without checking for null.
 Here we also need a null check, as is done in other places.
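 A minimal sketch of the guard being asked for, assuming the children come from the usual ZKUtil helper (the wrapper class and znode argument are illustrative):
{code}
import java.util.Collections;
import java.util.List;

import org.apache.hadoop.hbase.zookeeper.ZKUtil;
import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
import org.apache.zookeeper.KeeperException;

// Illustration: listChildrenAndWatchForNewChildren can return null (for
// example when the znode does not exist), so never iterate it blindly.
final class UnassignedNodesSketch {
  static List<String> listUnassigned(ZooKeeperWatcher zkw, String unassignedZNode)
      throws KeeperException {
    List<String> nodes =
        ZKUtil.listChildrenAndWatchForNewChildren(zkw, unassignedZNode);
    return nodes == null ? Collections.<String>emptyList() : nodes;
  }
}
{code}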





[jira] [Commented] (HBASE-5737) Minor Improvements related to balancer.

2012-04-10 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13250930#comment-13250930
 ] 

Zhihong Yu commented on HBASE-5737:
---

There is one more whitespace at line 2986.
Please keep HashMap for now. For the case of the two tables cited above, there is 
a time limit on the actual region movement per balance() call, meaning the 
balancing of the two tables may be completed over several iterations.

 Minor Improvements related to balancer.
 ---

 Key: HBASE-5737
 URL: https://issues.apache.org/jira/browse/HBASE-5737
 Project: HBase
  Issue Type: Improvement
  Components: master
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Attachments: HBASE-5737.patch, HBASE-5737_1.patch


 Currently in Am.getAssignmentByTable() we use a result map which is currently 
 a HashMap. It would be better to use a TreeMap. MetaReader.fullScan already 
 uses a TreeMap precisely so that the naming order is maintained. I feel this 
 change could be very useful in cases where we are extending the 
 DefaultLoadBalancer.
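 For illustration only, the proposed change amounts to swapping the map implementation used for the per-table result; the value types below mirror the usual assignment structures but are assumptions here:
{code}
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.ServerName;

// Illustration: a TreeMap keyed by table name gives callers, such as a custom
// LoadBalancer, a stable sorted iteration order; a HashMap offers no ordering.
final class AssignmentsByTableSketch {
  static Map<String, Map<ServerName, List<HRegionInfo>>> newResultMap() {
    return new TreeMap<String, Map<ServerName, List<HRegionInfo>>>();
  }
}
{code}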





[jira] [Commented] (HBASE-5684) Make ProcessBasedLocalHBaseCluster run HDFS and make it more robust

2012-04-10 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13251162#comment-13251162
 ] 

Zhihong Yu commented on HBASE-5684:
---

bq. do you mean Unit Tests Skipped
Yes.

 Make ProcessBasedLocalHBaseCluster run HDFS and make it more robust
 ---

 Key: HBASE-5684
 URL: https://issues.apache.org/jira/browse/HBASE-5684
 Project: HBase
  Issue Type: Improvement
Reporter: Mikhail Bautin
Assignee: Mikhail Bautin
 Attachments: D2709.1.patch, D2709.2.patch, D2709.3.patch


 Currently ProcessBasedLocalHBaseCluster runs on top of raw local filesystem. 
 We need it to start a process-based HDFS cluster as well. We also need to 
 make the whole thing more stable so we can use it in unit tests.





[jira] [Commented] (HBASE-4676) Prefix Compression - Trie data block encoding

2012-04-10 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13251272#comment-13251272
 ] 

Zhihong Yu commented on HBASE-4676:
---

Thanks for the update, Matt.

PrefixTrieDataBlockEncoder is currently only available in the jar.
If the PrefixTrie-related Java classes were included in the patch, that would 
make understanding your implementation easier.

 Prefix Compression - Trie data block encoding
 -

 Key: HBASE-4676
 URL: https://issues.apache.org/jira/browse/HBASE-4676
 Project: HBase
  Issue Type: New Feature
  Components: io, performance, regionserver
Affects Versions: 0.90.6
Reporter: Matt Corgan
 Attachments: HBASE-4676-0.94-v1.patch, PrefixTrie_Format_v1.pdf, 
 PrefixTrie_Performance_v1.pdf, SeeksPerSec by blockSize.png, 
 hbase-prefix-trie-0.1.jar


 The HBase data block format has room for 2 significant improvements for 
 applications that have high block cache hit ratios.  
 First, there is no prefix compression, and the current KeyValue format is 
 somewhat metadata heavy, so there can be tremendous memory bloat for many 
 common data layouts, specifically those with long keys and short values.
 Second, there is no random access to KeyValues inside data blocks.  This 
 means that every time you double the datablock size, average seek time (or 
 average cpu consumption) goes up by a factor of 2.  The standard 64KB block 
 size is ~10x slower for random seeks than a 4KB block size, but block sizes 
 as small as 4KB cause problems elsewhere.  Using block sizes of 256KB or 1MB 
 or more may be more efficient from a disk access and block-cache perspective 
 in many big-data applications, but doing so is infeasible from a random seek 
 perspective.
 The PrefixTrie block encoding format attempts to solve both of these 
 problems.  Some features:
 * trie format for row key encoding completely eliminates duplicate row keys 
 and encodes similar row keys into a standard trie structure which also saves 
 a lot of space
 * the column family is currently stored once at the beginning of each block.  
 this could easily be modified to allow multiple family names per block
 * all qualifiers in the block are stored in their own trie format which 
 caters nicely to wide rows.  duplicate qualifiers between rows are eliminated. 
  the size of this trie determines the width of the block's qualifier 
 fixed-width-int
 * the minimum timestamp is stored at the beginning of the block, and deltas 
 are calculated from that.  the maximum delta determines the width of the 
 block's timestamp fixed-width-int
 The block is structured with metadata at the beginning, then a section for 
 the row trie, then the column trie, then the timestamp deltas, and then 
 all the values.  Most work is done in the row trie, where every leaf node 
 (corresponding to a row) contains a list of offsets/references corresponding 
 to the cells in that row.  Each cell is fixed-width to enable binary 
 searching and is represented by [1 byte operationType, X bytes qualifier 
 offset, X bytes timestamp delta offset].
 If all operation types are the same for a block, there will be zero per-cell 
 overhead.  Same for timestamps.  Same for qualifiers when I get a chance.  
 So, the compression aspect is very strong, but makes a few small sacrifices 
 on VarInt size to enable faster binary searches in trie fan-out nodes.
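 As a small illustration of the block-wide fixed-width idea (not the actual PrefixTrie wire format), the per-cell field width can be derived once per block from the largest delta:
{code}
// Illustration only: size a block-wide fixed-width integer from the largest
// timestamp delta observed in the block; every cell then stores that many bytes.
final class FixedWidthSketch {
  /** Bytes needed to hold any value in [0, maxDelta] as an unsigned integer. */
  static int bytesForMaxDelta(long maxDelta) {
    int bytes = 1;
    while (bytes < 8 && (maxDelta >>> (8 * bytes)) != 0) {
      bytes++;
    }
    return bytes;
  }

  public static void main(String[] args) {
    // Deltas up to ~18 hours (65,000,000 ms) from the block minimum fit in 4 bytes.
    System.out.println(bytesForMaxDelta(65000000L)); // prints 4
  }
}
{code}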
 A more compressed but slower version might build on this by also applying 
 further (suffix, etc) compression on the trie nodes at the cost of slower 
 write speed.  Even further compression could be obtained by using all VInts 
 instead of FInts with a sacrifice on random seek speed (though not huge).
 One current drawback is the write speed.  While programmed with good 
 constructs like TreeMaps, ByteBuffers, binary searches, etc., it's not 
 written with the same level of optimization as the read path.  Work will 
 need to be done to optimize the data structures used for encoding, which 
 could probably show a 10x increase.  It will still be slower than delta encoding, 
 but with a much higher decode speed.  I have not yet created a thorough 
 benchmark for write speed nor sequential read speed.
 Though the trie is reaching a point where it is internally very efficient 
 (probably within half or a quarter of its max read speed) the way that hbase 
 currently uses it is far from optimal.  The KeyValueScanner and related 
 classes that iterate through the trie will eventually need to be smarter and 
 have methods to do things like skipping to the next row of results without 
 scanning every cell in between.  When that is accomplished it will also allow 
 much faster compactions because the full row key will not have to be compared 
 as often as it is now.
 Current code is on github.  The trie 

[jira] [Commented] (HBASE-5677) The master never does balance because duplicate openhandled the one region

2012-04-09 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13249821#comment-13249821
 ] 

Zhihong Yu commented on HBASE-5677:
---

There is this method in HMasterInterface:
{code}
  /** @return true if master is available */
  public boolean isMasterRunning();
{code}
If we introduce isMasterAvailable(), that would create confusion, right?

 The master never does balance because duplicate openhandled the one region
 --

 Key: HBASE-5677
 URL: https://issues.apache.org/jira/browse/HBASE-5677
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.6
 Environment: 0.90
Reporter: xufeng
Assignee: xufeng

 If a region is assigned while the master is doing initialization (before 
 processFailover runs), the region will be open-handled twice, because the 
 unassigned node in ZooKeeper is handled again in 
 AssignmentManager#processFailover().
 This leaves the region in RIT, so the master never balances.





[jira] [Commented] (HBASE-5723) Simple Design of Secondary Index

2012-04-09 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13249907#comment-13249907
 ] 

Zhihong Yu commented on HBASE-5723:
---

@Todd:
Do you think the attached proposal belongs to HBASE-3340?

 Simple Design of Secondary Index
 

 Key: HBASE-5723
 URL: https://issues.apache.org/jira/browse/HBASE-5723
 Project: HBase
  Issue Type: New Feature
  Components: coprocessors
Reporter: ShiXing
Priority: Minor
 Attachments: Simple Design of HBase SecondaryIndex.pdf


 Use a coprocessor to create the index, and the primary table's compaction to 
 purge the stale data. 
 The attached file is the design of the secondary index.
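 For illustration only, a minimal sketch of the index-row construction such a design implies; the index layout and the "src" qualifier are assumptions, not taken from the attached document:
{code}
import java.util.List;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// Illustration: build an index-table Put whose row key is the indexed value
// followed by the data row key, so data rows can be found by value later.
final class IndexPutSketch {
  static Put buildIndexPut(Put dataPut, byte[] family, byte[] qualifier) {
    List<KeyValue> kvs = dataPut.get(family, qualifier);
    if (kvs == null || kvs.isEmpty()) {
      return null; // nothing to index for this column
    }
    byte[] value = kvs.get(0).getValue();
    Put indexPut = new Put(Bytes.add(value, dataPut.getRow()));
    // Keep the original row key so stale entries can be detected on read or
    // purged during the primary table's compaction.
    indexPut.add(family, Bytes.toBytes("src"), dataPut.getRow());
    return indexPut;
  }
}
{code}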





[jira] [Commented] (HBASE-5727) secure hbase build broke because of 'HBASE-5451 Switch RPC call envelope/headers to PBs'

2012-04-09 Thread Zhihong Yu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13249954#comment-13249954
 ] 

Zhihong Yu commented on HBASE-5727:
---

Nicolas checked in HBASE-5335 a little earlier than Stack checked in this patch.
So we now have the following compilation failure:
{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.0.2:compile (default-compile) 
on project hbase: Compilation failure: Compilation failure:
[ERROR] 
/Users/zhihyu/trunk-hbase/security/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java:[194,23]
 cannot find symbol
[ERROR] symbol  : method getConf()
[ERROR] location: class org.apache.hadoop.hbase.regionserver.HRegion
{code}

 secure hbase build broke because of 'HBASE-5451 Switch RPC call 
 envelope/headers to PBs'
 

 Key: HBASE-5727
 URL: https://issues.apache.org/jira/browse/HBASE-5727
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: Devaraj Das
Priority: Blocker
 Fix For: 0.96.0

 Attachments: 5727.1.patch, 5727.2.patch, 5727.patch


 If you build with the security profile -- i.e. add '-P security' on the 
 command line -- you'll see that the secure build is broken since we messed with 
 rpc.
 Assigning Devaraj to take a look.  If you can't work on this now DD, just 
 give it back to me and I'll have a go at it.  Thanks.




