[jira] [Commented] (HBASE-12270) A bug in the bucket cache, with cache blocks on write enabled

2014-10-16 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173411#comment-14173411
 ] 

Devaraj Das commented on HBASE-12270:
-

[~ndimiduk], mind taking a look at this?

 A bug in the bucket cache, with cache blocks on write enabled
 -

 Key: HBASE-12270
 URL: https://issues.apache.org/jira/browse/HBASE-12270
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6.1
 Environment: I can reproduce it on a simple 2 node cluster, one 
 running the master and another running a RS. I was testing on ec2.
 I used the following configurations for the cluster. 
 hbase-env:HBASE_REGIONSERVER_OPTS=-Xmx2G -XX:MaxDirectMemorySize=5G 
 -XX:CMSInitiatingOccupancyFraction=88 -XX:+AggressiveOpts -verbose:gc 
 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xlog 
 gc:/tmp/hbase-regionserver-gc.log
 hbase-site:
 hbase.bucketcache.ioengine=offheap
 hbase.bucketcache.size=4196
 hbase.rs.cacheblocksonwrite=true
 hfile.block.index.cacheonwrite=true
 hfile.block.bloom.cacheonwrite=true
Reporter: Khaled Elmeleegy
Priority: Critical
 Attachments: TestHBase.java, TestKey.java


 In my experiments, I have writers streaming their output to HBase. The reader 
 powers a web page and does a scatter/gather, where it reads the 1000 keys 
 written last and passes them to the front end. With this workload, I get the 
 exception below at the region server. Again, I am using HBase (0.98.6.1). Any 
 help is appreciated.
 2014-10-10 15:06:44,173 ERROR 
 [B.DefaultRpcServer.handler=62,queue=2,port=60020] ipc.RpcServer: Unexpected 
 throwable object 
 java.lang.IllegalArgumentException
   at java.nio.Buffer.position(Buffer.java:236)
  at 
 org.apache.hadoop.hbase.util.ByteBufferUtils.skip(ByteBufferUtils.java:434)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readKeyValueLen(HFileReaderV2.java:849)
   at 
 org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:760)
  at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:248)
at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
   at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:317)
  at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.init(StoreScanner.java:176)
   at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:1780)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.init(HRegion.java:3758)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1950)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1936)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1913)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3157)
   at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29587)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2027)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
  at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
  at java.lang.Thread.run(Thread.java:744)
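 For context, the exception at the top of that stack can be reproduced with plain JDK buffers. This is only an illustration of the failure mode (Buffer.position() rejects any position beyond the limit, which is what a skip() over a buffer with inconsistent offset/limit state runs into), not the bucket-cache bug itself:

 {code}
import java.nio.ByteBuffer;

// JDK-only sketch: Buffer.position(int) throws IllegalArgumentException
// whenever the requested position is negative or greater than the limit.
public class BufferPositionDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.limit(8); // pretend the readable region ends at offset 8
        try {
            buf.position(buf.position() + 12); // attempt to "skip" past the limit
        } catch (IllegalArgumentException e) {
            System.out.println("skip past limit -> IllegalArgumentException");
        }
    }
}
 {code}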



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12229) NullPointerException in SnapshotTestingUtils

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173440#comment-14173440
 ] 

Hadoop QA commented on HBASE-12229:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12675224/HBASE-12229_master_v1.patch
  against trunk revision .
  ATTACHMENT ID: 12675224

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.client.TestCloneSnapshotFromClientWithRegionReplicas
  org.apache.hadoop.hbase.TestZooKeeper
  
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11372//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11372//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11372//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11372//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11372//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11372//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11372//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11372//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11372//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11372//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11372//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11372//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11372//console

This message is automatically generated.

 NullPointerException in SnapshotTestingUtils
 

 Key: HBASE-12229
 URL: https://issues.apache.org/jira/browse/HBASE-12229
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.98.7
Reporter: Dima Spivak
Assignee: Dima Spivak
Priority: Minor
 Attachments: HBASE-12229.patch, HBASE-12229_master_v1.patch, 
 HBASE-12229_v1.patch, HBASE-12229_v2.patch, HBASE-12229_v2.patch


 I tracked down occasional flakiness in TestRestoreSnapshotFromClient to a 
 potential NPE in SnapshotTestingUtils#waitForTableToBeOnline. In short, some 
 tests in TestRestoreSnapshot... create a table and then invoke 
 SnapshotTestingUtils#waitForTableToBeOnline, but this method assumes that 
 regions have been assigned by the time it's invoked (which is not always the 
 case).
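 The fix the description points at can be sketched, with purely illustrative names (this is not the SnapshotTestingUtils API), as a helper that retries fetching the region list until assignment has actually happened instead of assuming the first fetch is non-null:

 {code}
import java.util.List;
import java.util.function.Supplier;

// Hypothetical null-safe wait: poll the region list until it is populated
// or a timeout expires, rather than dereferencing a possibly-null result.
final class RegionWaitSketch {
    static <T> List<T> waitForRegions(Supplier<List<T>> fetch, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        List<T> regions = fetch.get();
        while ((regions == null || regions.isEmpty())
                && System.currentTimeMillis() < deadline) {
            try { Thread.sleep(50); }           // brief pause between polls
            catch (InterruptedException e) { Thread.currentThread().interrupt(); break; }
            regions = fetch.get();
        }
        return regions; // may still be null/empty if the timeout expired
    }
}
 {code}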



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12229) NullPointerException in SnapshotTestingUtils

2014-10-16 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173446#comment-14173446
 ] 

Matteo Bertozzi commented on HBASE-12229:
-

Hadoop QA runs only against master.
(And even if it were able to run against other versions, your file name/patch 
didn't carry any info about being 0.98-only.)

 NullPointerException in SnapshotTestingUtils
 

 Key: HBASE-12229
 URL: https://issues.apache.org/jira/browse/HBASE-12229
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.98.7
Reporter: Dima Spivak
Assignee: Dima Spivak
Priority: Minor
 Attachments: HBASE-12229.patch, HBASE-12229_master_v1.patch, 
 HBASE-12229_v1.patch, HBASE-12229_v2.patch, HBASE-12229_v2.patch


 I tracked down occasional flakiness in TestRestoreSnapshotFromClient to a 
 potential NPE in SnapshotTestingUtils#waitForTableToBeOnline. In short, some 
 tests in TestRestoreSnapshot... create a table and then invoke 
 SnapshotTestingUtils#waitForTableToBeOnline, but this method assumes that 
 regions have been assigned by the time it's invoked (which is not always the 
 case).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12279) Generated thrift files were generated with the wrong parameters

2014-10-16 Thread Niels Basjes (JIRA)
Niels Basjes created HBASE-12279:


 Summary: Generated thrift files were generated with the wrong 
parameters
 Key: HBASE-12279
 URL: https://issues.apache.org/jira/browse/HBASE-12279
 Project: HBase
  Issue Type: Bug
Reporter: Niels Basjes


It turns out that the Java code generated from the thrift files has been 
generated with the wrong settings.
Instead of the documented 
([thrift|http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/thrift/package-summary.html],
 
[thrift2|http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/thrift2/package-summary.html])
 
{code}
thrift -strict --gen java:hashcode 
{code}
the current files seem to be generated instead with
{code}
thrift -strict --gen java
{code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12279) Generated thrift files were generated with the wrong parameters

2014-10-16 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HBASE-12279:
-
Attachment: HBASE-12279-2014-10-16-v1.patch

I ran the following commands to get to this patch

{code}
cd hbase-thrift
thrift -strict --gen java:hashcode -out src/main/java/
src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift
thrift -strict --gen java:hashcode -out src/main/java/
src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift
cd ..
git diff > HBASE-12279-2014-10-16-v1.patch
{code}

 Generated thrift files were generated with the wrong parameters
 ---

 Key: HBASE-12279
 URL: https://issues.apache.org/jira/browse/HBASE-12279
 Project: HBase
  Issue Type: Bug
Reporter: Niels Basjes
 Attachments: HBASE-12279-2014-10-16-v1.patch


 It turns out that the Java code generated from the thrift files has been 
 generated with the wrong settings.
 Instead of the documented 
 ([thrift|http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/thrift/package-summary.html],
  
 [thrift2|http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/thrift2/package-summary.html])
  
 {code}
 thrift -strict --gen java:hashcode 
 {code}
 the current files seem to be generated instead with
 {code}
 thrift -strict --gen java
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12279) Generated thrift files were generated with the wrong parameters

2014-10-16 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HBASE-12279:
-
Status: Patch Available  (was: Open)

 Generated thrift files were generated with the wrong parameters
 ---

 Key: HBASE-12279
 URL: https://issues.apache.org/jira/browse/HBASE-12279
 Project: HBase
  Issue Type: Bug
Reporter: Niels Basjes
 Attachments: HBASE-12279-2014-10-16-v1.patch


 It turns out that the Java code generated from the thrift files has been 
 generated with the wrong settings.
 Instead of the documented 
 ([thrift|http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/thrift/package-summary.html],
  
 [thrift2|http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/thrift2/package-summary.html])
  
 {code}
 thrift -strict --gen java:hashcode 
 {code}
 the current files seem to be generated instead with
 {code}
 thrift -strict --gen java
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12269) Add support for Scan.setRowPrefixFilter to thrift

2014-10-16 Thread Niels Basjes (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173520#comment-14173520
 ] 

Niels Basjes commented on HBASE-12269:
--

[~busbey] As you indicated: I created a separate issue to take out the 
differences in the generated files first: HBASE-12279


 Add support for Scan.setRowPrefixFilter to thrift
 -

 Key: HBASE-12269
 URL: https://issues.apache.org/jira/browse/HBASE-12269
 Project: HBase
  Issue Type: New Feature
  Components: Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12269-2014-10-15-v1.patch


 I think having the feature introduced in HBASE-11990 in the hbase thrift 
 interface would be very useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11870) Optimization : Avoid copy of key and value for tags addition in AC and VC

2014-10-16 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173575#comment-14173575
 ] 

ramkrishna.s.vasudevan commented on HBASE-11870:


+1 on patch.
bq.All other  parts, this Cell refers to the original Cell.
For all other parts this cell refers to the original. Or just say: all other 
parts refer to the original cell.

 Optimization : Avoid copy of key and value for tags addition in AC and VC
 -

 Key: HBASE-11870
 URL: https://issues.apache.org/jira/browse/HBASE-11870
 Project: HBase
  Issue Type: Improvement
  Components: Performance, security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: HBASE-11870.patch


 In AC and VC we have to add the per-cell ACL tags / visibility tags to Cells. 
 We get KeyValue objects, which need one backing array with key, value and 
 tags. So in order to add a tag we have to recreate the buffer and copy the 
 entire key, value and tags. We can avoid this:
 Create a new Cell impl which wraps the original Cell and, for the non-tag 
 parts, just refers to this old buffer.
 This will contain a byte[] state for the tags part.
 Also we have to ensure we deal with Cells in the write path, not KV.
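 The copy-avoiding idea above can be sketched in simplified form. The names here are hypothetical (this is not the real HBase Cell API): the wrapper shares the original cell's backing array for the key/value parts and allocates new state only for the tags.

 {code}
// Illustrative sketch only: delegate key/value to the original backing
// array (no copy) and materialize just the tags as new state.
final class TagRewriteCellSketch {
    private final byte[] backing;   // original key+value buffer, shared, never copied
    private final int keyOffset, keyLength;
    private final int valueOffset, valueLength;
    private final byte[] tags;      // only the tags are newly allocated

    TagRewriteCellSketch(byte[] backing, int keyOffset, int keyLength,
                         int valueOffset, int valueLength, byte[] tags) {
        this.backing = backing;
        this.keyOffset = keyOffset;
        this.keyLength = keyLength;
        this.valueOffset = valueOffset;
        this.valueLength = valueLength;
        this.tags = tags;
    }

    byte[] keyArray()   { return backing; } // key still lives in the old buffer
    byte[] valueArray() { return backing; } // value still lives in the old buffer
    byte[] tagsArray()  { return tags; }    // new byte[] state for the tags part
    int keyOffset()     { return keyOffset; }
    int keyLength()     { return keyLength; }
    int valueOffset()   { return valueOffset; }
    int valueLength()   { return valueLength; }
}
 {code}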



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11639) [Visibility controller] Replicate the visibility of Cells as strings

2014-10-16 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-11639:
---
Attachment: HBASE-11639_v2.patch

Patch that allows the cells to be replicated as strings to the peers.  The 
control of modifying the tags, identifying which tags were modified, and 
removing them all lies with the VLS impl.
Uploading for reviews and comments.  Can refine the patch based on them.

 [Visibility controller] Replicate the visibility of Cells as strings
 

 Key: HBASE-11639
 URL: https://issues.apache.org/jira/browse/HBASE-11639
 Project: HBase
  Issue Type: Improvement
  Components: Replication, security
Affects Versions: 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
  Labels: VisibilityLabels
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-11639_v2.patch


 This issue is aimed at persisting the visibility labels as strings in the WAL 
 rather than Label ordinals.  This would help in replicating the label 
 ordinals to the replication cluster as strings directly and also that after 
 HBASE-11553 would help because the replication cluster could have an 
 implementation as string based visibility labels.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11639) [Visibility controller] Replicate the visibility of Cells as strings

2014-10-16 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-11639:
---
Status: Patch Available  (was: In Progress)

 [Visibility controller] Replicate the visibility of Cells as strings
 

 Key: HBASE-11639
 URL: https://issues.apache.org/jira/browse/HBASE-11639
 Project: HBase
  Issue Type: Improvement
  Components: Replication, security
Affects Versions: 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
  Labels: VisibilityLabels
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-11639_v2.patch


 This issue is aimed at persisting the visibility labels as strings in the WAL 
 rather than Label ordinals.  This would help in replicating the label 
 ordinals to the replication cluster as strings directly and also that after 
 HBASE-11553 would help because the replication cluster could have an 
 implementation as string based visibility labels.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11639) [Visibility controller] Replicate the visibility of Cells as strings

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173594#comment-14173594
 ] 

Hadoop QA commented on HBASE-11639:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675253/HBASE-11639_v2.patch
  against trunk revision .
  ATTACHMENT ID: 12675253

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11374//console

This message is automatically generated.

 [Visibility controller] Replicate the visibility of Cells as strings
 

 Key: HBASE-11639
 URL: https://issues.apache.org/jira/browse/HBASE-11639
 Project: HBase
  Issue Type: Improvement
  Components: Replication, security
Affects Versions: 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
  Labels: VisibilityLabels
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-11639_v2.patch


 This issue is aimed at persisting the visibility labels as strings in the WAL 
 rather than Label ordinals.  This would help in replicating the label 
 ordinals to the replication cluster as strings directly and also that after 
 HBASE-11553 would help because the replication cluster could have an 
 implementation as string based visibility labels.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12279) Generated thrift files were generated with the wrong parameters

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173597#comment-14173597
 ] 

Hadoop QA commented on HBASE-12279:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12675246/HBASE-12279-2014-10-16-v1.patch
  against trunk revision .
  ATTACHMENT ID: 12675246

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+lastComparison = 
Boolean.valueOf(isSetAuthorizations()).compareTo(typedOther.isSetAuthorizations());
+lastComparison = 
Boolean.valueOf(isSetCellVisibility()).compareTo(typedOther.isSetCellVisibility());
+lastComparison = 
Boolean.valueOf(isSetAuthorizations()).compareTo(typedOther.isSetAuthorizations());

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11373//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11373//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11373//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11373//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11373//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11373//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11373//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11373//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11373//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11373//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11373//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11373//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11373//console

This message is automatically generated.

 Generated thrift files were generated with the wrong parameters
 ---

 Key: HBASE-12279
 URL: https://issues.apache.org/jira/browse/HBASE-12279
 Project: HBase
  Issue Type: Bug
Reporter: Niels Basjes
 Attachments: HBASE-12279-2014-10-16-v1.patch


 It turns out that the Java code generated from the thrift files has been 
 generated with the wrong settings.
 Instead of the documented 
 ([thrift|http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/thrift/package-summary.html],
  
 [thrift2|http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/thrift2/package-summary.html])
  
 {code}
 thrift -strict --gen java:hashcode 
 {code}
 the current files seem to be generated instead with
 {code}
 thrift -strict --gen java
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12202) Support DirectByteBuffer usage in HFileBlock

2014-10-16 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-12202:
---
Attachment: HBASE-12202_V2.patch

 Support DirectByteBuffer usage in HFileBlock
 

 Key: HBASE-12202
 URL: https://issues.apache.org/jira/browse/HBASE-12202
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12202.patch, HBASE-12202_V2.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11870) Optimization : Avoid copy of key and value for tags addition in AC and VC

2014-10-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173607#comment-14173607
 ] 

Anoop Sam John commented on HBASE-11870:


Thanks Ram.  Can correct on commit.
Ping [~apurtell]

 Optimization : Avoid copy of key and value for tags addition in AC and VC
 -

 Key: HBASE-11870
 URL: https://issues.apache.org/jira/browse/HBASE-11870
 Project: HBase
  Issue Type: Improvement
  Components: Performance, security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: HBASE-11870.patch


 In AC and VC we have to add the per-cell ACL tags / visibility tags to Cells. 
 We get KeyValue objects, which need one backing array with key, value and 
 tags. So in order to add a tag we have to recreate the buffer and copy the 
 entire key, value and tags. We can avoid this:
 Create a new Cell impl which wraps the original Cell and, for the non-tag 
 parts, just refers to this old buffer.
 This will contain a byte[] state for the tags part.
 Also we have to ensure we deal with Cells in the write path, not KV.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12269) Add support for Scan.setRowPrefixFilter to thrift

2014-10-16 Thread Niels Basjes (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173608#comment-14173608
 ] 

Niels Basjes commented on HBASE-12269:
--

I found this page 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/thrift/package-summary.html
 (which is still located somewhere under hbase-server/src/javadoc) that says:
{quote}
We tried to deprecate this Thrift interface and replace it
with the Interface defined over in the thrift2 package only this package will 
not die.
Folks keep adding to it and fixing it up so its around for another while until 
someone
takes command and drives this package out of existence replacing it w/ an 
Interface that
better matches the hbase API (this package was modelled on old HBase API long 
since dropped).
{quote}

Question: Should I implement this new feature in the old thrift or deliberately 
leave it out to help it die a little faster?


 Add support for Scan.setRowPrefixFilter to thrift
 -

 Key: HBASE-12269
 URL: https://issues.apache.org/jira/browse/HBASE-12269
 Project: HBase
  Issue Type: New Feature
  Components: Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12269-2014-10-15-v1.patch


 I think having the feature introduced in HBASE-11990 in the hbase thrift 
 interface would be very useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-10773) Make use of ByteRanges in HFileBlock instead of ByteBuffers

2014-10-16 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-10773:
---
Attachment: HBASE-10772_3.patch

Just attaching a patch for reference in which BRs were tried as a replacement 
for BBs.  The reason for introducing immutable and mutable BRs was that in the 
HFileBlocks there was an intention to use blocks in read-only mode.  Using BRs 
gives that capability, since immutable BRs can be used as read-only.

 Make use of ByteRanges in HFileBlock instead of ByteBuffers
 ---

 Key: HBASE-10773
 URL: https://issues.apache.org/jira/browse/HBASE-10773
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Attachments: HBASE-10772_3.patch


 Replacing BBs with Byte Ranges  in block cache as part of HBASE-10772, would 
 help in replacing BBs with BRs in HFileBlock also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12161) Add support for grant/revoke on namespaces in AccessControlClient

2014-10-16 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173627#comment-14173627
 ] 

Matteo Bertozzi commented on HBASE-12161:
-

I was testing the patch before committing it, but sometimes fails with:
{noformat}
testAccessControlClientGrantRevokeOnNamespace(org.apache.hadoop.hbase.security.access.TestAccessController)
  Time elapsed: 1.544 sec   FAILURE!
java.lang.AssertionError: Expected action to pass for user 'testNS' but was 
denied
  at org.junit.Assert.fail(Assert.java:88)
  at 
org.apache.hadoop.hbase.security.access.SecureTestUtil.verifyAllowed(SecureTestUtil.java:158)
  at 
org.apache.hadoop.hbase.security.access.SecureTestUtil.verifyAllowed(SecureTestUtil.java:165)
  at 
org.apache.hadoop.hbase.security.access.TestAccessController.testAccessControlClientGrantRevokeOnNamespace(TestAccessController.java:2135)
{noformat}

The problem is that the grant/revoke was not applied yet. A simple 
Thread.sleep() minimizes the problem, but maybe you can add a better check.
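 One shape such a "better check" could take, sketched with illustrative names only (not an actual HBase test utility): poll the condition with a deadline, so the test waits only as long as the grant/revoke actually takes to propagate, instead of a fixed sleep.

 {code}
import java.util.function.BooleanSupplier;

// Poll-with-timeout sketch: returns true as soon as the condition holds,
// false if the deadline passes first.
final class Waiter {
    static boolean waitFor(BooleanSupplier condition, long timeoutMs, long intervalMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false;              // gave up: condition never became true
            }
            try { Thread.sleep(intervalMs); } // brief pause between polls
            catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }
}
 {code}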

 Add support for grant/revoke on namespaces in AccessControlClient
 -

 Key: HBASE-12161
 URL: https://issues.apache.org/jira/browse/HBASE-12161
 Project: HBase
  Issue Type: Improvement
Reporter: Srikanth Srungarapu
Assignee: Srikanth Srungarapu
Priority: Minor
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-12161_0.98.patch, HBASE-12161_master.patch, 
 HBASE-12161_master_v2.patch, HBASE-12161_v3.patch


 As per the description.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12202) Support DirectByteBuffer usage in HFileBlock

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173634#comment-14173634
 ] 

Hadoop QA commented on HBASE-12202:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675256/HBASE-12202_V2.patch
  against trunk revision .
  ATTACHMENT ID: 12675256

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.http.TestHttpServerLifecycle.testStartedServerIsAlive(TestHttpServerLifecycle.java:71)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11375//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11375//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11375//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11375//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11375//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11375//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11375//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11375//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11375//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11375//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11375//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11375//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11375//console

This message is automatically generated.

 Support DirectByteBuffer usage in HFileBlock
 

 Key: HBASE-12202
 URL: https://issues.apache.org/jira/browse/HBASE-12202
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12202.patch, HBASE-12202_V2.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12269) Add support for Scan.setRowPrefixFilter to thrift

2014-10-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173674#comment-14173674
 ] 

Sean Busbey commented on HBASE-12269:
-

IMHO, if you'd like people to use the addition, you should include it in the 
original thrift package. (see this [user@ thread on the 
two|http://mail-archives.apache.org/mod_mbox/hbase-user/201409.mbox/%3C75E9B3D687D9BA43A99B7097A074C5B52977A8E4%40szxeml501-mbx.china.huawei.com%3E])

 Add support for Scan.setRowPrefixFilter to thrift
 -

 Key: HBASE-12269
 URL: https://issues.apache.org/jira/browse/HBASE-12269
 Project: HBase
  Issue Type: New Feature
  Components: Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12269-2014-10-15-v1.patch


 I think having the feature introduced in HBASE-11990 in the hbase thrift 
 interface would be very useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12213) HFileBlock backed by Array of ByteBuffers

2014-10-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173684#comment-14173684
 ] 

Anoop Sam John commented on HBASE-12213:


Yep Nick.  Will make use of it.  If any addition is needed, will do.

 HFileBlock backed by Array of ByteBuffers
 -

 Key: HBASE-12213
 URL: https://issues.apache.org/jira/browse/HBASE-12213
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John

 In the L2 cache (offheap), an HFile block might have been cached into multiple 
 chunks of buffers. If HFileBlock needs a single BB, we will end up recreating 
 a bigger BB and copying. Instead we can make HFileBlock serve data from an 
 array of BBs.
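As a rough sketch of the idea (hypothetical class and names, not HBase's actual code), absolute reads can be served across cached chunks without first rebuilding one big ByteBuffer:

```java
import java.nio.ByteBuffer;

// Hypothetical sketch: an HFile block cached as several buffer chunks,
// serving absolute reads without copying into one contiguous ByteBuffer.
public class MultiChunkBlock {
    private final ByteBuffer[] chunks;
    private final int chunkSize; // all chunks except possibly the last are this size

    public MultiChunkBlock(ByteBuffer[] chunks, int chunkSize) {
        this.chunks = chunks;
        this.chunkSize = chunkSize;
    }

    // Absolute get: locate the chunk for this offset, then read within it.
    public byte get(int offset) {
        return chunks[offset / chunkSize].get(offset % chunkSize);
    }

    public static void main(String[] args) {
        ByteBuffer a = ByteBuffer.wrap(new byte[] {1, 2, 3, 4});
        ByteBuffer b = ByteBuffer.wrap(new byte[] {5, 6});
        MultiChunkBlock block = new MultiChunkBlock(new ByteBuffer[] {a, b}, 4);
        System.out.println(block.get(0)); // 1
        System.out.println(block.get(5)); // 6
    }
}
```

The cost is an extra division per access instead of a one-time copy into a bigger buffer.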



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12202) Support DirectByteBuffer usage in HFileBlock

2014-10-16 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173689#comment-14173689
 ] 

ramkrishna.s.vasudevan commented on HBASE-12202:


Had a look at the patch.
- Now doing a duplicate and a slice, we are doing more operations, but I 
suppose they are minor.
- ByteBufferUtils.copyFromBufferToBuffer(): can we reuse the existing one for 
the new one? I mean, can we refactor the two of them?
- In HFileBlock.getBufferReadOnly()
{code}
-return ByteBuffer.wrap(buf.array(), buf.arrayOffset(),
-buf.limit() - totalChecksumBytes()).slice();
+ByteBuffer dup = this.buf.duplicate();
+dup.limit(buf.limit() - totalChecksumBytes());
+return dup.slice();
{code}
This will impact the limit and the capacity of the returned buffer.  Will that 
be a concern? When we say read only, would it be better to return a real read 
only Buffer?  In the case of BR we tried to achieve that by returning an 
ImmutableBR unless otherwise stated to return a MutableBR.
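For reference, a minimal sketch of the duplicate/limit/slice pattern under discussion (illustrative sizes, not the actual HFileBlock code), showing how it shapes the limit and capacity of the returned buffer:

```java
import java.nio.ByteBuffer;

// Sketch of the patched getBufferReadOnly() shape: drop trailing checksum
// bytes via duplicate()+limit()+slice(), with no data copy. The source
// buffer's own position/limit are not modified.
public class DupSliceDemo {
    static ByteBuffer withoutChecksum(ByteBuffer buf, int totalChecksumBytes) {
        ByteBuffer dup = buf.duplicate();             // shares content; independent pos/limit
        dup.limit(buf.limit() - totalChecksumBytes);
        return dup.slice();                           // capacity == limit == data length
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);     // pretend: 12 data + 4 checksum bytes
        ByteBuffer view = withoutChecksum(buf, 4);
        System.out.println(view.capacity() + " " + view.limit()); // 12 12
        // A real read-only view would come from asReadOnlyBuffer(), which
        // also allocates a new (read-only) ByteBuffer object.
    }
}
```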
{code}
ByteBuffer inDup = this.buf.duplicate();
+  inDup.limit(inDup.limit() + headerSize());
{code}
Why is the limit after considering the headerSize()? 

 Support DirectByteBuffer usage in HFileBlock
 

 Key: HBASE-12202
 URL: https://issues.apache.org/jira/browse/HBASE-12202
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12202.patch, HBASE-12202_V2.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11639) [Visibility controller] Replicate the visibility of Cells as strings

2014-10-16 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-11639:
---
Status: Open  (was: Patch Available)

 [Visibility controller] Replicate the visibility of Cells as strings
 

 Key: HBASE-11639
 URL: https://issues.apache.org/jira/browse/HBASE-11639
 Project: HBase
  Issue Type: Improvement
  Components: Replication, security
Affects Versions: 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
  Labels: VisibilityLabels
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-11639_v2.patch, HBASE-11639_v2.patch


 This issue is aimed at persisting the visibility labels as strings in the WAL 
 rather than as label ordinals.  This would help in replicating the labels to 
 the replication cluster as strings directly, and after HBASE-11553 it would 
 also help because the replication cluster could have a string-based 
 implementation of visibility labels.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11639) [Visibility controller] Replicate the visibility of Cells as strings

2014-10-16 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-11639:
---
Status: Patch Available  (was: Open)

 [Visibility controller] Replicate the visibility of Cells as strings
 

 Key: HBASE-11639
 URL: https://issues.apache.org/jira/browse/HBASE-11639
 Project: HBase
  Issue Type: Improvement
  Components: Replication, security
Affects Versions: 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
  Labels: VisibilityLabels
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-11639_v2.patch, HBASE-11639_v2.patch


 This issue is aimed at persisting the visibility labels as strings in the WAL 
 rather than as label ordinals.  This would help in replicating the labels to 
 the replication cluster as strings directly, and after HBASE-11553 it would 
 also help because the replication cluster could have a string-based 
 implementation of visibility labels.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11639) [Visibility controller] Replicate the visibility of Cells as strings

2014-10-16 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-11639:
---
Attachment: HBASE-11639_v2.patch

All tests passed in the local run except for a test related to log rolling. 
Trying an updated patch for a QA run.

 [Visibility controller] Replicate the visibility of Cells as strings
 

 Key: HBASE-11639
 URL: https://issues.apache.org/jira/browse/HBASE-11639
 Project: HBase
  Issue Type: Improvement
  Components: Replication, security
Affects Versions: 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
  Labels: VisibilityLabels
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-11639_v2.patch, HBASE-11639_v2.patch


 This issue is aimed at persisting the visibility labels as strings in the WAL 
 rather than as label ordinals.  This would help in replicating the labels to 
 the replication cluster as strings directly, and after HBASE-11553 it would 
 also help because the replication cluster could have a string-based 
 implementation of visibility labels.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11639) [Visibility controller] Replicate the visibility of Cells as strings

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173697#comment-14173697
 ] 

Hadoop QA commented on HBASE-11639:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675273/HBASE-11639_v2.patch
  against trunk revision .
  ATTACHMENT ID: 12675273

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11376//console

This message is automatically generated.

 [Visibility controller] Replicate the visibility of Cells as strings
 

 Key: HBASE-11639
 URL: https://issues.apache.org/jira/browse/HBASE-11639
 Project: HBase
  Issue Type: Improvement
  Components: Replication, security
Affects Versions: 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
  Labels: VisibilityLabels
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-11639_v2.patch, HBASE-11639_v2.patch


 This issue is aimed at persisting the visibility labels as strings in the WAL 
 rather than as label ordinals.  This would help in replicating the labels to 
 the replication cluster as strings directly, and after HBASE-11553 it would 
 also help because the replication cluster could have a string-based 
 implementation of visibility labels.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12279) Generated thrift files were generated with the wrong parameters

2014-10-16 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12279:

Affects Version/s: 0.94.0
   0.98.0
   0.99.0
Fix Version/s: 0.99.2
   0.94.25
   0.98.8
   2.0.0

It looks like this impacts all branches and has been wrong for some time 
(since ~0.92.0).

 Generated thrift files were generated with the wrong parameters
 ---

 Key: HBASE-12279
 URL: https://issues.apache.org/jira/browse/HBASE-12279
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0, 0.98.0, 0.99.0
Reporter: Niels Basjes
 Fix For: 2.0.0, 0.98.8, 0.94.25, 0.99.2

 Attachments: HBASE-12279-2014-10-16-v1.patch


 It turns out that the java code generated from the thrift files has been 
 generated with the wrong settings.
 Instead of the documented 
 ([thrift|http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/thrift/package-summary.html],
  
 [thrift2|http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/thrift2/package-summary.html])
  
 {code}
 thrift -strict --gen java:hashcode 
 {code}
 the current files seem to be generated instead with
 {code}
 thrift -strict --gen java
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12265) HBase shell 'show_filters' points to internal Facebook URL

2014-10-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12265:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to 0.98+

 HBase shell 'show_filters' points to internal Facebook URL
 --

 Key: HBASE-12265
 URL: https://issues.apache.org/jira/browse/HBASE-12265
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Niels Basjes
Assignee: Andrew Purtell
Priority: Trivial
  Labels: documentation, help
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-12265.patch


 In the HBase shell the output of the show_filters command starts with this:
 {code}
 hbase(main):001:0> show_filters
 Documentation on filters mentioned below can be found at: 
 https://our.intern.facebook.com/intern/wiki/index.php/HBase/Filter_Language
 ColumnPrefixFilter
 TimestampsFilter
 ...
 {code}
 I cannot reach this documentation link. It seems to be an internal 
 link to a wiki that can only be reached by Facebook employees.
 This link appears in two places in 
 hbase-shell/src/main/ruby/shell/commands/show_filters.rb.
 So far I have not been able to find the 'right' page to point to (I did a 
 quick check of the apache wiki and the hbase book).
 So either remove the link or add a section to the hbase book and point there. 
 I think the latter (creating documentation) is the best solution.
 Perhaps Facebook is willing to donate the apparently existing pages?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12272) Generating Thrift code automatically

2014-10-16 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HBASE-12272:
-
Status: Patch Available  (was: Open)

 Generating Thrift code automatically
 

 Key: HBASE-12272
 URL: https://issues.apache.org/jira/browse/HBASE-12272
 Project: HBase
  Issue Type: Improvement
  Components: Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12272-2014-10-15-v1-PREVIEW.patch, 
 HBASE-12272-2014-10-16-v2.patch


 The generated thrift code is currently under source control.
 This should be generated automatically during the build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12272) Generating Thrift code automatically

2014-10-16 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HBASE-12272:
-
Attachment: HBASE-12272-2014-10-16-v2.patch

This patch adds the code to generate the thrift java classes in the location 
where those files are currently stored.
All documentation I could find on this subject has been updated/extended to 
describe using this feature (i.e. do this: mvn compile -Dcompile-thrift ).

The intended usage is to let the developer of a thrift change run this and 
commit the changed files.
If so desired, it is easy to change this to run automatically on each build, 
but that would require all developers to have the correct thrift version 
installed.

NOTE: This patch does NOT contain the actually generated files; those have 
been put into HBASE-12279.


 Generating Thrift code automatically
 

 Key: HBASE-12272
 URL: https://issues.apache.org/jira/browse/HBASE-12272
 Project: HBase
  Issue Type: Improvement
  Components: Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12272-2014-10-15-v1-PREVIEW.patch, 
 HBASE-12272-2014-10-16-v2.patch


 The generated thrift code is currently under source control.
 This should be generated automatically during the build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12272) Generating Thrift code automatically

2014-10-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173729#comment-14173729
 ] 

Sean Busbey commented on HBASE-12272:
-

Could you update the instructions to activate the compile-thrift profile 
directly instead of using a property?

Could you add a property for specifying where the thrift compiler is, and have 
it default to the current behavior of finding it somewhere on the path?

 Generating Thrift code automatically
 

 Key: HBASE-12272
 URL: https://issues.apache.org/jira/browse/HBASE-12272
 Project: HBase
  Issue Type: Improvement
  Components: Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12272-2014-10-15-v1-PREVIEW.patch, 
 HBASE-12272-2014-10-16-v2.patch


 The generated thrift code is currently under source control.
 This should be generated automatically during the build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12272) Generating Thrift code automatically

2014-10-16 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HBASE-12272:
-
Status: Open  (was: Patch Available)

 Generating Thrift code automatically
 

 Key: HBASE-12272
 URL: https://issues.apache.org/jira/browse/HBASE-12272
 Project: HBase
  Issue Type: Improvement
  Components: Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12272-2014-10-15-v1-PREVIEW.patch, 
 HBASE-12272-2014-10-16-v2.patch


 The generated thrift code is currently under source control.
 This should be generated automatically during the build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12272) Generating Thrift code automatically

2014-10-16 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HBASE-12272:
-
Status: Patch Available  (was: Open)

 Generating Thrift code automatically
 

 Key: HBASE-12272
 URL: https://issues.apache.org/jira/browse/HBASE-12272
 Project: HBase
  Issue Type: Improvement
  Components: Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12272-2014-10-15-v1-PREVIEW.patch, 
 HBASE-12272-2014-10-16-v2.patch, HBASE-12272-2014-10-16-v3.patch


 The generated thrift code is currently under source control.
 This should be generated automatically during the build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12272) Generating Thrift code automatically

2014-10-16 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HBASE-12272:
-
Attachment: HBASE-12272-2014-10-16-v3.patch

[~busbey] Implemented the requested changes.

 Generating Thrift code automatically
 

 Key: HBASE-12272
 URL: https://issues.apache.org/jira/browse/HBASE-12272
 Project: HBase
  Issue Type: Improvement
  Components: Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12272-2014-10-15-v1-PREVIEW.patch, 
 HBASE-12272-2014-10-16-v2.patch, HBASE-12272-2014-10-16-v3.patch


 The generated thrift code is currently under source control.
 This should be generated automatically during the build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12272) Generating Thrift code automatically

2014-10-16 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12272:

Component/s: documentation

 Generating Thrift code automatically
 

 Key: HBASE-12272
 URL: https://issues.apache.org/jira/browse/HBASE-12272
 Project: HBase
  Issue Type: Improvement
  Components: build, documentation, Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12272-2014-10-15-v1-PREVIEW.patch, 
 HBASE-12272-2014-10-16-v2.patch, HBASE-12272-2014-10-16-v3.patch


 The generated thrift code is currently under source control.
 This should be generated automatically during the build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12272) Generating Thrift code automatically

2014-10-16 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12272:

Component/s: build

 Generating Thrift code automatically
 

 Key: HBASE-12272
 URL: https://issues.apache.org/jira/browse/HBASE-12272
 Project: HBase
  Issue Type: Improvement
  Components: build, documentation, Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12272-2014-10-15-v1-PREVIEW.patch, 
 HBASE-12272-2014-10-16-v2.patch, HBASE-12272-2014-10-16-v3.patch


 The generated thrift code is currently under source control.
 This should be generated automatically during the build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12272) Generate Thrift code through maven

2014-10-16 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12272:

Summary: Generate Thrift code through maven  (was: Generating Thrift code 
automatically)

{quote}
+  <activation>
+    <property>
+      <name>compile-thrift</name>
+    </property>
+  </activation>
{quote}

If we're keeping this manual, remove the activation section. We can add it in 
again should we ever switch to running it on each build.

{quote}
+  You can use maven profile <code>compile-thrift</code> to do 
this (by setting the compile-thrift property).</para>
{quote}

Remove the parenthetical, since we're invoking the profile directly now.

 Generate Thrift code through maven
 --

 Key: HBASE-12272
 URL: https://issues.apache.org/jira/browse/HBASE-12272
 Project: HBase
  Issue Type: Improvement
  Components: build, documentation, Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12272-2014-10-15-v1-PREVIEW.patch, 
 HBASE-12272-2014-10-16-v2.patch, HBASE-12272-2014-10-16-v3.patch


 The generated thrift code is currently under source control.
 This should be generated automatically during the build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12202) Support DirectByteBuffer usage in HFileBlock

2014-10-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173756#comment-14173756
 ] 

Anoop Sam John commented on HBASE-12202:


No extra overhead. Previously we had a wrap() call and then a slice. That 
creates 2 new objects but no data copy. The patch works the same way.
The existing method moves the position of the destination buffer and is 
already in use.  The new one is different in that both the src and destination 
ops are absolute and no positions change. IMO it is better to have 2 methods; 
it is not many lines of code. Rather than passing a boolean and branching on 
it, 2 APIs are cleaner.

getBufferReadOnly - the ops will be the same as the old way. We want to limit 
the new buffer to exclude the checksum bytes.
bq. return a real read only Buffer?
But we need a new limit, so a new BB object is needed anyway.

bq. Why is the limit after considering the headerSize()?
We increase the limit here because we have to copy the header data of the next 
block.  In the case of array-based copying this was ok, but now it is a buffer 
to buffer copy and we cannot read/write past the limit.  I have added comments 
above this code.
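A minimal sketch of the absolute, position-preserving copy described above (a hypothetical helper, not the actual ByteBufferUtils code):

```java
import java.nio.ByteBuffer;

// Sketch: copy between two ByteBuffers using only absolute get/put, so
// neither buffer's position changes — unlike the existing relative-copy
// variant, which advances the destination's position.
public class AbsoluteCopy {
    static void copy(ByteBuffer src, int srcOffset, ByteBuffer dst, int dstOffset, int length) {
        for (int i = 0; i < length; i++) {
            dst.put(dstOffset + i, src.get(srcOffset + i)); // absolute ops: no position change
        }
    }

    public static void main(String[] args) {
        ByteBuffer src = ByteBuffer.wrap(new byte[] {10, 20, 30, 40});
        ByteBuffer dst = ByteBuffer.allocate(4);
        copy(src, 1, dst, 0, 2);
        System.out.println(dst.get(0) + " " + dst.get(1));         // 20 30
        System.out.println(src.position() + " " + dst.position()); // 0 0
    }
}
```

This works for both heap and direct buffers, which is the point of the DirectByteBuffer change.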

 Support DirectByteBuffer usage in HFileBlock
 

 Key: HBASE-12202
 URL: https://issues.apache.org/jira/browse/HBASE-12202
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12202.patch, HBASE-12202_V2.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-11192) HBase ClusterId File Empty Check Logic

2014-10-16 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi resolved HBASE-11192.
---
Resolution: Duplicate

Duplicate of HBASE-11191

 HBase ClusterId File Empty Check Logic
 ---

 Key: HBASE-11192
 URL: https://issues.apache.org/jira/browse/HBASE-11192
 Project: HBase
  Issue Type: Bug
 Environment: HBase 0.94+Hadoop2.2.0+Zookeeper3.4.5
Reporter: sunjingtao

 If the clusterid file exists but is empty, then the following check logic in 
 MasterFileSystem.java has no effect:
 if (!FSUtils.checkClusterIdExists(fs, rd, c.getInt(
 HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000))) {
   FSUtils.setClusterId(fs, rd, UUID.randomUUID().toString(), c.getInt(
   HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000));
 }
 clusterId = FSUtils.getClusterId(fs, rd);
 This is because the checkClusterIdExists method only checks that the path 
 exists:
 Path filePath = new Path(rootdir, HConstants.CLUSTER_ID_FILE_NAME);
 return fs.exists(filePath);
 In my case the file exists but is empty, so the clusterid that is read back 
 is null, which causes a NullPointerException:
 java.lang.NullPointerException
   at org.apache.hadoop.hbase.util.Bytes.toBytes(Bytes.java:441)
   at 
 org.apache.hadoop.hbase.zookeeper.ClusterId.setClusterId(ClusterId.java:72)
   at 
 org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:581)
   at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:433)
   at java.lang.Thread.run(Thread.java:745)
 Is this a bug? Please confirm.
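A sketch of the stronger check the reporter is asking for, using java.io.File so the example is self-contained (the real code would consult Hadoop's FileSystem for the file status and length):

```java
import java.io.File;
import java.io.IOException;

// Sketch: treat a cluster-id file that exists but is zero-length the same
// as a missing one, instead of only checking fs.exists(path).
public class ClusterIdCheck {
    static boolean clusterIdUsable(File f) {
        return f.exists() && f.length() > 0; // exists() alone misses the empty-file case
    }

    public static void main(String[] args) throws IOException {
        File empty = File.createTempFile("clusterid", null);
        empty.deleteOnExit();
        System.out.println(clusterIdUsable(empty)); // false: exists but empty
    }
}
```

With a check like this, the empty file would be regenerated instead of feeding a null cluster id into ClusterId.setClusterId().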



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12265) HBase shell 'show_filters' points to internal Facebook URL

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173825#comment-14173825
 ] 

Hudson commented on HBASE-12265:


FAILURE: Integrated in HBase-1.0 #322 (See 
[https://builds.apache.org/job/HBase-1.0/322/])
HBASE-12265 HBase shell 'show_filters' points to internal Facebook URL 
(apurtell: rev 9debfcfaf6a854e82fb6b29c141d79dc2b658250)
* hbase-shell/src/main/ruby/shell/commands/show_filters.rb


 HBase shell 'show_filters' points to internal Facebook URL
 --

 Key: HBASE-12265
 URL: https://issues.apache.org/jira/browse/HBASE-12265
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Niels Basjes
Assignee: Andrew Purtell
Priority: Trivial
  Labels: documentation, help
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-12265.patch


 In the HBase shell the output of the show_filters command starts with this:
 {code}
 hbase(main):001:0> show_filters
 Documentation on filters mentioned below can be found at: 
 https://our.intern.facebook.com/intern/wiki/index.php/HBase/Filter_Language
 ColumnPrefixFilter
 TimestampsFilter
 ...
 {code}
 I cannot reach this documentation link. It seems to be an internal 
 link to a wiki that can only be reached by Facebook employees.
 This link appears in two places in 
 hbase-shell/src/main/ruby/shell/commands/show_filters.rb.
 So far I have not been able to find the 'right' page to point to (I did a 
 quick check of the apache wiki and the hbase book).
 So either remove the link or add a section to the hbase book and point there. 
 I think the latter (creating documentation) is the best solution.
 Perhaps Facebook is willing to donate the apparently existing pages?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12272) Generate Thrift code through maven

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173838#comment-14173838
 ] 

Hadoop QA commented on HBASE-12272:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12675276/HBASE-12272-2014-10-16-v2.patch
  against trunk revision .
  ATTACHMENT ID: 12675276

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+<argument>${basedir}/src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift</argument>
+<argument>${basedir}/src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift</argument>
+  You can use maven profile <code>compile-thrift</code> to do 
this (by setting the compile-thrift property).</para>

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11377//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11377//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11377//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11377//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11377//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11377//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11377//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11377//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11377//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11377//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11377//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11377//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11377//console

This message is automatically generated.

 Generate Thrift code through maven
 --

 Key: HBASE-12272
 URL: https://issues.apache.org/jira/browse/HBASE-12272
 Project: HBase
  Issue Type: Improvement
  Components: build, documentation, Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12272-2014-10-15-v1-PREVIEW.patch, 
 HBASE-12272-2014-10-16-v2.patch, HBASE-12272-2014-10-16-v3.patch


 The generated thrift code is currently under source control.
 This should be generated automatically during the build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12265) HBase shell 'show_filters' points to internal Facebook URL

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173848#comment-14173848
 ] 

Hudson commented on HBASE-12265:


SUCCESS: Integrated in HBase-0.98 #606 (See 
[https://builds.apache.org/job/HBase-0.98/606/])
HBASE-12265 HBase shell 'show_filters' points to internal Facebook URL 
(apurtell: rev 82fb65819930920059ee3375e3af6d2c67e71d54)
* hbase-shell/src/main/ruby/shell/commands/show_filters.rb


 HBase shell 'show_filters' points to internal Facebook URL
 --

 Key: HBASE-12265
 URL: https://issues.apache.org/jira/browse/HBASE-12265
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Niels Basjes
Assignee: Andrew Purtell
Priority: Trivial
  Labels: documentation, help
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-12265.patch


 In the HBase shell the output of the show_filters command starts with this:
 {code}
 hbase(main):001:0> show_filters
 Documentation on filters mentioned below can be found at: 
 https://our.intern.facebook.com/intern/wiki/index.php/HBase/Filter_Language
 ColumnPrefixFilter
 TimestampsFilter
 ...
 {code}
 I cannot reach this documentation link. It appears to be an internal wiki 
 that only Facebook employees can access.
 The link appears in two places in the file 
 hbase-shell/src/main/ruby/shell/commands/show_filters.rb.
 So far I have not been able to find the 'right' page to point to (I did a 
 quick check of the Apache wiki and the HBase book).
 So either remove the link, or add a section to the HBase book and point there. 
 I think the latter (creating documentation) is the best solution.
 Perhaps Facebook is willing to donate the apparently existing pages?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12265) HBase shell 'show_filters' points to internal Facebook URL

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173851#comment-14173851
 ] 

Hudson commented on HBASE-12265:


SUCCESS: Integrated in HBase-TRUNK #5667 (See 
[https://builds.apache.org/job/HBase-TRUNK/5667/])
HBASE-12265 HBase shell 'show_filters' points to internal Facebook URL 
(apurtell: rev 963bd07f681bdbd5bf8385b8eb082dd6a4f57556)
* hbase-shell/src/main/ruby/shell/commands/show_filters.rb


 HBase shell 'show_filters' points to internal Facebook URL
 --

 Key: HBASE-12265
 URL: https://issues.apache.org/jira/browse/HBASE-12265
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Niels Basjes
Assignee: Andrew Purtell
Priority: Trivial
  Labels: documentation, help
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-12265.patch


 In the HBase shell the output of the show_filters command starts with this:
 {code}
 hbase(main):001:0> show_filters
 Documentation on filters mentioned below can be found at: 
 https://our.intern.facebook.com/intern/wiki/index.php/HBase/Filter_Language
 ColumnPrefixFilter
 TimestampsFilter
 ...
 {code}
 I cannot reach this documentation link. It appears to be an internal wiki 
 that only Facebook employees can access.
 The link appears in two places in the file 
 hbase-shell/src/main/ruby/shell/commands/show_filters.rb.
 So far I have not been able to find the 'right' page to point to (I did a 
 quick check of the Apache wiki and the HBase book).
 So either remove the link, or add a section to the HBase book and point there. 
 I think the latter (creating documentation) is the best solution.
 Perhaps Facebook is willing to donate the apparently existing pages?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12272) Generate Thrift code through maven

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173853#comment-14173853
 ] 

Hadoop QA commented on HBASE-12272:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12675280/HBASE-12272-2014-10-16-v3.patch
  against trunk revision .
  ATTACHMENT ID: 12675280

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+<argument>${basedir}/src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift</argument>
+<argument>${basedir}/src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift</argument>
+  You can use the maven profile <code>compile-thrift</code> to do 
this (by setting the compile-thrift property).</para>

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11378//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11378//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11378//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11378//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11378//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11378//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11378//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11378//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11378//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11378//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11378//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11378//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11378//console

This message is automatically generated.

 Generate Thrift code through maven
 --

 Key: HBASE-12272
 URL: https://issues.apache.org/jira/browse/HBASE-12272
 Project: HBase
  Issue Type: Improvement
  Components: build, documentation, Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12272-2014-10-15-v1-PREVIEW.patch, 
 HBASE-12272-2014-10-16-v2.patch, HBASE-12272-2014-10-16-v3.patch


 The generated thrift code is currently under source control.
 This should be generated automatically during the build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12265) HBase shell 'show_filters' points to internal Facebook URL

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173879#comment-14173879
 ] 

Hudson commented on HBASE-12265:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #576 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/576/])
HBASE-12265 HBase shell 'show_filters' points to internal Facebook URL 
(apurtell: rev 82fb65819930920059ee3375e3af6d2c67e71d54)
* hbase-shell/src/main/ruby/shell/commands/show_filters.rb


 HBase shell 'show_filters' points to internal Facebook URL
 --

 Key: HBASE-12265
 URL: https://issues.apache.org/jira/browse/HBASE-12265
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Niels Basjes
Assignee: Andrew Purtell
Priority: Trivial
  Labels: documentation, help
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-12265.patch


 In the HBase shell the output of the show_filters command starts with this:
 {code}
 hbase(main):001:0> show_filters
 Documentation on filters mentioned below can be found at: 
 https://our.intern.facebook.com/intern/wiki/index.php/HBase/Filter_Language
 ColumnPrefixFilter
 TimestampsFilter
 ...
 {code}
 I cannot reach this documentation link. It appears to be an internal wiki 
 that only Facebook employees can access.
 The link appears in two places in the file 
 hbase-shell/src/main/ruby/shell/commands/show_filters.rb.
 So far I have not been able to find the 'right' page to point to (I did a 
 quick check of the Apache wiki and the HBase book).
 So either remove the link, or add a section to the HBase book and point there. 
 I think the latter (creating documentation) is the best solution.
 Perhaps Facebook is willing to donate the apparently existing pages?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12274) Race between RegionScannerImpl#nextInternal() and RegionScannerImpl#close() may produce null pointer exception

2014-10-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12274:
---
Status: Patch Available  (was: Open)

 Race between RegionScannerImpl#nextInternal() and RegionScannerImpl#close() 
 may produce null pointer exception
 --

 Key: HBASE-12274
 URL: https://issues.apache.org/jira/browse/HBASE-12274
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6.1
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 12274-region-server.log, 12274-v2.txt, 12274-v2.txt, 
 12274-v3.txt


 I saw the following in region server log:
 {code}
 2014-10-15 03:28:36,976 ERROR 
 [B.DefaultRpcServer.handler=0,queue=0,port=60020] ipc.RpcServer: Unexpected 
 throwable object
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5023)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4932)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4923)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3245)
   at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
   at java.lang.Thread.run(Thread.java:745)
 {code}
 This is where the NPE happened:
 {code}
 // Let's see what we have in the storeHeap.
 KeyValue current = this.storeHeap.peek();
 {code}
 The cause was a race between the nextInternal() (called through nextRaw()) 
 and close() methods; nextRaw() is not synchronized.
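 The race can be sketched outside HBase with a minimal stand-in (class and 
 method names are hypothetical, and this is one common defensive pattern, not 
 the actual patch): read the field once into a local, then fail the scan with 
 a clear error instead of an NPE when a concurrent close() has nulled it.

```java
import java.util.PriorityQueue;

// Hypothetical sketch of the reported race: close() nulls the store heap
// while nextInternal() is still running. Reading the field once into a
// local turns the NPE into a well-defined "scanner closed" error.
class ScannerSketch {
    private volatile PriorityQueue<String> storeHeap = new PriorityQueue<>();

    void close() {
        storeHeap = null; // concurrent close, as in the reported race
    }

    String nextInternal() {
        PriorityQueue<String> heap = this.storeHeap; // single read of the field
        if (heap == null) {
            // scanner already closed: surface a clear error, not an NPE
            throw new IllegalStateException("Scanner was closed");
        }
        return heap.peek(); // null here just means the heap is empty
    }
}
```

 The real fix would also need to coordinate with region close; this only 
 illustrates why the unsynchronized `this.storeHeap.peek()` can throw.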



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12241) The crash of regionServer when taking deadserver's replication queue break replication

2014-10-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12241:
---
Status: Patch Available  (was: Open)

 The crash of regionServer when taking deadserver's replication queue break 
 replication
 --

 Key: HBASE-12241
 URL: https://issues.apache.org/jira/browse/HBASE-12241
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Critical
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12241-trunk-v1.diff


 When a regionserver crashes, another regionserver will try to take over the 
 dead regionserver's replication hlog queue and finish its replication. See 
 NodeFailoverWorker in ReplicationSourceManager.
 Currently hbase.zookeeper.useMulti is false in the default configuration, so 
 the operation of taking over a replication queue is not atomic. The 
 ReplicationSourceManager first locks the dead regionserver's replication 
 node, then copies the replication queue, and finally deletes the dead 
 regionserver's replication node. lockOtherRS just creates a persistent zk 
 node named "lock", which prevents other regionservers from taking over the 
 replication queue.
 See:
 {code}
   public boolean lockOtherRS(String znode) {
     try {
       String parent = ZKUtil.joinZNode(this.rsZNode, znode);
       if (parent.equals(rsServerNameZnode)) {
         LOG.warn("Won't lock because this is us, we're dead!");
         return false;
       }
       String p = ZKUtil.joinZNode(parent, RS_LOCK_ZNODE);
       ZKUtil.createAndWatch(this.zookeeper, p, Bytes.toBytes(rsServerNameZnode));
     } catch (KeeperException e) {
       ...
       return false;
     }
     return true;
   }
 {code}
 But if a regionserver crashes after creating this lock zk node and before 
 copying the replication queue into its own queue, the lock zk node is left 
 behind forever and no other regionserver can take over the replication queue.
 In our production cluster we encountered this problem: the replication queue 
 was still there, no regionserver took it over, and a stale lock zk node 
 remained.
 {quote}
 hbase.32561.log:2014-09-24,14:09:28,790 INFO 
 org.apache.hadoop.hbase.replication.ReplicationZookeeper: Won't transfer the 
 queue, another RS took care of it because of: KeeperErrorCode = NoNode for 
 /hbase/hhsrv-micloud/replication/rs/hh-hadoop-srv-st09.bj,12610,1410937824255/lock
 hbase.32561.log:2014-09-24,14:14:45,148 INFO 
 org.apache.hadoop.hbase.replication.ReplicationZookeeper: Won't transfer the 
 queue, another RS took care of it because of: KeeperErrorCode = NoNode for 
 /hbase/hhsrv-micloud/replication/rs/hh-hadoop-srv-st10.bj,12600,1410937795685/lock
 {quote}
 A quick solution is for the lock operation to create an ephemeral lock 
 zookeeper node instead; when the lock node is deleted, other regionservers 
 are notified to check whether any replication queues are left.
 Suggestions are welcomed! Thanks.
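 The ephemeral-lock idea can be sketched with an in-memory stand-in for 
 ZooKeeper (all class and method names here are hypothetical; real code would 
 go through ZKUtil with CreateMode.EPHEMERAL). The point is that an ephemeral 
 lock node dies with its owner's session, so a crashed locker cannot wedge the 
 queue, while a persistent lock node survives the crash and blocks everyone.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory stand-in for the znodes involved in
// replication-queue failover. Maps znode path -> owning session id.
class FailoverLockSketch {
    private final Map<String, Long> ephemeralNodes = new ConcurrentHashMap<>();
    private final Map<String, Long> persistentNodes = new ConcurrentHashMap<>();

    // Try to take the "lock" znode under the dead RS's replication node.
    // putIfAbsent models create() failing when the node already exists.
    boolean lockOtherRS(String deadRsZnode, long sessionId, boolean ephemeral) {
        String lockPath = deadRsZnode + "/lock";
        Map<String, Long> store = ephemeral ? ephemeralNodes : persistentNodes;
        return store.putIfAbsent(lockPath, sessionId) == null;
    }

    // Models the locker crashing: ZooKeeper expires its session and deletes
    // only its ephemeral nodes; persistent nodes are left behind forever.
    void expireSession(long sessionId) {
        ephemeralNodes.values().removeIf(owner -> owner == sessionId);
    }
}
```

 With persistent locks, a second regionserver can never acquire the lock after 
 the first locker dies; with ephemeral locks, session expiry frees it 
 automatically and a NodeDeleted watch can trigger the re-check suggested above.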



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12241) The crash of regionServer when taking deadserver's replication queue breaks replication

2014-10-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12241:
---
Summary: The crash of regionServer when taking deadserver's replication 
queue breaks replication  (was: The crash of regionServer when taking 
deadserver's replication queue break replication)

 The crash of regionServer when taking deadserver's replication queue breaks 
 replication
 ---

 Key: HBASE-12241
 URL: https://issues.apache.org/jira/browse/HBASE-12241
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Critical
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12241-trunk-v1.diff


 When a regionserver crashes, another regionserver will try to take over the 
 dead regionserver's replication hlog queue and finish its replication. See 
 NodeFailoverWorker in ReplicationSourceManager.
 Currently hbase.zookeeper.useMulti is false in the default configuration, so 
 the operation of taking over a replication queue is not atomic. The 
 ReplicationSourceManager first locks the dead regionserver's replication 
 node, then copies the replication queue, and finally deletes the dead 
 regionserver's replication node. lockOtherRS just creates a persistent zk 
 node named "lock", which prevents other regionservers from taking over the 
 replication queue.
 See:
 {code}
   public boolean lockOtherRS(String znode) {
     try {
       String parent = ZKUtil.joinZNode(this.rsZNode, znode);
       if (parent.equals(rsServerNameZnode)) {
         LOG.warn("Won't lock because this is us, we're dead!");
         return false;
       }
       String p = ZKUtil.joinZNode(parent, RS_LOCK_ZNODE);
       ZKUtil.createAndWatch(this.zookeeper, p, Bytes.toBytes(rsServerNameZnode));
     } catch (KeeperException e) {
       ...
       return false;
     }
     return true;
   }
 {code}
 But if a regionserver crashes after creating this lock zk node and before 
 copying the replication queue into its own queue, the lock zk node is left 
 behind forever and no other regionserver can take over the replication queue.
 In our production cluster we encountered this problem: the replication queue 
 was still there, no regionserver took it over, and a stale lock zk node 
 remained.
 {quote}
 hbase.32561.log:2014-09-24,14:09:28,790 INFO 
 org.apache.hadoop.hbase.replication.ReplicationZookeeper: Won't transfer the 
 queue, another RS took care of it because of: KeeperErrorCode = NoNode for 
 /hbase/hhsrv-micloud/replication/rs/hh-hadoop-srv-st09.bj,12610,1410937824255/lock
 hbase.32561.log:2014-09-24,14:14:45,148 INFO 
 org.apache.hadoop.hbase.replication.ReplicationZookeeper: Won't transfer the 
 queue, another RS took care of it because of: KeeperErrorCode = NoNode for 
 /hbase/hhsrv-micloud/replication/rs/hh-hadoop-srv-st10.bj,12600,1410937795685/lock
 {quote}
 A quick solution is for the lock operation to create an ephemeral lock 
 zookeeper node instead; when the lock node is deleted, other regionservers 
 are notified to check whether any replication queues are left.
 Suggestions are welcomed! Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12279) Generated thrift files were generated with the wrong parameters

2014-10-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173940#comment-14173940
 ] 

Ted Yu commented on HBASE-12279:


+1

 Generated thrift files were generated with the wrong parameters
 ---

 Key: HBASE-12279
 URL: https://issues.apache.org/jira/browse/HBASE-12279
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0, 0.98.0, 0.99.0
Reporter: Niels Basjes
 Fix For: 2.0.0, 0.98.8, 0.94.25, 0.99.2

 Attachments: HBASE-12279-2014-10-16-v1.patch


 It turns out that the java code generated from the thrift files has been 
 generated with the wrong settings.
 Instead of the documented 
 ([thrift|http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/thrift/package-summary.html],
  
 [thrift2|http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/thrift2/package-summary.html])
  
 {code}
 thrift -strict --gen java:hashcode 
 {code}
 the current files seem to be generated instead with
 {code}
 thrift -strict --gen java
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10937) How to run CsvBulkLoadTool of Phoenix 4.0

2014-10-16 Thread Arijit Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173958#comment-14173958
 ] 

Arijit Banerjee commented on HBASE-10937:
-

Having to set the HADOOP_CLASSPATH variable creates an additional problem when 
running the job from Oozie. Currently I don't see support for setting an 
environment variable in an Oozie Java action.

 How to run CsvBulkLoadTool of Phoenix 4.0
 -

 Key: HBASE-10937
 URL: https://issues.apache.org/jira/browse/HBASE-10937
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong

 There is a known issue with running MR jobs against HBase 0.96+. Details are 
 in the section “Notice to Mapreduce users of HBase 0.96.1 and above” at 
 https://hbase.apache.org/book.html
 Basically we need to put hbase-protocol*.jar on the classpath before hadoop 
 loads its protobuf-java jar.  I updated our documentation at 
 http://phoenix.incubator.apache.org/bulk_dataload.html on how to use 
 CsvBulkLoadTool for Phoenix 4.0 as follows:
 {noformat}
 HADOOP_CLASSPATH=$(hbase mapredcp):/path/to/hbase/conf hadoop jar 
 phoenix-4.0.0-incubating-client.jar 
 org.apache.phoenix.mapreduce.CsvBulkLoadTool --table EXAMPLE --input 
 /data/example.csv
 {noformat}
 OR
 {noformat}
 HADOOP_CLASSPATH=/path/to/hbase-protocol.jar:/path/to/hbase/conf hadoop jar 
 phoenix-4.0.0-incubating-client.jar 
 org.apache.phoenix.mapreduce.CsvBulkLoadTool --table EXAMPLE --input 
 /data/example.csv
 {noformat} 
 Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11639) [Visibility controller] Replicate the visibility of Cells as strings

2014-10-16 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-11639:
---
Status: Open  (was: Patch Available)

 [Visibility controller] Replicate the visibility of Cells as strings
 

 Key: HBASE-11639
 URL: https://issues.apache.org/jira/browse/HBASE-11639
 Project: HBase
  Issue Type: Improvement
  Components: Replication, security
Affects Versions: 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
  Labels: VisibilityLabels
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-11639_v2.patch, HBASE-11639_v2.patch


 This issue is aimed at persisting the visibility labels in the WAL as strings 
 rather than as label ordinals.  The labels would then replicate to the peer 
 cluster as strings directly, which also helps after HBASE-11553 because the 
 peer cluster could use a string-based visibility-label implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11639) [Visibility controller] Replicate the visibility of Cells as strings

2014-10-16 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-11639:
---
Status: Patch Available  (was: Open)

 [Visibility controller] Replicate the visibility of Cells as strings
 

 Key: HBASE-11639
 URL: https://issues.apache.org/jira/browse/HBASE-11639
 Project: HBase
  Issue Type: Improvement
  Components: Replication, security
Affects Versions: 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
  Labels: VisibilityLabels
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-11639_v2.patch, HBASE-11639_v2.patch, 
 HBASE-11639_v3.patch


 This issue is aimed at persisting the visibility labels in the WAL as strings 
 rather than as label ordinals.  The labels would then replicate to the peer 
 cluster as strings directly, which also helps after HBASE-11553 because the 
 peer cluster could use a string-based visibility-label implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11639) [Visibility controller] Replicate the visibility of Cells as strings

2014-10-16 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-11639:
---
Attachment: HBASE-11639_v3.patch

Trying to get a QA run.

 [Visibility controller] Replicate the visibility of Cells as strings
 

 Key: HBASE-11639
 URL: https://issues.apache.org/jira/browse/HBASE-11639
 Project: HBase
  Issue Type: Improvement
  Components: Replication, security
Affects Versions: 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
  Labels: VisibilityLabels
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-11639_v2.patch, HBASE-11639_v2.patch, 
 HBASE-11639_v3.patch


 This issue is aimed at persisting the visibility labels in the WAL as strings 
 rather than as label ordinals.  The labels would then replicate to the peer 
 cluster as strings directly, which also helps after HBASE-11553 because the 
 peer cluster could use a string-based visibility-label implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11639) [Visibility controller] Replicate the visibility of Cells as strings

2014-10-16 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173976#comment-14173976
 ] 

ramkrishna.s.vasudevan commented on HBASE-11639:


https://reviews.apache.org/r/26814 - RB link.

 [Visibility controller] Replicate the visibility of Cells as strings
 

 Key: HBASE-11639
 URL: https://issues.apache.org/jira/browse/HBASE-11639
 Project: HBase
  Issue Type: Improvement
  Components: Replication, security
Affects Versions: 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
  Labels: VisibilityLabels
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-11639_v2.patch, HBASE-11639_v2.patch, 
 HBASE-11639_v3.patch


 This issue is aimed at persisting the visibility labels in the WAL as strings 
 rather than as label ordinals.  The labels would then replicate to the peer 
 cluster as strings directly, which also helps after HBASE-11553 because the 
 peer cluster could use a string-based visibility-label implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12202) Support DirectByteBuffer usage in HFileBlock

2014-10-16 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174034#comment-14174034
 ] 

ramkrishna.s.vasudevan commented on HBASE-12202:


bq. We increase the limit here because we have to copy the header data of the 
next block
This is the limit, right? So it is fine.
When I said read-only buffer, I meant: should we have a way to return a 
read-only byte buffer?
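
Both points in the comment above can be illustrated with plain java.nio, with 
no HBase types (the helper name and parameters are hypothetical): a duplicate 
has its own position/limit, so raising limit() to expose the next block's 
header bytes does not disturb the backing buffer, and asReadOnlyBuffer() gives 
callers a view they cannot write through.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch: carve a block view out of a larger backing buffer,
// extend its limit to include the next block's header, and hand back a
// read-only view so callers cannot mutate the cached bytes.
class BufferViewSketch {
    static ByteBuffer blockSlice(ByteBuffer backing, int blockEnd, int headerSize) {
        ByteBuffer dup = backing.duplicate(); // shares bytes, independent position/limit
        dup.limit(blockEnd + headerSize);     // include the next block's header
        return dup.asReadOnlyBuffer();        // writes through this view throw
    }
}
```

Any put() on the returned view throws ReadOnlyBufferException, while the 
backing buffer's own limit is untouched.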

 Support DirectByteBuffer usage in HFileBlock
 

 Key: HBASE-12202
 URL: https://issues.apache.org/jira/browse/HBASE-12202
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12202.patch, HBASE-12202_V2.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12272) Generate Thrift code through maven

2014-10-16 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HBASE-12272:
-
Status: Open  (was: Patch Available)

 Generate Thrift code through maven
 --

 Key: HBASE-12272
 URL: https://issues.apache.org/jira/browse/HBASE-12272
 Project: HBase
  Issue Type: Improvement
  Components: build, documentation, Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12272-2014-10-15-v1-PREVIEW.patch, 
 HBASE-12272-2014-10-16-v2.patch, HBASE-12272-2014-10-16-v3.patch


 The generated thrift code is currently under source control.
 This should be generated automatically during the build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12272) Generate Thrift code through maven

2014-10-16 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HBASE-12272:
-
Attachment: HBASE-12272-2014-10-16-v4.patch

Changes compared to v3:
- Removed property activation (pom.xml and documentation)
- Fixed a few spots that previously missed using ${thrift.path}.

Question: This still leaves a line that is too long (> 100 chars) in the 
pom.xml, caused by the very long path where the thrift files are located. What 
is the desired way to handle this? Or is this an acceptable case?


 Generate Thrift code through maven
 --

 Key: HBASE-12272
 URL: https://issues.apache.org/jira/browse/HBASE-12272
 Project: HBase
  Issue Type: Improvement
  Components: build, documentation, Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12272-2014-10-15-v1-PREVIEW.patch, 
 HBASE-12272-2014-10-16-v2.patch, HBASE-12272-2014-10-16-v3.patch, 
 HBASE-12272-2014-10-16-v4.patch


 The generated thrift code is currently under source control.
 This should be generated automatically during the build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12272) Generate Thrift code through maven

2014-10-16 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HBASE-12272:
-
Status: Patch Available  (was: Open)

 Generate Thrift code through maven
 --

 Key: HBASE-12272
 URL: https://issues.apache.org/jira/browse/HBASE-12272
 Project: HBase
  Issue Type: Improvement
  Components: build, documentation, Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12272-2014-10-15-v1-PREVIEW.patch, 
 HBASE-12272-2014-10-16-v2.patch, HBASE-12272-2014-10-16-v3.patch, 
 HBASE-12272-2014-10-16-v4.patch


 The generated thrift code is currently under source control.
 This should be generated automatically during the build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12272) Generate Thrift code through maven

2014-10-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174063#comment-14174063
 ] 

Sean Busbey commented on HBASE-12272:
-

IMO the long line is fine in this case.

 Generate Thrift code through maven
 --

 Key: HBASE-12272
 URL: https://issues.apache.org/jira/browse/HBASE-12272
 Project: HBase
  Issue Type: Improvement
  Components: build, documentation, Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12272-2014-10-15-v1-PREVIEW.patch, 
 HBASE-12272-2014-10-16-v2.patch, HBASE-12272-2014-10-16-v3.patch, 
 HBASE-12272-2014-10-16-v4.patch







[jira] [Commented] (HBASE-12241) The crash of regionServer when taking deadserver's replication queue breaks replication

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174074#comment-14174074
 ] 

Hadoop QA commented on HBASE-12241:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12674739/HBASE-12241-trunk-v1.diff
  against trunk revision .
  ATTACHMENT ID: 12674739

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100.

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11380//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11380//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11380//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11380//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11380//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11380//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11380//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11380//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11380//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11380//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11380//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11380//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11380//console

This message is automatically generated.

 The crash of regionServer when taking deadserver's replication queue breaks 
 replication
 ---

 Key: HBASE-12241
 URL: https://issues.apache.org/jira/browse/HBASE-12241
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Critical
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12241-trunk-v1.diff


 When a regionserver crashes, another regionserver will try to take over the 
 dead regionserver's replication hlog queue and help it finish the 
 replication. See NodeFailoverWorker in ReplicationSourceManager.
 Currently hbase.zookeeper.useMulti is false in the default configuration, so 
 the operation of taking over a replication queue is not atomic. The 
 ReplicationSourceManager first locks the replication node of the dead 
 regionserver, then copies the replication queue, and finally deletes the dead 
 regionserver's replication node. The lockOtherRS operation just creates a 
 persistent zk node named "lock" which prevents other regionservers from 
 taking over the replication queue.
 See:
 {code}
   public boolean lockOtherRS(String znode) {
     try {
       String parent = ZKUtil.joinZNode(this.rsZNode, znode);
       if (parent.equals(rsServerNameZnode)) {
         LOG.warn("Won't lock because this is us, we're dead!");
         return false;
       }
  

[jira] [Updated] (HBASE-12269) Add support for Scan.setRowPrefixFilter to thrift

2014-10-16 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated HBASE-12269:
-
Attachment: HBASE-12269-2014-10-16-v2-INCOMPLETE.patch

This patch covers all the files I edited, but not the Thrift-generated files.
Creating the final patch has to wait until HBASE-12279 has been committed.

 Add support for Scan.setRowPrefixFilter to thrift
 -

 Key: HBASE-12269
 URL: https://issues.apache.org/jira/browse/HBASE-12269
 Project: HBase
  Issue Type: New Feature
  Components: Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12269-2014-10-15-v1.patch, 
 HBASE-12269-2014-10-16-v2-INCOMPLETE.patch


 I think having the feature introduced in HBASE-11990 in the hbase thrift 
 interface would be very useful.





[jira] [Created] (HBASE-12280) [89-fb] Make the number of blocking store files online configurable

2014-10-16 Thread Jack Langman (JIRA)
Jack Langman created HBASE-12280:


 Summary: [89-fb] Make the number of blocking store files online 
configurable
 Key: HBASE-12280
 URL: https://issues.apache.org/jira/browse/HBASE-12280
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.89-fb
Reporter: Jack Langman


This change allows the number of blocking store files to be changed online. 
This is already done, and should appear in the 89-fb branch soon. For context, 
see: HBASE-8544, HBASE-8576, HBASE-8805.





[jira] [Commented] (HBASE-11639) [Visibility controller] Replicate the visibility of Cells as strings

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174096#comment-14174096
 ] 

Hadoop QA commented on HBASE-11639:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675310/HBASE-11639_v3.patch
  against trunk revision .
  ATTACHMENT ID: 12675310

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 5 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+User.createUserForTesting(conf, User.getCurrent().getShortName(), new 
String[] { supergroup });
+PrivilegedExceptionAction<VisibilityLabelsResponse> action = new 
PrivilegedExceptionAction<VisibilityLabelsResponse>() {
+PrivilegedExceptionAction<VisibilityLabelsResponse> action = new 
PrivilegedExceptionAction<VisibilityLabelsResponse>() {

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelReplicationWithExpAsString

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11381//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11381//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11381//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11381//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11381//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11381//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11381//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11381//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11381//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11381//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11381//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11381//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11381//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11381//console

This message is automatically generated.

 [Visibility controller] Replicate the visibility of Cells as strings
 

 Key: HBASE-11639
 URL: https://issues.apache.org/jira/browse/HBASE-11639
 Project: HBase
  Issue Type: Improvement
  Components: Replication, security
Affects Versions: 0.98.4
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
  Labels: VisibilityLabels
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: HBASE-11639_v2.patch, HBASE-11639_v2.patch, 
 HBASE-11639_v3.patch


 This issue is aimed at persisting the visibility labels as strings in the WAL 
 rather than as label ordinals. This would help in replicating the labels to 
 the replication cluster as strings directly, and after HBASE-11553 it would 
 also help because the replication cluster could use a string-based visibility 
 label implementation.





[jira] [Commented] (HBASE-9157) ZKUtil.blockUntilAvailable loops forever with non-recoverable errors

2014-10-16 Thread Jeffrey Zhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174150#comment-14174150
 ] 

Jeffrey Zhong commented on HBASE-9157:
--

0.94 doesn't have this issue. 

 ZKUtil.blockUntilAvailable loops forever with non-recoverable errors
 

 Key: HBASE-9157
 URL: https://issues.apache.org/jira/browse/HBASE-9157
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
Priority: Minor
 Fix For: 0.98.8, 0.99.2

 Attachments: hbase-9157-v2.patch, hbase-9157.patch


 In one integration test, I observed a thread that keeps spinning, logging the 
 error "Unexpected exception handling blockUntilAvailable" due to 
 KeeperException.ConnectionLossException. Below is the related code:
 {code}
 while (!finished) {
   try {
     data = ZKUtil.getData(zkw, znode);
   } catch (KeeperException e) {
     LOG.warn("Unexpected exception handling blockUntilAvailable", e);
   }
   if (data == null && (System.currentTimeMillis() +
       HConstants.SOCKET_RETRY_WAIT_MS < endTime)) {
     Thread.sleep(HConstants.SOCKET_RETRY_WAIT_MS);
   } else {
     finished = true;
   }
 }
 {code}
 ConnectionLossException might be recoverable, but SessionExpiredException and 
 AuthFailed are non-recoverable errors, and on them the while loop can't break.
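The non-recoverable case can be handled by classifying the error before deciding to retry. The sketch below simulates the ZooKeeper failure codes with a plain enum so it is self-contained and runnable; it illustrates the loop-termination idea only, and is not HBase's actual patch (the real code would inspect KeeperException.code()).

```java
import java.util.Iterator;

// Sketch: a blockUntilAvailable-style retry loop that stops on fatal errors.
class RetryLoopSketch {
  // Stand-ins for ZooKeeper KeeperException codes.
  enum Code { CONNECTIONLOSS, OPERATIONTIMEOUT, SESSIONEXPIRED, AUTHFAILED }

  // Connection hiccups are worth retrying; an expired session or a failed
  // authentication will never succeed, so the loop must give up on them.
  static boolean isRecoverable(Code c) {
    return c == Code.CONNECTIONLOSS || c == Code.OPERATIONTIMEOUT;
  }

  // Each poll either succeeds (failure iterator exhausted) or hits the next
  // simulated failure. Returns the number of attempts made.
  static int blockUntilAvailable(Iterator<Code> failures, int maxAttempts) {
    int attempts = 0;
    while (attempts < maxAttempts) {
      attempts++;
      if (!failures.hasNext()) {
        return attempts;          // simulated success: data is available
      }
      Code c = failures.next();
      if (!isRecoverable(c)) {
        return attempts;          // fatal error: break instead of spinning
      }
      // recoverable error: loop again (the real code would sleep here)
    }
    return attempts;              // retry budget exhausted
  }
}
```

The key difference from the quoted loop is the early return on a non-recoverable code, so SessionExpiredException/AuthFailed no longer spin until the deadline.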





[jira] [Commented] (HBASE-12272) Generate Thrift code through maven

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174185#comment-14174185
 ] 

Hadoop QA commented on HBASE-12272:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12675320/HBASE-12272-2014-10-16-v4.patch
  against trunk revision .
  ATTACHMENT ID: 12675320

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+<argument>${basedir}/src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift</argument>
+<argument>${basedir}/src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift</argument>

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.mapreduce.TestImportExport
  org.apache.hadoop.hbase.util.TestProcessBasedCluster

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.camel.component.jpa.JpaWithNamedQueryTest.testProducerInsertsIntoDatabaseThenConsumerFiresMessageExchange(JpaWithNamedQueryTest.java:112)
at 
org.apache.camel.component.jpa.JpaWithNamedQueryTest.testProducerInsertsIntoDatabaseThenConsumerFiresMessageExchange(JpaWithNamedQueryTest.java:112)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11382//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11382//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11382//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11382//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11382//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11382//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11382//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11382//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11382//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11382//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11382//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11382//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11382//console

This message is automatically generated.

 Generate Thrift code through maven
 --

 Key: HBASE-12272
 URL: https://issues.apache.org/jira/browse/HBASE-12272
 Project: HBase
  Issue Type: Improvement
  Components: build, documentation, Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12272-2014-10-15-v1-PREVIEW.patch, 
 HBASE-12272-2014-10-16-v2.patch, HBASE-12272-2014-10-16-v3.patch, 
 HBASE-12272-2014-10-16-v4.patch







[jira] [Commented] (HBASE-12272) Generate Thrift code through maven

2014-10-16 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174196#comment-14174196
 ] 

Enis Soztutar commented on HBASE-12272:
---

This should be good. Similar to -Pcompile-protobuf. 
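For reference, a -Pcompile-thrift style profile could be wired through exec-maven-plugin roughly as below. This is a hypothetical sketch, not the attached patch; the profile id, phase, and output path are assumptions (the Hbase.thrift resource path is taken from the QA report above).

```xml
<profile>
  <id>compile-thrift</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <executions>
          <execution>
            <id>generate-thrift-sources</id>
            <phase>generate-sources</phase>
            <goals><goal>exec</goal></goals>
            <configuration>
              <executable>thrift</executable>
              <arguments>
                <argument>--gen</argument>
                <argument>java</argument>
                <argument>-out</argument>
                <argument>${basedir}/src/main/java</argument>
                <argument>${basedir}/src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift</argument>
              </arguments>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
```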

 Generate Thrift code through maven
 --

 Key: HBASE-12272
 URL: https://issues.apache.org/jira/browse/HBASE-12272
 Project: HBase
  Issue Type: Improvement
  Components: build, documentation, Thrift
Reporter: Niels Basjes
 Attachments: HBASE-12272-2014-10-15-v1-PREVIEW.patch, 
 HBASE-12272-2014-10-16-v2.patch, HBASE-12272-2014-10-16-v3.patch, 
 HBASE-12272-2014-10-16-v4.patch







[jira] [Commented] (HBASE-12236) Change namespace of HTraceConfiguration dependency in 0.98

2014-10-16 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174233#comment-14174233
 ] 

Masatake Iwasaki commented on HBASE-12236:
--

The problem is that hbase-0.98 is built against htrace-2.04, which is old. I 
think it is good to bump the htrace version from 2.04 to 3.0.4, as the 
attached patch does, though the bump is an incompatible change in the sense 
that users need to change the value of hbase.trace.spanreceiver.classes from 
org.cloudera.htrace.impl.* to org.htrace.impl.*.

htrace-zipkin-2.04 and htrace-zipkin-3.0.4 are compatible in the sense that
* htrace-2.04 and htrace-3.0.4 have the same data structure for a tracing span
* htrace-zipkin-2.04 and htrace-zipkin-3.0.4 use the same Zipkin client API

Even if the htrace version is bumped, htrace-hbase still does not work with 
hbase-0.98 because it depends on o.a.h.http.HttpServer2, which does not exist 
in hadoop-2.2.0, but that is another issue.
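Concretely, the incompatible rename means an existing hbase-site.xml entry would change roughly like this (the ZipkinSpanReceiver class name is an illustrative guess based on the comment, not verified against either release):

```xml
<!-- htrace 2.04 (before the bump) -->
<property>
  <name>hbase.trace.spanreceiver.classes</name>
  <value>org.cloudera.htrace.impl.ZipkinSpanReceiver</value>
</property>

<!-- htrace 3.0.4 (after the bump): same receiver, new namespace -->
<property>
  <name>hbase.trace.spanreceiver.classes</name>
  <value>org.htrace.impl.ZipkinSpanReceiver</value>
</property>
```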


 Change namespace of HTraceConfiguration dependency in 0.98
 --

 Key: HBASE-12236
 URL: https://issues.apache.org/jira/browse/HBASE-12236
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 12236-v1.txt


 As discussed in thread 'NoSuchMethodError using zipkin with hbase 0.98.5', 
 the HBaseSpanReceiver.config() method from the htrace-hbase module expects a 
 parameter of type org.htrace.HTraceConfiguration.
 However, org.apache.hadoop.hbase.trace.HBaseHTraceConfiguration in 0.98 
 extends org.cloudera.htrace.HTraceConfiguration, leading to the following 
 compilation error when building htrace-hbase against 0.98:
 {code}
 [ERROR]
 /home/hadoop/git/htrace/htrace-hbase/src/main/java/org/htrace/impl/HBaseSpanReceiver.java:[341,12]
 error: method configure in class HBaseSpanReceiver cannot be applied to
 given types;
 {code}
 Thanks to Abhishek Kumar who reported the above issue.





[jira] [Commented] (HBASE-12236) Change namespace of HTraceConfiguration dependency in 0.98

2014-10-16 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174240#comment-14174240
 ] 

Masatake Iwasaki commented on HBASE-12236:
--

I'm testing the patch and found that 
hbase-shell/src/main/ruby/shell/commands/trace.rb imports 
org.cloudera.htrace.*. This should be fixed too.

 Change namespace of HTraceConfiguration dependency in 0.98
 --

 Key: HBASE-12236
 URL: https://issues.apache.org/jira/browse/HBASE-12236
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 12236-v1.txt







[jira] [Commented] (HBASE-12241) The crash of regionServer when taking deadserver's replication queue breaks replication

2014-10-16 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174284#comment-14174284
 ] 

Enis Soztutar commented on HBASE-12241:
---

We have also had useMulti on by default for some time now. +1 for the branch-1 
change. It will help with stability and mean less configuration to worry about.

 The crash of regionServer when taking deadserver's replication queue breaks 
 replication
 ---

 Key: HBASE-12241
 URL: https://issues.apache.org/jira/browse/HBASE-12241
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Critical
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12241-trunk-v1.diff


 When a regionserver crashes, another regionserver will try to take over the 
 dead regionserver's replication hlog queue and help it finish the 
 replication. See NodeFailoverWorker in ReplicationSourceManager.
 Currently hbase.zookeeper.useMulti is false in the default configuration, so 
 the operation of taking over a replication queue is not atomic. The 
 ReplicationSourceManager first locks the replication node of the dead 
 regionserver, then copies the replication queue, and finally deletes the dead 
 regionserver's replication node. The lockOtherRS operation just creates a 
 persistent zk node named "lock" which prevents other regionservers from 
 taking over the replication queue.
 See:
 {code}
   public boolean lockOtherRS(String znode) {
     try {
       String parent = ZKUtil.joinZNode(this.rsZNode, znode);
       if (parent.equals(rsServerNameZnode)) {
         LOG.warn("Won't lock because this is us, we're dead!");
         return false;
       }
       String p = ZKUtil.joinZNode(parent, RS_LOCK_ZNODE);
       ZKUtil.createAndWatch(this.zookeeper, p,
           Bytes.toBytes(rsServerNameZnode));
     } catch (KeeperException e) {
       ...
       return false;
     }
     return true;
   }
 {code}
 But if a regionserver crashes after creating this lock zk node and before 
 copying the replication queue to its own replication queue, the lock zk node 
 will be left forever and no other regionserver can take over the replication 
 queue.
 In our production cluster, we encountered this problem. We found the 
 replication queue was still there, no regionserver took it over, and a lock 
 zk node was left behind.
 {quote}
 hbase.32561.log:2014-09-24,14:09:28,790 INFO 
 org.apache.hadoop.hbase.replication.ReplicationZookeeper: Won't transfer the 
 queue, another RS took care of it because of: KeeperErrorCode = NoNode for 
 /hbase/hhsrv-micloud/replication/rs/hh-hadoop-srv-st09.bj,12610,1410937824255/lock
 hbase.32561.log:2014-09-24,14:14:45,148 INFO 
 org.apache.hadoop.hbase.replication.ReplicationZookeeper: Won't transfer the 
 queue, another RS took care of it because of: KeeperErrorCode = NoNode for 
 /hbase/hhsrv-micloud/replication/rs/hh-hadoop-srv-st10.bj,12600,1410937795685/lock
 {quote}
 A quick solution is for the lock operation to create an ephemeral lock 
 zookeeper node instead, so that when the lock node is deleted, other 
 regionservers are notified to check whether any replication queues are left.
 Suggestions are welcome! Thanks.
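The ephemeral-node proposal can be illustrated with a tiny in-memory stand-in for ZooKeeper: an ephemeral node is deleted automatically when its owning session dies, so a crashed regionserver can no longer orphan the lock. This is a self-contained sketch of the idea only; the class and znode path names are illustrative, not the real ReplicationSourceManager code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: ephemeral lock nodes, modeled as a Map from znode path to the
// session that created it.
class EphemeralLockSketch {
  private final Map<String, String> znodes = new HashMap<>();

  // Create the lock node if absent, recording the owning session
  // (like ZooKeeper's CreateMode.EPHEMERAL plus a NodeExists check).
  boolean lock(String path, String sessionId) {
    return znodes.putIfAbsent(path, sessionId) == null;
  }

  // Session expiry (e.g. the regionserver crashed): ZooKeeper deletes every
  // ephemeral node owned by that session, releasing the lock automatically.
  void expireSession(String sessionId) {
    znodes.values().removeIf(owner -> owner.equals(sessionId));
  }

  boolean isLocked(String path) {
    return znodes.containsKey(path);
  }
}
```

With a persistent node, expireSession would be a no-op and a second regionserver could never acquire the orphaned lock; with the ephemeral node, its lock attempt succeeds once the dead owner's session expires.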





[jira] [Updated] (HBASE-12241) The crash of regionServer when taking deadserver's replication queue breaks replication

2014-10-16 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-12241:
--
Hadoop Flags: Incompatible change

 The crash of regionServer when taking deadserver's replication queue breaks 
 replication
 ---

 Key: HBASE-12241
 URL: https://issues.apache.org/jira/browse/HBASE-12241
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Critical
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12241-trunk-v1.diff







[jira] [Commented] (HBASE-12241) The crash of regionServer when taking deadserver's replication queue breaks replication

2014-10-16 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174287#comment-14174287
 ] 

Enis Soztutar commented on HBASE-12241:
---

This should come with doc changes about the minimum ZooKeeper version (3.4), 
though.

 The crash of regionServer when taking deadserver's replication queue breaks 
 replication
 ---

 Key: HBASE-12241
 URL: https://issues.apache.org/jira/browse/HBASE-12241
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Critical
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12241-trunk-v1.diff







[jira] [Updated] (HBASE-12241) The crash of regionServer when taking deadserver's replication queue breaks replication

2014-10-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12241:
---
  Resolution: Fixed
Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)
  Status: Resolved  (was: Patch Available)

Integrated to branch-1 and master

Thanks for the patch, Shaohui.

 The crash of regionServer when taking deadserver's replication queue breaks 
 replication
 ---

 Key: HBASE-12241
 URL: https://issues.apache.org/jira/browse/HBASE-12241
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Critical
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12241-trunk-v1.diff


 When a regionserver crashes, another regionserver will try to take over the 
 replication hlog queue and help the dead regionserver finish the 
 replication. See NodeFailoverWorker in ReplicationSourceManager.
 Currently hbase.zookeeper.useMulti is false in the default configuration, 
 and the operation of taking over a replication queue is not atomic. The 
 ReplicationSourceManager first locks the replication node of the dead 
 regionserver, then copies the replication queue, and deletes the replication 
 node of the dead regionserver at last. The lockOtherRS operation just 
 creates a persistent zk node named lock, which prevents other regionservers 
 from taking over the replication queue.
 See:
 {code}
  public boolean lockOtherRS(String znode) {
    try {
      String parent = ZKUtil.joinZNode(this.rsZNode, znode);
      if (parent.equals(rsServerNameZnode)) {
        LOG.warn("Won't lock because this is us, we're dead!");
        return false;
      }
      String p = ZKUtil.joinZNode(parent, RS_LOCK_ZNODE);
      ZKUtil.createAndWatch(this.zookeeper, p, Bytes.toBytes(rsServerNameZnode));
    } catch (KeeperException e) {
      ...
      return false;
    }
    return true;
  }
 {code}
 But if a regionserver crashes after creating this lock zk node and before 
 copying the replication queue to its own queue, the lock zk node is left 
 forever and no other regionserver can take over the replication queue.
 In our production cluster, we encountered this problem: the replication 
 queue was still there, no regionserver took it over, and a lock zk node was 
 left behind.
 {quote}
 hbase.32561.log:2014-09-24,14:09:28,790 INFO 
 org.apache.hadoop.hbase.replication.ReplicationZookeeper: Won't transfer the 
 queue, another RS took care of it because of: KeeperErrorCode = NoNode for 
 /hbase/hhsrv-micloud/replication/rs/hh-hadoop-srv-st09.bj,12610,1410937824255/lock
 hbase.32561.log:2014-09-24,14:14:45,148 INFO 
 org.apache.hadoop.hbase.replication.ReplicationZookeeper: Won't transfer the 
 queue, another RS took care of it because of: KeeperErrorCode = NoNode for 
 /hbase/hhsrv-micloud/replication/rs/hh-hadoop-srv-st10.bj,12600,1410937795685/lock
 {quote}
 A quick solution is for the lock operation to create an ephemeral zookeeper 
 lock node instead; when the lock node is deleted, other regionservers are 
 notified to check whether any replication queues are left.
 Suggestions are welcomed! Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HBASE-12241) The crash of regionServer when taking deadserver's replication queue breaks replication

2014-10-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reopened HBASE-12241:


[~lshmouse]:
Can you attach a doc patch?

 The crash of regionServer when taking deadserver's replication queue breaks 
 replication
 ---

 Key: HBASE-12241
 URL: https://issues.apache.org/jira/browse/HBASE-12241
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Critical
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12241-trunk-v1.diff





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10856) Prep for 1.0

2014-10-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174368#comment-14174368
 ] 

stack commented on HBASE-10856:
---

DOCUMENTATION: New minimum zk required and useMulti on by default.  See 
HBASE-12241

 Prep for 1.0
 

 Key: HBASE-10856
 URL: https://issues.apache.org/jira/browse/HBASE-10856
 Project: HBase
  Issue Type: Umbrella
Reporter: stack
 Fix For: 0.99.2


 Tasks for 1.0 copied here from our '1.0.0' mailing list discussion.  Idea is 
 to file subtasks off this one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12241) The crash of regionServer when taking deadserver's replication queue breaks replication

2014-10-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174362#comment-14174362
 ] 

stack commented on HBASE-12241:
---

[~liushaohui] Don't do a doc patch. There is no place for you to insert such a 
thing currently. We'll do it as part of the upgrade notes for 1.0 (I added a 
TODO on HBASE-10856). Thanks.

 The crash of regionServer when taking deadserver's replication queue breaks 
 replication
 ---

 Key: HBASE-12241
 URL: https://issues.apache.org/jira/browse/HBASE-12241
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Critical
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12241-trunk-v1.diff





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12241) The crash of regionServer when taking deadserver's replication queue breaks replication

2014-10-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12241:
--
Release Note: This fix includes enabling the useMulti flag by default. 
multi is a zookeeper method only available in later versions of zookeeper. 
This change means HBase 1.0 requires a zookeeper that is at least version 
3.4. See HBASE-6775 for background.

 The crash of regionServer when taking deadserver's replication queue breaks 
 replication
 ---

 Key: HBASE-12241
 URL: https://issues.apache.org/jira/browse/HBASE-12241
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Critical
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12241-trunk-v1.diff





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-12241) The crash of regionServer when taking deadserver's replication queue breaks replication

2014-10-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HBASE-12241.

Resolution: Fixed

 The crash of regionServer when taking deadserver's replication queue breaks 
 replication
 ---

 Key: HBASE-12241
 URL: https://issues.apache.org/jira/browse/HBASE-12241
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Critical
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12241-trunk-v1.diff





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10856) Prep for 1.0

2014-10-16 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174383#comment-14174383
 ] 

Enis Soztutar commented on HBASE-10856:
---

bq. DOCUMENTATION: New minimum zk required and useMulti on by default. See 
HBASE-12241
We can add a section for zookeeper similar to hadoop versions and jdk versions 
I guess. 

 Prep for 1.0
 

 Key: HBASE-10856
 URL: https://issues.apache.org/jira/browse/HBASE-10856
 Project: HBase
  Issue Type: Umbrella
Reporter: stack
 Fix For: 0.99.2


 Tasks for 1.0 copied here from our '1.0.0' mailing list discussion.  Idea is 
 to file subtasks off this one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12277) Refactor bulkLoad methods in AccessController to its own interface

2014-10-16 Thread Madhan Neethiraj (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Madhan Neethiraj updated HBASE-12277:
-
Status: Open  (was: Patch Available)

 Refactor bulkLoad methods in AccessController to its own interface
 --

 Key: HBASE-12277
 URL: https://issues.apache.org/jira/browse/HBASE-12277
 Project: HBase
  Issue Type: Bug
Reporter: Madhan Neethiraj
 Attachments: 
 0001-HBASE-12277-Refactored-bulk-load-methods-from-Access.patch, 
 0002-HBASE-12277-License-text-added-to-the-newly-created-.patch, 
 HBASE-12277.patch


 SecureBulkLoadEndPoint references a couple of methods, prePrepareBulkLoad() 
 and preCleanupBulkLoad(), implemented in AccessController, i.e. there is 
 direct coupling between the AccessController and SecureBulkLoadEndPoint 
 classes.
 SecureBulkLoadEndPoint assumes the presence of AccessController in a secure 
 cluster. If HBase is configured with another coprocessor for access control, 
 SecureBulkLoadEndPoint fails with an NPE.
 To remove this direct coupling, the bulk-load related methods in 
 AccessController should be refactored into an interface, and 
 AccessController should implement this interface. SecureBulkLoadEndPoint 
 should then look for coprocessors that implement this interface, instead of 
 directly looking for AccessController.
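The refactoring described above can be sketched in simplified form (hypothetical names throughout; this is not the actual HBase coprocessor API): the hooks move to their own interface, and the endpoint discovers implementors by interface rather than hard-coding AccessController.

```java
import java.util.ArrayList;
import java.util.List;

// Extracted interface for the bulk-load hooks (hypothetical sketch).
interface BulkLoadObserver {
  void prePrepareBulkLoad(String table);
  void preCleanupBulkLoad(String table);
}

// One possible implementor; any access-control coprocessor could be another.
class SimpleAccessController implements BulkLoadObserver {
  public void prePrepareBulkLoad(String table) { /* ACL checks would go here */ }
  public void preCleanupBulkLoad(String table) { /* ACL cleanup would go here */ }
}

class SecureBulkLoadEndpoint {
  private final List<Object> coprocessors;

  SecureBulkLoadEndpoint(List<Object> coprocessors) {
    this.coprocessors = coprocessors;
  }

  // Invokes every coprocessor that implements BulkLoadObserver; if none is
  // present this is simply a no-op instead of an NPE. Returns the count of
  // observers invoked.
  int prepareBulkLoad(String table) {
    int invoked = 0;
    for (Object cp : coprocessors) {
      if (cp instanceof BulkLoadObserver) {
        ((BulkLoadObserver) cp).prePrepareBulkLoad(table);
        invoked++;
      }
    }
    return invoked;
  }

  public static void main(String[] args) {
    List<Object> cps = new ArrayList<>();
    cps.add(new SimpleAccessController());
    cps.add(new Object()); // an unrelated coprocessor is simply skipped
    if (new SecureBulkLoadEndpoint(cps).prepareBulkLoad("t1") != 1) {
      throw new AssertionError();
    }
    System.out.println("ok");
  }
}
```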



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12277) Refactor bulkLoad methods in AccessController to its own interface

2014-10-16 Thread Madhan Neethiraj (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Madhan Neethiraj updated HBASE-12277:
-
Attachment: HBASE-12277.patch

 Refactor bulkLoad methods in AccessController to its own interface
 --

 Key: HBASE-12277
 URL: https://issues.apache.org/jira/browse/HBASE-12277
 Project: HBase
  Issue Type: Bug
Reporter: Madhan Neethiraj
 Attachments: 
 0001-HBASE-12277-Refactored-bulk-load-methods-from-Access.patch, 
 0002-HBASE-12277-License-text-added-to-the-newly-created-.patch, 
 HBASE-12277.patch





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12277) Refactor bulkLoad methods in AccessController to its own interface

2014-10-16 Thread Madhan Neethiraj (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Madhan Neethiraj updated HBASE-12277:
-
Status: Patch Available  (was: Open)

Submitting patch again, as the earlier one failed to apply in Hadoop QA.

 Refactor bulkLoad methods in AccessController to its own interface
 --

 Key: HBASE-12277
 URL: https://issues.apache.org/jira/browse/HBASE-12277
 Project: HBase
  Issue Type: Bug
Reporter: Madhan Neethiraj
 Attachments: 
 0001-HBASE-12277-Refactored-bulk-load-methods-from-Access.patch, 
 0002-HBASE-12277-License-text-added-to-the-newly-created-.patch, 
 HBASE-12277.patch





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12075) Preemptive Fast Fail

2014-10-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174388#comment-14174388
 ] 

stack commented on HBASE-12075:
---

bq.  It was very subtle. 

Sorry about that, [~manukranthk]. We should deprecate it and do a new mocked 
connection using the new Connection interface...

Looking at the patch:

Why 'new' in getNewRpcRetryingCallerFactory?  Why not just 
getRpcRetryingCallerFactory?

ClusterConnection is internal?  Not exposed?

We owe you a more thorough review (I am out of time at mo... )  Will be back.  
This is a load of new code.  Others interested in client interaction should 
take a look too.



 Preemptive Fast Fail
 

 Key: HBASE-12075
 URL: https://issues.apache.org/jira/browse/HBASE-12075
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Affects Versions: 0.99.0, 2.0.0, 0.98.6.1
Reporter: Manukranth Kolloju
Assignee: Manukranth Kolloju
 Attachments: 0001-Add-a-test-case-for-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch


 In multi-threaded clients, we use a feature developed on the 0.89-fb branch 
 called Preemptive Fast Fail. It allows client threads that would potentially 
 fail to fail fast. The idea behind this feature is that, among the hundreds 
 of client threads, we allow one thread to try to establish a connection with 
 the regionserver, and if that succeeds, we mark it as a live node again. 
 Meanwhile, other threads trying to establish a connection to the same server 
 would otherwise just run into timeouts, which is effectively unfruitful. In 
 those cases we can return appropriate exceptions to those clients instead of 
 letting them retry.
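The core idea (one prober per failing server, everyone else fails fast) can be sketched as follows. This is a simplified illustration with hypothetical names, not the patch's actual implementation:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

// Toy sketch of preemptive fast fail: once a server is marked failing,
// only one thread at a time may probe it; all other threads fail fast
// instead of blocking on connection timeouts.
class FastFailTracker {
  private final ConcurrentHashMap<String, AtomicBoolean> probing = new ConcurrentHashMap<>();
  private final ConcurrentHashMap<String, Boolean> failing = new ConcurrentHashMap<>();

  void markFailing(String server) {
    failing.put(server, true);
  }

  // Returns true if this thread should attempt the connection,
  // false if it should fail fast with an exception to the client.
  boolean shouldAttempt(String server) {
    if (!failing.getOrDefault(server, false)) {
      return true; // server is healthy, go ahead
    }
    AtomicBoolean probe = probing.computeIfAbsent(server, s -> new AtomicBoolean(false));
    return probe.compareAndSet(false, true); // only one prober wins
  }

  // The probe succeeded: mark the node live again for everyone.
  void probeSucceeded(String server) {
    failing.remove(server);
    probing.remove(server);
  }
}
```

A single winning `compareAndSet` is what keeps the probe traffic to one thread; the losers get an immediate `false` instead of waiting out a timeout.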



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12241) The crash of regionServer when taking deadserver's replication queue breaks replication

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174389#comment-14174389
 ] 

Hudson commented on HBASE-12241:


FAILURE: Integrated in HBase-1.0 #323 (See 
[https://builds.apache.org/job/HBase-1.0/323/])
HBASE-12241 The crash of regionServer when taking deadserver's replication 
queue breaks replication (Shaohui) (tedyu: rev 
5b3f6fb1a70133918f6b982b538b4a910aeb5633)
* hbase-common/src/main/resources/hbase-default.xml


 The crash of regionServer when taking deadserver's replication queue breaks 
 replication
 ---

 Key: HBASE-12241
 URL: https://issues.apache.org/jira/browse/HBASE-12241
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Critical
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12241-trunk-v1.diff


 When a regionserver crash, another regionserver will try to take over the 
 replication hlog queue and help the dead regionserver finish the 
 replication. See NodeFailoverWorker in ReplicationSourceManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12075) Preemptive Fast Fail

2014-10-16 Thread Manukranth Kolloju (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174394#comment-14174394
 ] 

Manukranth Kolloju commented on HBASE-12075:


'New' indicates that this is a builder method that provides new objects of 
type RpcRetryingCallerFactory.
ClusterConnection is internal, yes, but it's still an interface. I am not 
sure what your question specifically is.

 Preemptive Fast Fail
 

 Key: HBASE-12075
 URL: https://issues.apache.org/jira/browse/HBASE-12075
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Affects Versions: 0.99.0, 2.0.0, 0.98.6.1
Reporter: Manukranth Kolloju
Assignee: Manukranth Kolloju
 Attachments: 0001-Add-a-test-case-for-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12148) Remove TimeRangeTracker as point of contention when many threads writing a Store

2014-10-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12148:
--
Attachment: 0001-In-AtomicUtils-change-updateMin-and-updateMax-to-ret.patch

 Remove TimeRangeTracker as point of contention when many threads writing a 
 Store
 

 Key: HBASE-12148
 URL: https://issues.apache.org/jira/browse/HBASE-12148
 Project: HBase
  Issue Type: Sub-task
  Components: Performance
Affects Versions: 2.0.0, 0.99.1
Reporter: stack
Assignee: stack
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 
 0001-In-AtomicUtils-change-updateMin-and-updateMax-to-ret.patch, 
 12148.addendum.txt, 12148.txt, 12148.txt, 12148v2.txt, 12148v2.txt, Screen 
 Shot 2014-10-01 at 3.39.46 PM.png, Screen Shot 2014-10-01 at 3.41.07 PM.png






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12148) Remove TimeRangeTracker as point of contention when many threads writing a Store

2014-10-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174426#comment-14174426
 ] 

stack commented on HBASE-12148:
---

New patch.

Removes the synchronization in TimeRangeTracker and uses AtomicLongs instead.
Uses the AtomicUtils updateMin and updateMax to do the updating.
Behavior changes slightly in that min and max are no longer tied together
by synchronization, which gives better performance.

Will be back later with data on running this patch in context to ensure it 
removes TRT as a point of contention.
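
The lock-free update described above can be sketched with a standard compare-and-set retry loop. This is an assumed shape of the AtomicUtils helpers, not the exact patch source: the loop retries until either our value sticks or another thread has already stored a smaller (resp. larger) one.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of CAS-based min/max tracking without a shared lock.
public class AtomicMinMax {
  static void updateMin(AtomicLong min, long candidate) {
    long cur;
    while (candidate < (cur = min.get())) {
      if (min.compareAndSet(cur, candidate)) return; // our value won
    } // else: someone stored a smaller value, nothing to do
  }

  static void updateMax(AtomicLong max, long candidate) {
    long cur;
    while (candidate > (cur = max.get())) {
      if (max.compareAndSet(cur, candidate)) return;
    }
  }

  public static void main(String[] args) {
    AtomicLong min = new AtomicLong(Long.MAX_VALUE);
    AtomicLong max = new AtomicLong(Long.MIN_VALUE);
    for (long ts : new long[] {5, 3, 9, 7}) {
      updateMin(min, ts);
      updateMax(max, ts);
    }
    System.out.println("min=" + min.get() + " max=" + max.get()); // min=3 max=9
  }
}
```

Note the slight behavior change mentioned in the comment: min and max can each be momentarily stale relative to the other, since no lock ties their updates together.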


 Remove TimeRangeTracker as point of contention when many threads writing a 
 Store
 

 Key: HBASE-12148
 URL: https://issues.apache.org/jira/browse/HBASE-12148
 Project: HBase
  Issue Type: Sub-task
  Components: Performance
Affects Versions: 2.0.0, 0.99.1
Reporter: stack
Assignee: stack
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 
 0001-In-AtomicUtils-change-updateMin-and-updateMax-to-ret.patch, 
 12148.addendum.txt, 12148.txt, 12148.txt, 12148v2.txt, 12148v2.txt, Screen 
 Shot 2014-10-01 at 3.39.46 PM.png, Screen Shot 2014-10-01 at 3.41.07 PM.png






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12192) Remove EventHandlerListener

2014-10-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12192:
--
Assignee: ryan rawson
  Status: Patch Available  (was: Open)

Submitting... patch lgtm.

 Remove EventHandlerListener
 ---

 Key: HBASE-12192
 URL: https://issues.apache.org/jira/browse/HBASE-12192
 Project: HBase
  Issue Type: Bug
  Components: master
Reporter: ryan rawson
Assignee: ryan rawson
 Attachments: HBASE-12192.txt


 EventHandlerListener isn't actually being used by internal HBase code right 
 now.  No one actually calls 'ExecutorService.registerListener()' according to 
 IntelliJ.
 It might be possible that some coprocessors use it. Perhaps people can 
 comment if they find this functionality useful or not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12241) The crash of regionServer when taking deadserver's replication queue breaks replication

2014-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174438#comment-14174438
 ] 

Hudson commented on HBASE-12241:


SUCCESS: Integrated in HBase-TRUNK #5668 (See 
[https://builds.apache.org/job/HBase-TRUNK/5668/])
HBASE-12241 The crash of regionServer when taking deadserver's replication 
queue breaks replication (Shaohui) (tedyu: rev 
7c87f9c6b58d628d453d3a74ddda18106543cdbf)
* hbase-common/src/main/resources/hbase-default.xml


 The crash of regionServer when taking deadserver's replication queue breaks 
 replication
 ---

 Key: HBASE-12241
 URL: https://issues.apache.org/jira/browse/HBASE-12241
 Project: HBase
  Issue Type: Bug
  Components: Replication
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Critical
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12241-trunk-v1.diff


 When a regionserver crashes, another regionserver will try to take over the 
 replication hlogs queue and help the dead regionserver finish the 
 replication. See NodeFailoverWorker in ReplicationSourceManager.
 Currently hbase.zookeeper.useMulti is false in the default configuration, so the 
 operation of taking over a replication queue is not atomic. The 
 ReplicationSourceManager first locks the replication node of the dead 
 regionserver, then copies the replication queue, and deletes the replication node 
 of the dead regionserver last. The lockOtherRS operation just creates a 
 persistent zk node named lock which prevents other regionservers from taking over 
 the replication queue.
 See:
 {code}
   public boolean lockOtherRS(String znode) {
     try {
       String parent = ZKUtil.joinZNode(this.rsZNode, znode);
       if (parent.equals(rsServerNameZnode)) {
         LOG.warn("Won't lock because this is us, we're dead!");
         return false;
       }
       String p = ZKUtil.joinZNode(parent, RS_LOCK_ZNODE);
       ZKUtil.createAndWatch(this.zookeeper, p, Bytes.toBytes(rsServerNameZnode));
     } catch (KeeperException e) {
       ...
       return false;
     }
     return true;
   }
 {code}
 But if a regionserver crashes after creating this lock zk node and before 
 copying the replication queue to its own replication queue, the lock zk node 
 will be left forever and no other regionserver can take over the replication 
 queue.
 We encountered this problem in our production cluster: the 
 replication queue was there, no regionserver took over it, and a stale lock zk 
 node was left there.
 {quote}
 hbase.32561.log:2014-09-24,14:09:28,790 INFO 
 org.apache.hadoop.hbase.replication.ReplicationZookeeper: Won't transfer the 
 queue, another RS took care of it because of: KeeperErrorCode = NoNode for 
 /hbase/hhsrv-micloud/replication/rs/hh-hadoop-srv-st09.bj,12610,1410937824255/lock
 hbase.32561.log:2014-09-24,14:14:45,148 INFO 
 org.apache.hadoop.hbase.replication.ReplicationZookeeper: Won't transfer the 
 queue, another RS took care of it because of: KeeperErrorCode = NoNode for 
 /hbase/hhsrv-micloud/replication/rs/hh-hadoop-srv-st10.bj,12600,1410937795685/lock
 {quote}
 A quick solution is for the lock operation to create an ephemeral lock 
 zookeeper node instead; when the lock node is deleted, other regionservers will 
 be notified to check whether a replication queue was left behind.
 Suggestions are welcomed! Thanks.
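
A toy in-memory model of why the proposed fix helps (this is not ZooKeeper itself; all names here are illustrative): a persistent lock node survives its creator's crash forever, while an ephemeral node is removed when the creator's session dies, so another regionserver can take over the queue.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of persistent vs. ephemeral lock znodes.
public class LockNodeSketch {
  final Set<String> znodes = new HashSet<>();
  final Map<String, Set<String>> ephemeralBySession = new HashMap<>();

  /** Mimics creating the lock node; false if the lock is already held. */
  boolean createLock(String path, String session, boolean ephemeral) {
    if (!znodes.add(path)) return false;
    if (ephemeral) {
      ephemeralBySession.computeIfAbsent(session, s -> new HashSet<>()).add(path);
    }
    return true;
  }

  /** Mimics the creator's session expiring (regionserver crash). */
  void sessionDied(String session) {
    Set<String> owned = ephemeralBySession.remove(session);
    if (owned != null) znodes.removeAll(owned); // ZK deletes ephemerals
  }

  public static void main(String[] args) {
    LockNodeSketch zk = new LockNodeSketch();
    zk.createLock("/replication/rs/dead/lock", "rs-a", false); // persistent, as today
    zk.sessionDied("rs-a");                                    // rs-a crashes mid-takeover
    System.out.println(zk.createLock("/replication/rs/dead/lock", "rs-b", false)); // false: stuck

    LockNodeSketch zk2 = new LockNodeSketch();
    zk2.createLock("/replication/rs/dead/lock", "rs-a", true); // ephemeral, as proposed
    zk2.sessionDied("rs-a");
    System.out.println(zk2.createLock("/replication/rs/dead/lock", "rs-b", true)); // true: takeover
  }
}
```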



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12250) Adding an endpoint for updating the regionserver config

2014-10-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174439#comment-14174439
 ] 

stack commented on HBASE-12250:
---

+1 on patch. Very nice.  Retrying hadoopqa to be sure.

 Adding an endpoint for updating the regionserver config
 ---

 Key: HBASE-12250
 URL: https://issues.apache.org/jira/browse/HBASE-12250
 Project: HBase
  Issue Type: Task
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Manukranth Kolloju
Assignee: Manukranth Kolloju
Priority: Minor
 Fix For: 2.0.0

 Attachments: 
 0001-Add-admin-endpoint-for-updating-the-configuration-on.patch, 
 0001-Add-admin-endpoint-for-updating-the-configuration-on.patch, 
 0001-HBASE-12250-Add-admin-endpoint-for-updating-the-conf.patch

   Original Estimate: 48h
  Remaining Estimate: 48h

 This is a follow up Jira that adds the end point for updating the 
 configuration on the regionserver. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12250) Adding an endpoint for updating the regionserver config

2014-10-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12250:
--
Attachment: 0001-HBASE-12250-Add-admin-endpoint-for-updating-the-conf.patch

Retry.

 Adding an endpoint for updating the regionserver config
 ---

 Key: HBASE-12250
 URL: https://issues.apache.org/jira/browse/HBASE-12250
 Project: HBase
  Issue Type: Task
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Manukranth Kolloju
Assignee: Manukranth Kolloju
Priority: Minor
 Fix For: 2.0.0

 Attachments: 
 0001-Add-admin-endpoint-for-updating-the-configuration-on.patch, 
 0001-Add-admin-endpoint-for-updating-the-configuration-on.patch, 
 0001-HBASE-12250-Add-admin-endpoint-for-updating-the-conf.patch, 
 0001-HBASE-12250-Add-admin-endpoint-for-updating-the-conf.patch

   Original Estimate: 48h
  Remaining Estimate: 48h

 This is a follow up Jira that adds the end point for updating the 
 configuration on the regionserver. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12229) NullPointerException in SnapshotTestingUtils

2014-10-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12229:
--
Attachment: HBASE-12229_master_v1.patch

Retry

 NullPointerException in SnapshotTestingUtils
 

 Key: HBASE-12229
 URL: https://issues.apache.org/jira/browse/HBASE-12229
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.98.7
Reporter: Dima Spivak
Assignee: Dima Spivak
Priority: Minor
 Attachments: HBASE-12229.patch, HBASE-12229_master_v1.patch, 
 HBASE-12229_master_v1.patch, HBASE-12229_v1.patch, HBASE-12229_v2.patch, 
 HBASE-12229_v2.patch


 I tracked down occasional flakiness in TestRestoreSnapshotFromClient to a 
 potential NPE in SnapshotTestingUtils#waitForTableToBeOnline. In short, some 
 tests in TestRestoreSnapshot... create a table and then invoke 
 SnapshotTestingUtils#waitForTableToBeOnline, but this method assumes that 
 regions have been assigned by the time it's invoked (which is not always the 
 case).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12250) Adding an endpoint for updating the regionserver config

2014-10-16 Thread Manukranth Kolloju (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174484#comment-14174484
 ] 

Manukranth Kolloju commented on HBASE-12250:


Seems like the lines that are more than 100 characters are from the 
auto-generated files. 

 Adding an endpoint for updating the regionserver config
 ---

 Key: HBASE-12250
 URL: https://issues.apache.org/jira/browse/HBASE-12250
 Project: HBase
  Issue Type: Task
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Manukranth Kolloju
Assignee: Manukranth Kolloju
Priority: Minor
 Fix For: 2.0.0

 Attachments: 
 0001-Add-admin-endpoint-for-updating-the-configuration-on.patch, 
 0001-Add-admin-endpoint-for-updating-the-configuration-on.patch, 
 0001-HBASE-12250-Add-admin-endpoint-for-updating-the-conf.patch, 
 0001-HBASE-12250-Add-admin-endpoint-for-updating-the-conf.patch

   Original Estimate: 48h
  Remaining Estimate: 48h

 This is a follow up Jira that adds the end point for updating the 
 configuration on the regionserver. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12277) Refactor bulkLoad methods in AccessController to its own interface

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174527#comment-14174527
 ] 

Hadoop QA commented on HBASE-12277:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675374/HBASE-12277.patch
  against trunk revision .
  ATTACHMENT ID: 12675374

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 3 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+PrepareBulkLoadRequest
request) throws IOException;
+CleanupBulkLoadRequest
request) throws IOException;
+ PrepareBulkLoadRequest
request) throws IOException {
+ CleanupBulkLoadRequest
request) throws IOException {
+ObserverContext&lt;RegionCoprocessorEnvironment&gt; ctx = new 
ObserverContext&lt;RegionCoprocessorEnvironment&gt;();
+ObserverContext&lt;RegionCoprocessorEnvironment&gt; ctx = new 
ObserverContext&lt;RegionCoprocessorEnvironment&gt;();
+List&lt;BulkLoadObserver&gt; coprocessorList = 
this.env.getRegion().getCoprocessorHost().findCoprocessors(BulkLoadObserver.class);
+CoprocessorHost masterCpHost = 
TEST_UTIL.getHBaseCluster().getMaster().getMasterCoprocessorHost();
+assertEquals(masterObservers.get(0).getClass().getSimpleName(), 
masterCoprocessor.getSimpleName());
+for (HRegion region : 
TEST_UTIL.getHBaseCluster().getRegionServer(0).getOnlineRegionsLocalContext()) {

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.coprocessor.TestClassLoading

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11383//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11383//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11383//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11383//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11383//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11383//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11383//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11383//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11383//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11383//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11383//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11383//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11383//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11383//console

This message is automatically generated.

 Refactor bulkLoad methods in AccessController to its own interface
 --

 Key: HBASE-12277
 URL: https://issues.apache.org/jira/browse/HBASE-12277
 Project: HBase
  Issue Type: Bug
Reporter: Madhan Neethiraj
 Attachments: 
 

[jira] [Updated] (HBASE-12229) NullPointerException in SnapshotTestingUtils

2014-10-16 Thread Dima Spivak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dima Spivak updated HBASE-12229:

Attachment: HBASE-12229_master_v2.patch

Legit test failure caused by me being dumb. :) Let's try again...

 NullPointerException in SnapshotTestingUtils
 

 Key: HBASE-12229
 URL: https://issues.apache.org/jira/browse/HBASE-12229
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.98.7
Reporter: Dima Spivak
Assignee: Dima Spivak
Priority: Minor
 Attachments: HBASE-12229.patch, HBASE-12229_master_v1.patch, 
 HBASE-12229_master_v1.patch, HBASE-12229_master_v2.patch, 
 HBASE-12229_v1.patch, HBASE-12229_v2.patch, HBASE-12229_v2.patch


 I tracked down occasional flakiness in TestRestoreSnapshotFromClient to a 
 potential NPE in SnapshotTestingUtils#waitForTableToBeOnline. In short, some 
 tests in TestRestoreSnapshot... create a table and then invoke 
 SnapshotTestingUtils#waitForTableToBeOnline, but this method assumes that 
 regions have been assigned by the time it's invoked (which is not always the 
 case).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12192) Remove EventHandlerListener

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174561#comment-14174561
 ] 

Hadoop QA commented on HBASE-12192:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12675156/HBASE-12192.txt
  against trunk revision .
  ATTACHMENT ID: 12675156

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11384//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11384//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11384//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11384//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11384//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11384//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11384//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11384//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11384//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11384//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11384//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11384//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11384//console

This message is automatically generated.

 Remove EventHandlerListener
 ---

 Key: HBASE-12192
 URL: https://issues.apache.org/jira/browse/HBASE-12192
 Project: HBase
  Issue Type: Bug
  Components: master
Reporter: ryan rawson
Assignee: ryan rawson
 Attachments: HBASE-12192.txt


 EventHandlerListener isn't actually being used by internal HBase code right 
 now.  No one actually calls 'ExecutorService.registerListener()' according to 
 IntelliJ.
 It might be possible that some coprocessors use it. Perhaps people can 
 comment if they find this functionality useful or not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12229) NullPointerException in SnapshotTestingUtils

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174566#comment-14174566
 ] 

Hadoop QA commented on HBASE-12229:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12675381/HBASE-12229_master_v1.patch
  against trunk revision .
  ATTACHMENT ID: 12675381

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas
  
org.apache.hadoop.hbase.client.TestCloneSnapshotFromClientWithRegionReplicas

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11385//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11385//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11385//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11385//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11385//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11385//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11385//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11385//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11385//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11385//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11385//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11385//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11385//console

This message is automatically generated.

 NullPointerException in SnapshotTestingUtils
 

 Key: HBASE-12229
 URL: https://issues.apache.org/jira/browse/HBASE-12229
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.98.7
Reporter: Dima Spivak
Assignee: Dima Spivak
Priority: Minor
 Attachments: HBASE-12229.patch, HBASE-12229_master_v1.patch, 
 HBASE-12229_master_v1.patch, HBASE-12229_master_v2.patch, 
 HBASE-12229_v1.patch, HBASE-12229_v2.patch, HBASE-12229_v2.patch


 I tracked down occasional flakiness in TestRestoreSnapshotFromClient to a 
 potential NPE in SnapshotTestingUtils#waitForTableToBeOnline. In short, some 
 tests in TestRestoreSnapshot... create a table and then invoke 
 SnapshotTestingUtils#waitForTableToBeOnline, but this method assumes that 
 regions have been assigned by the time it's invoked (which is not always the 
 case).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12250) Adding an endpoint for updating the regionserver config

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174581#comment-14174581
 ] 

Hadoop QA commented on HBASE-12250:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12675380/0001-HBASE-12250-Add-admin-endpoint-for-updating-the-conf.patch
  against trunk revision .
  ATTACHMENT ID: 12675380

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 11 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+   * &lt;code&gt;rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);&lt;/code&gt;
+ * &lt;code&gt;rpc UpdateConfiguration(.UpdateConfigurationRequest) returns (.UpdateConfigurationResponse);&lt;/code&gt;

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11386//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11386//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11386//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11386//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11386//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11386//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11386//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11386//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11386//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11386//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11386//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11386//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11386//console

This message is automatically generated.

 Adding an endpoint for updating the regionserver config
 ---

 Key: HBASE-12250
 URL: https://issues.apache.org/jira/browse/HBASE-12250
 Project: HBase
  Issue Type: Task
  Components: regionserver
Affects Versions: 2.0.0
Reporter: Manukranth Kolloju
Assignee: Manukranth Kolloju
Priority: Minor
 Fix For: 2.0.0

 Attachments: 
 0001-Add-admin-endpoint-for-updating-the-configuration-on.patch, 
 0001-Add-admin-endpoint-for-updating-the-configuration-on.patch, 
 0001-HBASE-12250-Add-admin-endpoint-for-updating-the-conf.patch, 
 0001-HBASE-12250-Add-admin-endpoint-for-updating-the-conf.patch

   Original Estimate: 48h
  Remaining Estimate: 48h

 This is a follow up Jira that adds the end point for updating the 
 configuration on the regionserver. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12274) Race between RegionScannerImpl#nextInternal() and RegionScannerImpl#close() may produce null pointer exception

2014-10-16 Thread Qiang Tian (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174650#comment-14174650
 ] 

Qiang Tian commented on HBASE-12274:


Hi Ted,
I also ran mvn test with 0.98.6. I did not hit the scanner error, but did get 
some other strange failures; the unit tests do not look very clean.

In the RS log, the lease failure looks unexpected as well.

{code}
org.apache.hadoop.hbase.regionserver.LeaseException: lease '8' does not exist
at 
org.apache.hadoop.hbase.regionserver.Leases.removeLease(Leases.java:221)
at 
org.apache.hadoop.hbase.regionserver.Leases.cancelLease(Leases.java:206)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3305)
{code}
It is from a different rpc handler, just before the NPE. 
Did we get a NotServingRegionException? Do we have more log?
Thanks.




 Race between RegionScannerImpl#nextInternal() and RegionScannerImpl#close() 
 may produce null pointer exception
 --

 Key: HBASE-12274
 URL: https://issues.apache.org/jira/browse/HBASE-12274
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6.1
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 2.0.0, 0.98.8, 0.99.2

 Attachments: 12274-region-server.log, 12274-v2.txt, 12274-v2.txt, 
 12274-v3.txt


 I saw the following in region server log:
 {code}
 2014-10-15 03:28:36,976 ERROR 
 [B.DefaultRpcServer.handler=0,queue=0,port=60020] ipc.RpcServer: Unexpected 
 throwable object
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5023)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4932)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4923)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3245)
   at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
   at java.lang.Thread.run(Thread.java:745)
 {code}
 This is where the NPE happened:
 {code}
 // Let's see what we have in the storeHeap.
 KeyValue current = this.storeHeap.peek();
 {code}
 The cause was a race between the nextInternal (called through nextRaw) and 
 close methods.
 nextRaw() is not synchronized.
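
A minimal sketch of the hazard and one way to guard it (names are simplified; this is not the actual RegionScannerImpl code): if close() nulls the heap while another RPC handler is inside nextInternal(), the unguarded peek dereferences null. Serializing both paths on the scanner, or checking a closed flag under that lock, avoids the NPE.

```java
// Simplified model of the scanner race: close() vs. a concurrent peek.
public class ScannerRaceSketch {
  private Object storeHeap = new Object(); // stands in for the KeyValueHeap
  private boolean closed = false;

  /** Guarded read path: cannot interleave with close(). */
  synchronized Object peekGuarded() {
    if (closed) {
      throw new IllegalStateException("scanner closed"); // clean error, not NPE
    }
    return storeHeap;
  }

  /** Guarded close: nulls the heap only when no reader is inside. */
  synchronized void close() {
    closed = true;
    storeHeap = null;
  }

  public static void main(String[] args) {
    ScannerRaceSketch s = new ScannerRaceSketch();
    System.out.println(s.peekGuarded() != null); // true while open
    s.close();
    try {
      s.peekGuarded();
    } catch (IllegalStateException e) {
      System.out.println("closed scanner rejected instead of NPE");
    }
  }
}
```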



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12274) Race between RegionScannerImpl#nextInternal() and RegionScannerImpl#close() may produce null pointer exception

2014-10-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174663#comment-14174663
 ] 

Ted Yu commented on HBASE-12274:


There was no NotServingRegionException in the vicinity of the NPE.

The log was the result of an integration test on a cluster, not from a unit test.

 Race between RegionScannerImpl#nextInternal() and RegionScannerImpl#close() 
 may produce null pointer exception
 --

 Key: HBASE-12274
 URL: https://issues.apache.org/jira/browse/HBASE-12274
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.6.1
Reporter: Ted Yu
Assignee: Ted Yu



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12229) NullPointerException in SnapshotTestingUtils

2014-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174671#comment-14174671
 ] 

Hadoop QA commented on HBASE-12229:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12675403/HBASE-12229_master_v2.patch
  against trunk revision .
  ATTACHMENT ID: 12675403

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11387//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11387//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11387//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11387//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11387//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11387//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11387//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11387//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11387//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11387//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11387//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11387//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11387//console

This message is automatically generated.

 NullPointerException in SnapshotTestingUtils
 

 Key: HBASE-12229
 URL: https://issues.apache.org/jira/browse/HBASE-12229
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.98.7
Reporter: Dima Spivak
Assignee: Dima Spivak
Priority: Minor
 Attachments: HBASE-12229.patch, HBASE-12229_master_v1.patch, 
 HBASE-12229_master_v1.patch, HBASE-12229_master_v2.patch, 
 HBASE-12229_v1.patch, HBASE-12229_v2.patch, HBASE-12229_v2.patch


 I tracked down occasional flakiness in TestRestoreSnapshotFromClient to a 
 potential NPE in SnapshotTestingUtils#waitForTableToBeOnline. In short, some 
 tests in TestRestoreSnapshot... create a table and then invoke 
 SnapshotTestingUtils#waitForTableToBeOnline, but this method assumes that 
 regions have been assigned by the time it's invoked (which is not always the 
 case).
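
The fix direction described above (not assuming regions are assigned when the helper is invoked) can be sketched as a poll-until-assigned guard. This is a hypothetical illustration, not the actual SnapshotTestingUtils code; RegionSource and waitForRegions are invented names standing in for the HBase admin API.

{code}
// Hypothetical sketch of a wait-for-assignment guard; not the real
// SnapshotTestingUtils#waitForTableToBeOnline. Polls until the expected
// number of regions is online or a timeout expires, instead of assuming
// assignment has already happened.
import java.util.Collections;
import java.util.List;

public class WaitForOnline {
    interface RegionSource {                 // stand-in for the HBase admin API
        List<String> getOnlineRegions();
    }

    static boolean waitForRegions(RegionSource src, int expected, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            List<String> regions = src.getOnlineRegions();
            if (regions != null && regions.size() >= expected) {
                return true;                 // assignment complete
            }
            Thread.sleep(50);                // back off before re-checking
        }
        return false;                        // timed out; caller decides
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated source whose regions appear after a short delay,
        // like a table whose regions are assigned asynchronously.
        long start = System.currentTimeMillis();
        RegionSource delayed = () ->
            System.currentTimeMillis() - start > 100
                ? List.of("region-a") : Collections.emptyList();
        System.out.println(waitForRegions(delayed, 1, 2000));
    }
}
{code}

A guard like this turns the occasional flakiness into a deterministic wait-or-timeout, which is what the patch aims for.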





[jira] [Commented] (HBASE-12274) Race between RegionScannerImpl#nextInternal() and RegionScannerImpl#close() may produce null pointer exception

2014-10-16 Thread Qiang Tian (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174674#comment-14174674
 ] 

Qiang Tian commented on HBASE-12274:


Hi Ted,
Perhaps I misunderstood; sorry for that. Please go ahead.
Thanks.






