[jira] [Commented] (HBASE-13945) Prefix_Tree seekBefore() does not work correctly

2015-06-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598932#comment-14598932
 ] 

Hudson commented on HBASE-13945:


FAILURE: Integrated in HBase-TRUNK #6597 (See 
[https://builds.apache.org/job/HBase-TRUNK/6597/])
HBASE-13945 - Prefix_Tree seekBefore() does not work correctly (Ram) 
(ramkrishna: rev d7356667be64b3244587b9fe0d8e3412e0b2b2c6)
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java


 Prefix_Tree seekBefore() does not work correctly
 

 Key: HBASE-13945
 URL: https://issues.apache.org/jira/browse/HBASE-13945
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 0.98.2, 1.0.1, 1.1.0, 1.0.1.1, 1.1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-13945_0.98.patch, HBASE-13945_0.98_1.patch, 
 HBASE-13945_0.98_2.patch, HBASE-13945_0.98_3.patch, 
 HBASE-13945_branch-1.1.patch, HBASE-13945_trunk.patch, 
 HBASE-13945_trunk_1.patch, HBASE-13945_trunk_2.patch, 
 HBASE-13945_trunk_3.patch


 This is related to the TestSeekTo test case, where seekBefore() does not 
 work with Prefix_Tree because of an issue in getFirstKeyInBlock(). In 
 trunk and branch-1, changing the return type of getFirstKeyInBlock() from BB 
 to Cell resolved the problem, but the same cannot be done in 0.98. Hence we 
 need a change in the KvUtil.copyToNewBuffer API to handle this.  Since the 
 buffer's limit is set to its position, in seekBefore when we do 
 {code}
 byte[] firstKeyInCurrentBlock = Bytes.getBytes(firstKey);
 {code}
 in HFileReaderV2.seekBefore() we end up with an empty byte array, which is 
 not the expected key on which we seek to load a new block.
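 For illustration, a minimal self-contained sketch of that failure mode (plain 
 java.nio rather than the actual HFileReaderV2/Prefix_Tree code; copyRemaining 
 is a hypothetical stand-in for what Bytes.getBytes(ByteBuffer) is described to 
 do, namely copy the bytes between position and limit):
 {code}
 import java.nio.ByteBuffer;

 public class EmptyFirstKeyDemo {
   // Stand-in for the described behaviour: copy the bytes from position to limit.
   static byte[] copyRemaining(ByteBuffer buf) {
     ByteBuffer dup = buf.duplicate();
     byte[] out = new byte[dup.remaining()];
     dup.get(out);
     return out;
   }

   public static void main(String[] args) {
     ByteBuffer firstKey = ByteBuffer.wrap("row1/cf:q/0/Put".getBytes());
     // The decoder leaves the buffer with limit == position ...
     firstKey.position(firstKey.limit());
     byte[] firstKeyInCurrentBlock = copyRemaining(firstKey);
     System.out.println(firstKeyInCurrentBlock.length); // prints 0: an empty "first key"
   }
 }
 {code}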



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13948) Expand hadoop2 versions built on the pre-commit

2015-06-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598933#comment-14598933
 ] 

Hudson commented on HBASE-13948:


FAILURE: Integrated in HBase-TRUNK #6597 (See 
[https://builds.apache.org/job/HBase-TRUNK/6597/])
HBASE-13948 Expand hadoop2 versions built on the pre-commit (addendum) 
(ndimiduk: rev 645d7ece127cdcbf10d54ad878ee965900cc27d0)
* dev-support/test-patch.properties


 Expand hadoop2 versions built on the pre-commit
 ---

 Key: HBASE-13948
 URL: https://issues.apache.org/jira/browse/HBASE-13948
 Project: HBase
  Issue Type: Task
  Components: build
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 2.0.0

 Attachments: 13948.patch, HBASE-13948-addendum.patch


 For the HBase 1.1 line I've been validating builds against the following 
 hadoop versions: 2.2.0 2.3.0 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0. Let's 
 do the same in pre-commit.
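 For reference, the dev-support/test-patch.properties change presumably boils 
 down to listing those versions; a rough sketch follows, where the variable name 
 is an assumption rather than the exact key used by the script:
 {code}
 # hypothetical excerpt from dev-support/test-patch.properties -- variable name assumed
 HADOOP2_VERSIONS="2.2.0 2.3.0 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0"
 {code}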



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13214) Remove deprecated and unused methods from HTable class

2015-06-24 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598963#comment-14598963
 ] 

Ashish Singhi commented on HBASE-13214:
---

[~anoop.hbase] do you have any more comments?

 Remove deprecated and unused methods from HTable class
 --

 Key: HBASE-13214
 URL: https://issues.apache.org/jira/browse/HBASE-13214
 Project: HBase
  Issue Type: Sub-task
  Components: API
Affects Versions: 2.0.0
Reporter: Mikhail Antonov
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-13214-v1.patch, HBASE-13214-v2-again-v1.patch, 
 HBASE-13214-v2-again.patch, HBASE-13214-v2.patch, HBASE-13214-v3.patch, 
 HBASE-13214.patch


 Methods like #getRegionLocation(), #isTableEnabled() etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13863) Multi-wal feature breaks reported number and size of HLogs

2015-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599041#comment-14599041
 ] 

Hadoop QA commented on HBASE-13863:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12741453/HBASE-13863-v1.patch
  against master branch at commit b7f241d73b79ec22db2c03cb6b384b76185f0f85.
  ATTACHMENT ID: 12741453

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail with Hadoop version 2.3.0.

Compilation errors resume:
[ERROR] COMPILATION ERROR : 
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[75,30]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2064,17]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[43,30]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[93,11]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2086,15]
 cannot find symbol
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on 
project hbase-server: Compilation failure: Compilation failure:
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[75,30]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: package org.apache.hadoop.hdfs
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2064,17]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: class org.apache.hadoop.hbase.util.FSUtils
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[43,30]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: package org.apache.hadoop.hdfs
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[93,11]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: class 
org.apache.hadoop.hbase.regionserver.MetricsRegionServerWrapperImpl
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2086,15]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: class org.apache.hadoop.hbase.util.FSUtils
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn goals -rf :hbase-server


Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14540//console

This message is automatically generated.

 Multi-wal feature breaks reported number and size of HLogs
 --

 Key: HBASE-13863
 URL: https://issues.apache.org/jira/browse/HBASE-13863
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Abhilash
 Attachments: HBASE-13863-v1.patch, HBASE-13863-v1.patch, 
 HBASE-13863.patch


 When multi-wal is enabled the number and size of retained HLogs is always 
 reported as zero.
 We should fix this so that the numbers are the sum of all retained logs.
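 A minimal sketch of the intended aggregation; the types and field names below 
 are hypothetical stand-ins, not the actual HBase metrics classes:
 {code}
 import java.util.List;

 // Hypothetical types -- illustration only. With multi-wal, the reported log
 // count and size must be summed over every WAL instance, not taken from one.
 class WalInfo {
   final long numRetainedLogs;
   final long retainedLogSizeBytes;
   WalInfo(long n, long s) { numRetainedLogs = n; retainedLogSizeBytes = s; }
 }

 class WalMetricsAggregator {
   static long totalRetainedLogs(List<WalInfo> wals) {
     long count = 0;
     for (WalInfo w : wals) count += w.numRetainedLogs;
     return count;
   }
   static long totalRetainedLogSize(List<WalInfo> wals) {
     long size = 0;
     for (WalInfo w : wals) size += w.retainedLogSizeBytes;
     return size;
   }
 }
 {code}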



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13923) Loaded region coprocessors are not reported in shell status command

2015-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599031#comment-14599031
 ] 

Hadoop QA commented on HBASE-13923:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12741451/HBASE-13923-branch-1.0.patch
  against branch-1.0 branch at commit b7f241d73b79ec22db2c03cb6b384b76185f0f85.
  ATTACHMENT ID: 12741451

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.3.0 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.client.TestAsyncProcess.testErrorsServers(TestAsyncProcess.java:816)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14537//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14537//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14537//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14537//console

This message is automatically generated.

 Loaded region coprocessors are not reported in shell status command
 ---

 Key: HBASE-13923
 URL: https://issues.apache.org/jira/browse/HBASE-13923
 Project: HBase
  Issue Type: Bug
  Components: regionserver, shell
Affects Versions: 1.1.0.1
Reporter: Lars George
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.0.2, 1.1.2, 1.3.0, 1.2.1

 Attachments: HBASE-13923-branch-1.0.patch, HBASE-13923-v1.patch, 
 HBASE-13923-v2.patch, HBASE-13923.patch


 I added a CP to a table using the shell's alter command. Now I tried to check 
 if it was loaded (short of resorting to parsing the logs). I recalled the 
 refguide mentioned the {{status 'detailed'}} command, and tried that to no 
 avail.
 The UI shows the loaded class in the Software Attributes section, so the info 
 is there. But a shell status command (even after waiting 12+ hours) shows 
 nothing. Here is an example of a server that has it loaded according to 
 {{describe}} and the UI, but the shell lists this:
 {noformat}
 slave-1.internal.larsgeorge.com:16020 1434486031598
 requestsPerSecond=0.0, numberOfOnlineRegions=5, usedHeapMB=278, 
 maxHeapMB=941, numberOfStores=5, numberOfStorefiles=3, 
 storefileUncompressedSizeMB=2454, storefileSizeMB=2454, 
 compressionRatio=1., memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=32070, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=2086, totalStaticBloomSizeKB=480, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 coprocessors=[]
 testqauat:usertable,,1433747062257.4db0d7d73cbaac45cb8568d5b185e1f2.
 numberOfStores=1, numberOfStorefiles=0, 
 storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, 
 storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, 
 currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, 
 dataLocality=0.0
 
 testqauat:usertable,user0,1433747062257.f7c7fe3c7d26910010f40101b20f8d06.
 numberOfStores=1, numberOfStorefiles=0, 
 storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, 
 storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, 
 

[jira] [Commented] (HBASE-13670) [HBase MOB] ExpiredMobFileCleaner tool deletes mob files later for one more day after they are expired

2015-06-24 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598910#comment-14598910
 ] 

Ashish Singhi commented on HBASE-13670:
---

Yes Anoop, Gururaj will be posting a patch by today.
Thanks

 [HBase MOB] ExpiredMobFileCleaner tool deletes mob files later for one more 
 day after they are expired
 --

 Key: HBASE-13670
 URL: https://issues.apache.org/jira/browse/HBASE-13670
 Project: HBase
  Issue Type: Improvement
  Components: documentation, mob
Affects Versions: hbase-11339
Reporter: Y. SREENIVASULU REDDY
Assignee: Gururaj Shetty
 Fix For: hbase-11339


 Currently the ExpiredMobFileCleaner cleans expired mob files according to 
 the date in the mob file name, and the minimum unit of that date is a day. So 
 the ExpiredMobFileCleaner may only remove expired mob files up to one more 
 day after they have actually expired. We need to document this.
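 To make the day-granularity behaviour concrete, here is a small sketch 
 (hypothetical helper, assuming a yyyyMMdd date embedded in the mob file name; 
 not the actual ExpiredMobFileCleaner code):
 {code}
 import java.time.LocalDate;
 import java.time.format.DateTimeFormatter;

 class MobExpiryCheck {
   private static final DateTimeFormatter DAY = DateTimeFormatter.ofPattern("yyyyMMdd");

   // fileDate: the day (no time-of-day) recorded in the mob file name.
   static boolean isExpired(String fileDate, LocalDate today, int ttlDays) {
     LocalDate written = LocalDate.parse(fileDate, DAY);
     // Only whole days are compared, so a file can survive roughly one extra
     // day past the moment its cells actually expired.
     return written.plusDays(ttlDays).isBefore(today);
   }

   public static void main(String[] args) {
     // TTL = 1 day: cells written on 2015-06-23 may already be expired on the 24th,
     // but the file only becomes removable from the 25th onwards.
     System.out.println(isExpired("20150623", LocalDate.of(2015, 6, 24), 1)); // false
     System.out.println(isExpired("20150623", LocalDate.of(2015, 6, 25), 1)); // true
   }
 }
 {code}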



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13939) Make HFileReaderImpl.getFirstKeyInBlock() to return a Cell

2015-06-24 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13939:
---
Status: Open  (was: Patch Available)

 Make HFileReaderImpl.getFirstKeyInBlock() to return a Cell
 --

 Key: HBASE-13939
 URL: https://issues.apache.org/jira/browse/HBASE-13939
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 2.0.0, 1.1.2

 Attachments: HBASE-13939.patch, HBASE-13939_1.patch, 
 HBASE-13939_2.patch, HBASE-13939_branch-1.1.patch


 The getFirstKeyInBlock() in HFileReaderImpl is returning a BB. It is getting 
 used in seekBefore cases.  Because we return a BB we create a KeyOnlyKV once 
 for comparison
 {code}
 if (reader.getComparator()
     .compareKeyIgnoresMvcc(
         new KeyValue.KeyOnlyKeyValue(firstKey.array(), firstKey.arrayOffset(),
             firstKey.limit()), key) >= 0) {
   long previousBlockOffset = seekToBlock.getPrevBlockOffset();
   // The key we are interested in
   if (previousBlockOffset == -1) {
     // we have a 'problem', the key we want is the first of the file.
     return false;
   }
 {code}
 And if the compare fails we again create another KeyOnlyKV:
 {code}
   Cell firstKeyInCurrentBlock = new KeyValue.KeyOnlyKeyValue(Bytes.getBytes(firstKey));
   loadBlockAndSeekToKey(seekToBlock, firstKeyInCurrentBlock, true, key, true);
 {code}
 So one object would be enough, and that object can be returned by 
 getFirstKeyInBlock(). This will also be useful when we move to ByteBuffer 
 backed server cells, since the change is then needed in only one place. 
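 A rough sketch of how the seekBefore() path could then look (simplified, 
 hypothetical shapes of the calls; not the actual patch), reusing a single Cell 
 for both the comparison and loadBlockAndSeekToKey():
 {code}
 // Sketch only: getFirstKeyInBlock() now hands back a Cell, so no KeyOnlyKeyValue
 // has to be materialised from a ByteBuffer, and the same object is reused below.
 Cell firstKeyInCurrentBlock = getFirstKeyInBlock(seekToBlock);
 if (reader.getComparator().compareKeyIgnoresMvcc(firstKeyInCurrentBlock, key) >= 0) {
   long previousBlockOffset = seekToBlock.getPrevBlockOffset();
   if (previousBlockOffset == -1) {
     // the key we want is the first key of the file
     return false;
   }
   // ... otherwise read the previous block ...
 }
 // reuse the Cell we already have instead of allocating a second KeyOnlyKeyValue
 loadBlockAndSeekToKey(seekToBlock, firstKeyInCurrentBlock, true, key, true);
 {code}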



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13920) Exclude Java files generated from protobuf from javadoc

2015-06-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599099#comment-14599099
 ] 

Hudson commented on HBASE-13920:


FAILURE: Integrated in HBase-1.2 #30 (See 
[https://builds.apache.org/job/HBase-1.2/30/])
HBASE-13920 Exclude org.apache.hadoop.hbase.protobuf.generated from javadoc 
generation (busbey: rev 0e3fa8a40503e1dce95ed9eab0a2aaa6676e1789)
* pom.xml


 Exclude Java files generated from protobuf from javadoc
 ---

 Key: HBASE-13920
 URL: https://issues.apache.org/jira/browse/HBASE-13920
 Project: HBase
  Issue Type: Sub-task
Reporter: Gabor Liptak
Assignee: Gabor Liptak
Priority: Minor
 Fix For: 2.0.0, 1.2.0, 1.3.0

 Attachments: HBASE-13920.1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13835) KeyValueHeap.current might be in heap when exception happens in pollRealKV

2015-06-24 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598985#comment-14598985
 ] 

Anoop Sam John commented on HBASE-13835:


{quote}
-1 overall. Here are the results of testing the latest attachment 
http://issues.apache.org/jira/secure/attachment/12741452/HBASE-13835-002.patch
against master branch at commit b7f241d73b79ec22db2c03cb6b384b76185f0f85.
ATTACHMENT ID: 12741452
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 3 new or modified tests.
-1 javac. The patch appears to cause mvn compile goal to fail with Hadoop 
version 2.3.0.
Compilation errors resume:
[ERROR] COMPILATION ERROR : 
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[75,30]
 cannot find symbol
{quote}
Is compiling master against hadoop 2.3.0 newly added?  We support 2.4+ 
versions, right?  Ping [~ndimiduk] 

 KeyValueHeap.current might be in heap when exception happens in pollRealKV
 --

 Key: HBASE-13835
 URL: https://issues.apache.org/jira/browse/HBASE-13835
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Reporter: zhouyingchao
Assignee: zhouyingchao
 Attachments: HBASE-13835-001.patch, HBASE-13835-002.patch, 
 HBASE-13835-002.patch


 In a 0.94 HBase cluster, we found an NPE with the following stack:
 {code}
 Exception in thread regionserver21600.leaseChecker 
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.KeyValue$KVComparator.compare(KeyValue.java:1530)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:225)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:201)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:191)
 at 
 java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:641)
 at java.util.PriorityQueue.siftDown(PriorityQueue.java:612)
 at java.util.PriorityQueue.poll(PriorityQueue.java:523)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:241)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.close(StoreScanner.java:355)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:237)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.close(HRegion.java:4302)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer$ScannerListener.leaseExpired(HRegionServer.java:3033)
 at org.apache.hadoop.hbase.regionserver.Leases.run(Leases.java:119)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 Before this NPE, an exception happened in pollRealKV, which 
 we think is the culprit of the NPE.
 {code}
 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer:
 java.io.IOException: Could not reseek StoreFileScanner[HFileScanner for 
 reader reader=
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:180)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:371)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:366)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:116)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:455)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:154)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:4124)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:4196)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4067)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4057)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.internalNext(HRegionServer.java:2898)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2833)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2815)
 at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:337)
 at 
 

[jira] [Commented] (HBASE-13923) Loaded region coprocessors are not reported in shell status command

2015-06-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599100#comment-14599100
 ] 

Hudson commented on HBASE-13923:


FAILURE: Integrated in HBase-1.2 #30 (See 
[https://builds.apache.org/job/HBase-1.2/30/])
HBASE-13923 Loaded region coprocessors are not reported in shell status command 
(Ashish Singhi) (tedyu: rev bc7dfe9edd2f40279903a038137eeb11b46bd5f3)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestClassLoading.java


 Loaded region coprocessors are not reported in shell status command
 ---

 Key: HBASE-13923
 URL: https://issues.apache.org/jira/browse/HBASE-13923
 Project: HBase
  Issue Type: Bug
  Components: regionserver, shell
Affects Versions: 1.1.0.1
Reporter: Lars George
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.0.2, 1.1.2, 1.3.0, 1.2.1

 Attachments: HBASE-13923-branch-1.0.patch, HBASE-13923-v1.patch, 
 HBASE-13923-v2.patch, HBASE-13923.patch


 I added a CP to a table using the shell's alter command. Now I tried to check 
 if it was loaded (short of resorting to parsing the logs). I recalled the 
 refguide mentioned the {{status 'detailed'}} command, and tried that to no 
 avail.
 The UI shows the loaded class in the Software Attributes section, so the info 
 is there. But a shell status command (even after waiting 12+ hours) shows 
 nothing. Here is an example of a server that has it loaded according to 
 {{describe}} and the UI, but the shell lists this:
 {noformat}
 slave-1.internal.larsgeorge.com:16020 1434486031598
 requestsPerSecond=0.0, numberOfOnlineRegions=5, usedHeapMB=278, 
 maxHeapMB=941, numberOfStores=5, numberOfStorefiles=3, 
 storefileUncompressedSizeMB=2454, storefileSizeMB=2454, 
 compressionRatio=1., memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=32070, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=2086, totalStaticBloomSizeKB=480, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 coprocessors=[]
 testqauat:usertable,,1433747062257.4db0d7d73cbaac45cb8568d5b185e1f2.
 numberOfStores=1, numberOfStorefiles=0, 
 storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, 
 storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, 
 currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, 
 dataLocality=0.0
 
 testqauat:usertable,user0,1433747062257.f7c7fe3c7d26910010f40101b20f8d06.
 numberOfStores=1, numberOfStorefiles=0, 
 storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, 
 storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, 
 currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, 
 dataLocality=0.0
 
 testqauat:usertable,user1,1433747062257.dcd5395044732242dfed39b09aa05c36.
 numberOfStores=1, numberOfStorefiles=1, 
 storefileUncompressedSizeMB=820, lastMajorCompactionTimestamp=1434173025593, 
 storefileSizeMB=820, compressionRatio=1., memstoreSizeMB=0, 
 storefileIndexSizeMB=0, readRequestsCount=32070, writeRequestsCount=0, 
 rootIndexSizeKB=0, totalStaticIndexSizeKB=699, totalStaticBloomSizeKB=160, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 completeSequenceId=-1, dataLocality=1.0
 
 testqauat:usertable,user7,1433747062257.9277fd1d34909b0cb150707cbd7a3907.
 numberOfStores=1, numberOfStorefiles=1, 
 storefileUncompressedSizeMB=816, lastMajorCompactionTimestamp=1434283025585, 
 storefileSizeMB=816, compressionRatio=1., memstoreSizeMB=0, 
 storefileIndexSizeMB=0, readRequestsCount=0, writeRequestsCount=0, 
 rootIndexSizeKB=0, totalStaticIndexSizeKB=690, totalStaticBloomSizeKB=160, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 completeSequenceId=-1, dataLocality=1.0
 
 testqauat:usertable,user8,1433747062257.d930b52db8c7f07f3c3ab3e12e61a085.
 numberOfStores=1, numberOfStorefiles=1, 
 storefileUncompressedSizeMB=818, lastMajorCompactionTimestamp=1433771950960, 
 storefileSizeMB=818, compressionRatio=1., memstoreSizeMB=0, 
 storefileIndexSizeMB=0, readRequestsCount=0, writeRequestsCount=0, 
 rootIndexSizeKB=0, totalStaticIndexSizeKB=697, totalStaticBloomSizeKB=160, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 completeSequenceId=-1, dataLocality=1.0
 {noformat}
 The refguide 

[jira] [Commented] (HBASE-13923) Loaded region coprocessors are not reported in shell status command

2015-06-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599122#comment-14599122
 ] 

Hudson commented on HBASE-13923:


FAILURE: Integrated in HBase-1.0 #973 (See 
[https://builds.apache.org/job/HBase-1.0/973/])
HBASE-13923 Loaded region coprocessors are not reported in shell status command 
(Ashish Singhi) (tedyu: rev f94e8b9eb6896d949e4398caca73792db8cf28ee)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestClassLoading.java


 Loaded region coprocessors are not reported in shell status command
 ---

 Key: HBASE-13923
 URL: https://issues.apache.org/jira/browse/HBASE-13923
 Project: HBase
  Issue Type: Bug
  Components: regionserver, shell
Affects Versions: 1.1.0.1
Reporter: Lars George
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.0.2, 1.1.2, 1.3.0, 1.2.1

 Attachments: HBASE-13923-branch-1.0.patch, HBASE-13923-v1.patch, 
 HBASE-13923-v2.patch, HBASE-13923.patch


 I added a CP to a table using the shell's alter command. Now I tried to check 
 if it was loaded (short of resorting to parsing the logs). I recalled the 
 refguide mentioned the {{status 'detailed'}} command, and tried that to no 
 avail.
 The UI shows the loaded class in the Software Attributes section, so the info 
 is there. But a shell status command (even after waiting 12+ hours) shows 
 nothing. Here is an example of a server that has it loaded according to 
 {{describe}} and the UI, but the shell lists this:
 {noformat}
 slave-1.internal.larsgeorge.com:16020 1434486031598
 requestsPerSecond=0.0, numberOfOnlineRegions=5, usedHeapMB=278, 
 maxHeapMB=941, numberOfStores=5, numberOfStorefiles=3, 
 storefileUncompressedSizeMB=2454, storefileSizeMB=2454, 
 compressionRatio=1., memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=32070, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=2086, totalStaticBloomSizeKB=480, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 coprocessors=[]
 testqauat:usertable,,1433747062257.4db0d7d73cbaac45cb8568d5b185e1f2.
 numberOfStores=1, numberOfStorefiles=0, 
 storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, 
 storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, 
 currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, 
 dataLocality=0.0
 
 testqauat:usertable,user0,1433747062257.f7c7fe3c7d26910010f40101b20f8d06.
 numberOfStores=1, numberOfStorefiles=0, 
 storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, 
 storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, 
 currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, 
 dataLocality=0.0
 
 testqauat:usertable,user1,1433747062257.dcd5395044732242dfed39b09aa05c36.
 numberOfStores=1, numberOfStorefiles=1, 
 storefileUncompressedSizeMB=820, lastMajorCompactionTimestamp=1434173025593, 
 storefileSizeMB=820, compressionRatio=1., memstoreSizeMB=0, 
 storefileIndexSizeMB=0, readRequestsCount=32070, writeRequestsCount=0, 
 rootIndexSizeKB=0, totalStaticIndexSizeKB=699, totalStaticBloomSizeKB=160, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 completeSequenceId=-1, dataLocality=1.0
 
 testqauat:usertable,user7,1433747062257.9277fd1d34909b0cb150707cbd7a3907.
 numberOfStores=1, numberOfStorefiles=1, 
 storefileUncompressedSizeMB=816, lastMajorCompactionTimestamp=1434283025585, 
 storefileSizeMB=816, compressionRatio=1., memstoreSizeMB=0, 
 storefileIndexSizeMB=0, readRequestsCount=0, writeRequestsCount=0, 
 rootIndexSizeKB=0, totalStaticIndexSizeKB=690, totalStaticBloomSizeKB=160, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 completeSequenceId=-1, dataLocality=1.0
 
 testqauat:usertable,user8,1433747062257.d930b52db8c7f07f3c3ab3e12e61a085.
 numberOfStores=1, numberOfStorefiles=1, 
 storefileUncompressedSizeMB=818, lastMajorCompactionTimestamp=1433771950960, 
 storefileSizeMB=818, compressionRatio=1., memstoreSizeMB=0, 
 storefileIndexSizeMB=0, readRequestsCount=0, writeRequestsCount=0, 
 rootIndexSizeKB=0, totalStaticIndexSizeKB=697, totalStaticBloomSizeKB=160, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 completeSequenceId=-1, dataLocality=1.0
 {noformat}
 The refguide 

[jira] [Commented] (HBASE-13959) Region splitting takes too long because it uses a single thread in most common cases

2015-06-24 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598919#comment-14598919
 ] 

Anoop Sam John commented on HBASE-13959:


So right now each thread handles one CF.  We need to change the threading model 
and its work assignment: it should be based on the store file count (across 
stores), with each thread handling a group of store files.  Will you provide a 
patch?

 Region splitting takes too long because it uses a single thread in most 
 common cases
 

 Key: HBASE-13959
 URL: https://issues.apache.org/jira/browse/HBASE-13959
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.12
Reporter: Hari Krishna Dara
Assignee: Hari Krishna Dara

 When storefiles need to be split as part of a region split, the current logic 
 uses a threadpool whose size is set to the number of stores. 
 Since the most common table setup involves only a single column family, this 
 translates to having a single store, and so the threadpool runs with a 
 single thread. However, in a write-heavy workload there could be several 
 tens of storefiles in a store at the time of splitting, and with a threadpool 
 size of one these files end up getting split sequentially.
 With a bit of tracing, I noticed that it takes on average 350ms to 
 create a single reference file, and splitting each storefile involves 
 creating two of these, so with a storefile count of 20 it takes about 14s 
 just to get through this phase alone (2 reference files for each storefile), 
 pushing the total time the region is offline to 18s or more. For environments 
 that are set up to fail fast, this makes the client exhaust all retries and 
 fail with NotServingRegionException.
 The fix should increase the concurrency of this operation.
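 As a rough sketch of the direction the fix could take (hypothetical names, not 
 the actual SplitTransaction code): size the pool by the total store file count 
 across stores, capped by a configurable limit, so the per-storefile 
 reference-file work can proceed in parallel.
 {code}
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;

 class SplitStoreFilePool {
   // totalStoreFiles: sum of store file counts across all stores of the parent region.
   // configuredMax:   a configurable upper bound on the number of threads.
   static ExecutorService create(int totalStoreFiles, int configuredMax) {
     int threads = Math.max(1, Math.min(totalStoreFiles, configuredMax));
     return Executors.newFixedThreadPool(threads);
   }
 }
 {code}
 With 20 store files and a cap of, say, 10, the 40 reference files would be 
 written with 10-way parallelism instead of sequentially, cutting this phase of 
 the split by roughly an order of magnitude.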



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13920) Exclude Java files generated from protobuf from javadoc

2015-06-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598931#comment-14598931
 ] 

Hudson commented on HBASE-13920:


FAILURE: Integrated in HBase-TRUNK #6597 (See 
[https://builds.apache.org/job/HBase-TRUNK/6597/])
HBASE-13920 Exclude org.apache.hadoop.hbase.protobuf.generated from javadoc 
generation (busbey: rev 76d6700d2316fe761473f559db64708e7025850d)
* pom.xml


 Exclude Java files generated from protobuf from javadoc
 ---

 Key: HBASE-13920
 URL: https://issues.apache.org/jira/browse/HBASE-13920
 Project: HBase
  Issue Type: Sub-task
Reporter: Gabor Liptak
Assignee: Gabor Liptak
Priority: Minor
 Fix For: 2.0.0, 1.2.0, 1.3.0

 Attachments: HBASE-13920.1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13923) Loaded region coprocessors are not reported in shell status command

2015-06-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598965#comment-14598965
 ] 

Hudson commented on HBASE-13923:


SUCCESS: Integrated in HBase-1.1 #556 (See 
[https://builds.apache.org/job/HBase-1.1/556/])
HBASE-13923 Loaded region coprocessors are not reported in shell status command 
(Ashish Singhi) (tedyu: rev 23b78587e98a07b9e1c535d9f45cf99882e6a8cb)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestClassLoading.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java


 Loaded region coprocessors are not reported in shell status command
 ---

 Key: HBASE-13923
 URL: https://issues.apache.org/jira/browse/HBASE-13923
 Project: HBase
  Issue Type: Bug
  Components: regionserver, shell
Affects Versions: 1.1.0.1
Reporter: Lars George
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.0.2, 1.1.2, 1.3.0, 1.2.1

 Attachments: HBASE-13923-branch-1.0.patch, HBASE-13923-v1.patch, 
 HBASE-13923-v2.patch, HBASE-13923.patch


 I added a CP to a table using the shell's alter command. Now I tried to check 
 if it was loaded (short of resorting to parsing the logs). I recalled the 
 refguide mentioned the {{status 'detailed'}} command, and tried that to no 
 avail.
 The UI shows the loaded class in the Software Attributes section, so the info 
 is there. But a shell status command (even after waiting 12+ hours) shows 
 nothing. Here is an example of a server that has it loaded according to 
 {{describe}} and the UI, but the shell lists this:
 {noformat}
 slave-1.internal.larsgeorge.com:16020 1434486031598
 requestsPerSecond=0.0, numberOfOnlineRegions=5, usedHeapMB=278, 
 maxHeapMB=941, numberOfStores=5, numberOfStorefiles=3, 
 storefileUncompressedSizeMB=2454, storefileSizeMB=2454, 
 compressionRatio=1., memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=32070, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=2086, totalStaticBloomSizeKB=480, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 coprocessors=[]
 testqauat:usertable,,1433747062257.4db0d7d73cbaac45cb8568d5b185e1f2.
 numberOfStores=1, numberOfStorefiles=0, 
 storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, 
 storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, 
 currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, 
 dataLocality=0.0
 
 testqauat:usertable,user0,1433747062257.f7c7fe3c7d26910010f40101b20f8d06.
 numberOfStores=1, numberOfStorefiles=0, 
 storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, 
 storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, 
 currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, 
 dataLocality=0.0
 
 testqauat:usertable,user1,1433747062257.dcd5395044732242dfed39b09aa05c36.
 numberOfStores=1, numberOfStorefiles=1, 
 storefileUncompressedSizeMB=820, lastMajorCompactionTimestamp=1434173025593, 
 storefileSizeMB=820, compressionRatio=1., memstoreSizeMB=0, 
 storefileIndexSizeMB=0, readRequestsCount=32070, writeRequestsCount=0, 
 rootIndexSizeKB=0, totalStaticIndexSizeKB=699, totalStaticBloomSizeKB=160, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 completeSequenceId=-1, dataLocality=1.0
 
 testqauat:usertable,user7,1433747062257.9277fd1d34909b0cb150707cbd7a3907.
 numberOfStores=1, numberOfStorefiles=1, 
 storefileUncompressedSizeMB=816, lastMajorCompactionTimestamp=1434283025585, 
 storefileSizeMB=816, compressionRatio=1., memstoreSizeMB=0, 
 storefileIndexSizeMB=0, readRequestsCount=0, writeRequestsCount=0, 
 rootIndexSizeKB=0, totalStaticIndexSizeKB=690, totalStaticBloomSizeKB=160, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 completeSequenceId=-1, dataLocality=1.0
 
 testqauat:usertable,user8,1433747062257.d930b52db8c7f07f3c3ab3e12e61a085.
 numberOfStores=1, numberOfStorefiles=1, 
 storefileUncompressedSizeMB=818, lastMajorCompactionTimestamp=1433771950960, 
 storefileSizeMB=818, compressionRatio=1., memstoreSizeMB=0, 
 storefileIndexSizeMB=0, readRequestsCount=0, writeRequestsCount=0, 
 rootIndexSizeKB=0, totalStaticIndexSizeKB=697, totalStaticBloomSizeKB=160, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 completeSequenceId=-1, dataLocality=1.0
 {noformat}
 The refguide 

[jira] [Commented] (HBASE-13835) KeyValueHeap.current might be in heap when exception happens in pollRealKV

2015-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598966#comment-14598966
 ] 

Hadoop QA commented on HBASE-13835:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12741452/HBASE-13835-002.patch
  against master branch at commit b7f241d73b79ec22db2c03cb6b384b76185f0f85.
  ATTACHMENT ID: 12741452

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail with Hadoop version 2.3.0.

Compilation errors resume:
[ERROR] COMPILATION ERROR : 
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[75,30]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2064,17]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[42,30]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[92,11]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2086,15]
 cannot find symbol
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on 
project hbase-server: Compilation failure: Compilation failure:
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[75,30]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: package org.apache.hadoop.hdfs
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2064,17]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: class org.apache.hadoop.hbase.util.FSUtils
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[42,30]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: package org.apache.hadoop.hdfs
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[92,11]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: class 
org.apache.hadoop.hbase.regionserver.MetricsRegionServerWrapperImpl
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2086,15]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: class org.apache.hadoop.hbase.util.FSUtils
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn goals -rf :hbase-server


Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14538//console

This message is automatically generated.

 KeyValueHeap.current might be in heap when exception happens in pollRealKV
 --

 Key: HBASE-13835
 URL: https://issues.apache.org/jira/browse/HBASE-13835
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Reporter: zhouyingchao
Assignee: zhouyingchao
 Attachments: HBASE-13835-001.patch, HBASE-13835-002.patch, 
 HBASE-13835-002.patch


 In a 0.94 HBase cluster, we found an NPE with the following stack:
 {code}
 Exception in thread regionserver21600.leaseChecker 
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.KeyValue$KVComparator.compare(KeyValue.java:1530)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:225)
 at 
 

[jira] [Commented] (HBASE-13945) Prefix_Tree seekBefore() does not work correctly

2015-06-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598964#comment-14598964
 ] 

Hudson commented on HBASE-13945:


SUCCESS: Integrated in HBase-1.1 #556 (See 
[https://builds.apache.org/job/HBase-1.1/556/])
HBASE-13945 - Prefix_Tree seekBefore() does not work correctly (Ram) 
(ramkrishna: rev 1b5f72e1f52a82acad424a702e76091a3acd90c6)
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java


 Prefix_Tree seekBefore() does not work correctly
 

 Key: HBASE-13945
 URL: https://issues.apache.org/jira/browse/HBASE-13945
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 0.98.2, 1.0.1, 1.1.0, 1.0.1.1, 1.1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-13945_0.98.patch, HBASE-13945_0.98_1.patch, 
 HBASE-13945_0.98_2.patch, HBASE-13945_0.98_3.patch, 
 HBASE-13945_branch-1.1.patch, HBASE-13945_trunk.patch, 
 HBASE-13945_trunk_1.patch, HBASE-13945_trunk_2.patch, 
 HBASE-13945_trunk_3.patch


 This is related to the TestSeekTo test case, where seekBefore() does not 
 work with Prefix_Tree because of an issue in getFirstKeyInBlock(). In 
 trunk and branch-1, changing the return type of getFirstKeyInBlock() from BB 
 to Cell resolved the problem, but the same cannot be done in 0.98. Hence we 
 need a change in the KvUtil.copyToNewBuffer API to handle this.  Since the 
 buffer's limit is set to its position, in seekBefore when we do 
 {code}
 byte[] firstKeyInCurrentBlock = Bytes.getBytes(firstKey);
 {code}
 in HFileReaderV2.seekBefore() we end up with an empty byte array, which is 
 not the expected key on which we seek to load a new block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13945) Prefix_Tree seekBefore() does not work correctly

2015-06-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599003#comment-14599003
 ] 

Hudson commented on HBASE-13945:


FAILURE: Integrated in HBase-0.98 #1037 (See 
[https://builds.apache.org/job/HBase-0.98/1037/])
HBASE-13945 - Prefix_Tree seekBefore() does not work correctly (Ram) 
(ramkrishna: rev 5467da9fb96b6ba49af60312426aee8fb8efc93b)
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java


 Prefix_Tree seekBefore() does not work correctly
 

 Key: HBASE-13945
 URL: https://issues.apache.org/jira/browse/HBASE-13945
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 0.98.2, 1.0.1, 1.1.0, 1.0.1.1, 1.1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-13945_0.98.patch, HBASE-13945_0.98_1.patch, 
 HBASE-13945_0.98_2.patch, HBASE-13945_0.98_3.patch, 
 HBASE-13945_branch-1.1.patch, HBASE-13945_trunk.patch, 
 HBASE-13945_trunk_1.patch, HBASE-13945_trunk_2.patch, 
 HBASE-13945_trunk_3.patch


 This is related to the TestSeekTo test case, where seekBefore() does not 
 work with Prefix_Tree because of an issue in getFirstKeyInBlock(). In 
 trunk and branch-1, changing the return type of getFirstKeyInBlock() from BB 
 to Cell resolved the problem, but the same cannot be done in 0.98. Hence we 
 need a change in the KvUtil.copyToNewBuffer API to handle this.  Since the 
 buffer's limit is set to its position, in seekBefore when we do 
 {code}
 byte[] firstKeyInCurrentBlock = Bytes.getBytes(firstKey);
 {code}
 in HFileReaderV2.seekBefore() we end up with an empty byte array, which is 
 not the expected key on which we seek to load a new block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13959) Region splitting takes too long because it uses a single thread in most common cases

2015-06-24 Thread Hari Krishna Dara (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599092#comment-14599092
 ] 

Hari Krishna Dara commented on HBASE-13959:
---

The storefiles can be handled in any order, as the splits are independent. A 
quick fix is to increase the pool size with a configurable upper limit. I have 
made a patch and am in the process of testing and doing some analysis. I will 
submit it soon.

 Region splitting takes too long because it uses a single thread in most 
 common cases
 

 Key: HBASE-13959
 URL: https://issues.apache.org/jira/browse/HBASE-13959
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.12
Reporter: Hari Krishna Dara
Assignee: Hari Krishna Dara

 When storefiles need to be split as part of a region split, the current logic 
 uses a threadpool whose size is set to the number of stores. 
 Since the most common table setup involves only a single column family, this 
 translates to having a single store, and so the threadpool runs with a 
 single thread. However, in a write-heavy workload there could be several 
 tens of storefiles in a store at the time of splitting, and with a threadpool 
 size of one these files end up getting split sequentially.
 With a bit of tracing, I noticed that it takes on average 350ms to 
 create a single reference file, and splitting each storefile involves 
 creating two of these, so with a storefile count of 20 it takes about 14s 
 just to get through this phase alone (2 reference files for each storefile), 
 pushing the total time the region is offline to 18s or more. For environments 
 that are set up to fail fast, this makes the client exhaust all retries and 
 fail with NotServingRegionException.
 The fix should increase the concurrency of this operation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13923) Loaded region coprocessors are not reported in shell status command

2015-06-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-13923:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the patch, Ashish.

 Loaded region coprocessors are not reported in shell status command
 ---

 Key: HBASE-13923
 URL: https://issues.apache.org/jira/browse/HBASE-13923
 Project: HBase
  Issue Type: Bug
  Components: regionserver, shell
Affects Versions: 1.1.0.1
Reporter: Lars George
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.0.2, 1.1.2, 1.3.0, 1.2.1

 Attachments: HBASE-13923-branch-1.0.patch, HBASE-13923-v1.patch, 
 HBASE-13923-v2.patch, HBASE-13923.patch


 I added a CP to a table using the shell's alter command. Now I tried to check 
 if it was loaded (short of resorting to parsing the logs). I recalled the 
 refguide mentioned the {{status 'detailed'}} command, and tried that to no 
 avail.
 The UI shows the loaded class in the Software Attributes section, so the info 
 is there. But a shell status command (even after waiting 12+ hours) shows 
 nothing. Here is an example of a server that has it loaded according to 
 {{describe}} and the UI, but the shell lists this:
 {noformat}
 slave-1.internal.larsgeorge.com:16020 1434486031598
 requestsPerSecond=0.0, numberOfOnlineRegions=5, usedHeapMB=278, 
 maxHeapMB=941, numberOfStores=5, numberOfStorefiles=3, 
 storefileUncompressedSizeMB=2454, storefileSizeMB=2454, 
 compressionRatio=1., memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=32070, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=2086, totalStaticBloomSizeKB=480, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 coprocessors=[]
 testqauat:usertable,,1433747062257.4db0d7d73cbaac45cb8568d5b185e1f2.
 numberOfStores=1, numberOfStorefiles=0, 
 storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, 
 storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, 
 currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, 
 dataLocality=0.0
 
 testqauat:usertable,user0,1433747062257.f7c7fe3c7d26910010f40101b20f8d06.
 numberOfStores=1, numberOfStorefiles=0, 
 storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, 
 storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, 
 currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, 
 dataLocality=0.0
 
 testqauat:usertable,user1,1433747062257.dcd5395044732242dfed39b09aa05c36.
 numberOfStores=1, numberOfStorefiles=1, 
 storefileUncompressedSizeMB=820, lastMajorCompactionTimestamp=1434173025593, 
 storefileSizeMB=820, compressionRatio=1., memstoreSizeMB=0, 
 storefileIndexSizeMB=0, readRequestsCount=32070, writeRequestsCount=0, 
 rootIndexSizeKB=0, totalStaticIndexSizeKB=699, totalStaticBloomSizeKB=160, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 completeSequenceId=-1, dataLocality=1.0
 
 testqauat:usertable,user7,1433747062257.9277fd1d34909b0cb150707cbd7a3907.
 numberOfStores=1, numberOfStorefiles=1, 
 storefileUncompressedSizeMB=816, lastMajorCompactionTimestamp=1434283025585, 
 storefileSizeMB=816, compressionRatio=1., memstoreSizeMB=0, 
 storefileIndexSizeMB=0, readRequestsCount=0, writeRequestsCount=0, 
 rootIndexSizeKB=0, totalStaticIndexSizeKB=690, totalStaticBloomSizeKB=160, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 completeSequenceId=-1, dataLocality=1.0
 
 testqauat:usertable,user8,1433747062257.d930b52db8c7f07f3c3ab3e12e61a085.
 numberOfStores=1, numberOfStorefiles=1, 
 storefileUncompressedSizeMB=818, lastMajorCompactionTimestamp=1433771950960, 
 storefileSizeMB=818, compressionRatio=1., memstoreSizeMB=0, 
 storefileIndexSizeMB=0, readRequestsCount=0, writeRequestsCount=0, 
 rootIndexSizeKB=0, totalStaticIndexSizeKB=697, totalStaticBloomSizeKB=160, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 completeSequenceId=-1, dataLocality=1.0
 {noformat}
 The refguide shows an example of an older HBase version that has the CP class 
 listed properly. Something is broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13939) Make HFileReaderImpl.getFirstKeyInBlock() to return a Cell

2015-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599015#comment-14599015
 ] 

Hadoop QA commented on HBASE-13939:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12741457/HBASE-13939_3.patch
  against master branch at commit b7f241d73b79ec22db2c03cb6b384b76185f0f85.
  ATTACHMENT ID: 12741457

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail with Hadoop version 2.3.0.

Compilation errors resume:
[ERROR] COMPILATION ERROR : 
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[75,30]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2064,17]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[42,30]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[92,11]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2086,15]
 cannot find symbol
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on 
project hbase-server: Compilation failure: Compilation failure:
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[75,30]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: package org.apache.hadoop.hdfs
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2064,17]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: class org.apache.hadoop.hbase.util.FSUtils
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[42,30]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: package org.apache.hadoop.hdfs
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[92,11]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: class 
org.apache.hadoop.hbase.regionserver.MetricsRegionServerWrapperImpl
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2086,15]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: class org.apache.hadoop.hbase.util.FSUtils
[ERROR] - [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn goals -rf :hbase-server


Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14539//console

This message is automatically generated.

 Make HFileReaderImpl.getFirstKeyInBlock() to return a Cell
 --

 Key: HBASE-13939
 URL: https://issues.apache.org/jira/browse/HBASE-13939
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 2.0.0, 1.1.2

 Attachments: HBASE-13939.patch, HBASE-13939_1.patch, 
 HBASE-13939_2.patch, HBASE-13939_3.patch, HBASE-13939_branch-1.1.patch


 The getFirstKeyInBlock() in HFileReaderImpl is returning a BB. It is getting 
 used in seekBefore cases.  Because we return a BB we create a KeyOnlyKV once 
 for comparison
 {code}
   if (reader.getComparator()
       .compareKeyIgnoresMvcc(
           new KeyValue.KeyOnlyKeyValue(firstKey.array(), firstKey.arrayOffset(),
               firstKey.limit()), key) >= 0) 

[jira] [Commented] (HBASE-13923) Loaded region coprocessors are not reported in shell status command

2015-06-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599173#comment-14599173
 ] 

Hudson commented on HBASE-13923:


FAILURE: Integrated in HBase-1.3 #16 (See 
[https://builds.apache.org/job/HBase-1.3/16/])
HBASE-13923 Loaded region coprocessors are not reported in shell status command 
(Ashish Singhi) (tedyu: rev 41aa8412410fec33242afa40bb7c52c511eb30e2)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestClassLoading.java


 Loaded region coprocessors are not reported in shell status command
 ---

 Key: HBASE-13923
 URL: https://issues.apache.org/jira/browse/HBASE-13923
 Project: HBase
  Issue Type: Bug
  Components: regionserver, shell
Affects Versions: 1.1.0.1
Reporter: Lars George
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.0.2, 1.1.2, 1.3.0, 1.2.1

 Attachments: HBASE-13923-branch-1.0.patch, HBASE-13923-v1.patch, 
 HBASE-13923-v2.patch, HBASE-13923.patch


 I added a CP to a table using the shell's alter command. Now I tried to check 
 if it was loaded (short of resorting to parsing the logs). I recalled that the 
 refguide mentioned the {{status 'detailed'}} command, and tried that to no 
 avail.
 The UI shows the loaded class in the Software Attributes section, so the info 
 is there. But the shell status command (even after waiting 12+ hours) shows 
 nothing. Here is an example of a server that has it loaded according to 
 {{describe}} and the UI, but the shell lists this:
 {noformat}
 slave-1.internal.larsgeorge.com:16020 1434486031598
 requestsPerSecond=0.0, numberOfOnlineRegions=5, usedHeapMB=278, 
 maxHeapMB=941, numberOfStores=5, numberOfStorefiles=3, 
 storefileUncompressedSizeMB=2454, storefileSizeMB=2454, 
 compressionRatio=1., memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=32070, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=2086, totalStaticBloomSizeKB=480, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 coprocessors=[]
 testqauat:usertable,,1433747062257.4db0d7d73cbaac45cb8568d5b185e1f2.
 numberOfStores=1, numberOfStorefiles=0, 
 storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, 
 storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, 
 currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, 
 dataLocality=0.0
 
 testqauat:usertable,user0,1433747062257.f7c7fe3c7d26910010f40101b20f8d06.
 numberOfStores=1, numberOfStorefiles=0, 
 storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, 
 storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, 
 currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, 
 dataLocality=0.0
 
 testqauat:usertable,user1,1433747062257.dcd5395044732242dfed39b09aa05c36.
 numberOfStores=1, numberOfStorefiles=1, 
 storefileUncompressedSizeMB=820, lastMajorCompactionTimestamp=1434173025593, 
 storefileSizeMB=820, compressionRatio=1., memstoreSizeMB=0, 
 storefileIndexSizeMB=0, readRequestsCount=32070, writeRequestsCount=0, 
 rootIndexSizeKB=0, totalStaticIndexSizeKB=699, totalStaticBloomSizeKB=160, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 completeSequenceId=-1, dataLocality=1.0
 
 testqauat:usertable,user7,1433747062257.9277fd1d34909b0cb150707cbd7a3907.
 numberOfStores=1, numberOfStorefiles=1, 
 storefileUncompressedSizeMB=816, lastMajorCompactionTimestamp=1434283025585, 
 storefileSizeMB=816, compressionRatio=1., memstoreSizeMB=0, 
 storefileIndexSizeMB=0, readRequestsCount=0, writeRequestsCount=0, 
 rootIndexSizeKB=0, totalStaticIndexSizeKB=690, totalStaticBloomSizeKB=160, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 completeSequenceId=-1, dataLocality=1.0
 
 testqauat:usertable,user8,1433747062257.d930b52db8c7f07f3c3ab3e12e61a085.
 numberOfStores=1, numberOfStorefiles=1, 
 storefileUncompressedSizeMB=818, lastMajorCompactionTimestamp=1433771950960, 
 storefileSizeMB=818, compressionRatio=1., memstoreSizeMB=0, 
 storefileIndexSizeMB=0, readRequestsCount=0, writeRequestsCount=0, 
 rootIndexSizeKB=0, totalStaticIndexSizeKB=697, totalStaticBloomSizeKB=160, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 completeSequenceId=-1, dataLocality=1.0
 {noformat}
 The refguide 

[jira] [Updated] (HBASE-13863) Multi-wal feature breaks reported number and size of HLogs

2015-06-24 Thread Abhilash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhilash updated HBASE-13863:
-
Attachment: HBASE-13863-v1.patch

 Multi-wal feature breaks reported number and size of HLogs
 --

 Key: HBASE-13863
 URL: https://issues.apache.org/jira/browse/HBASE-13863
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Abhilash
 Attachments: HBASE-13863-v1.patch, HBASE-13863-v1.patch, 
 HBASE-13863.patch


 When multi-wal is enabled the number and size of retained HLogs is always 
 reported as zero.
 We should fix this so that the numbers are the sum of all retained logs.
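
A hedged sketch of the intended accounting; the walFactory/getWALs call and the
per-WAL accessors below are assumptions for illustration, not the actual patch:
{code}
// Sum the retained-log count and size over every WAL instance instead of
// reading them from a single (possibly wrong) WAL when multi-wal is enabled.
long numLogFiles = 0;
long logFileSize = 0;
for (WAL wal : walFactory.getWALs()) {   // hypothetical accessor over all WALs
  numLogFiles += wal.getNumLogFiles();   // hypothetical per-WAL accessors
  logFileSize += wal.getLogFileSize();
}
// Report the summed totals rather than the single-WAL (or zero) values.
{code}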



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13945) Prefix_Tree seekBefore() does not work correctly

2015-06-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14598968#comment-14598968
 ] 

Hudson commented on HBASE-13945:


FAILURE: Integrated in HBase-1.3 #15 (See 
[https://builds.apache.org/job/HBase-1.3/15/])
HBASE-13945 - Prefix_Tree seekBefore() does not work correctly (Ram) 
(ramkrishna: rev 92f4e30f458a2a9d0e2aec27da36aa84cdb9fa2f)
* hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java


 Prefix_Tree seekBefore() does not work correctly
 

 Key: HBASE-13945
 URL: https://issues.apache.org/jira/browse/HBASE-13945
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0, 0.98.2, 1.0.1, 1.1.0, 1.0.1.1, 1.1.0.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-13945_0.98.patch, HBASE-13945_0.98_1.patch, 
 HBASE-13945_0.98_2.patch, HBASE-13945_0.98_3.patch, 
 HBASE-13945_branch-1.1.patch, HBASE-13945_trunk.patch, 
 HBASE-13945_trunk_1.patch, HBASE-13945_trunk_2.patch, 
 HBASE-13945_trunk_3.patch


 This is related to the TestSeekTo test case where the seekBefore() does not 
 work with Prefix_Tree because of an issue in getFirstKeyInBlock(). In the 
 trunk and branch-1 changing the return type of getFirstKeyInBlock() from BB 
 to Cell resolved the problem, but the same cannot be done in 0.98. Hence we 
 need a change in the KvUtil.copyToNewBuffer API to handle this.  Since the 
 limit is made as the position - in seekBefore when we do 
 {code}
 byte[] firstKeyInCurrentBlock = Bytes.getBytes(firstKey);
 {code}
 in HFileReaderV2.seekBefore() we end up in an empty byte array and it would 
 not be the expected one based on which we try to seek to load a new block.
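
A tiny self-contained illustration of that empty-array behaviour (plain
java.nio; the copyRemaining helper below just mimics what a getBytes-style copy
does, it is not the HBase method):
{code}
import java.nio.ByteBuffer;
import java.util.Arrays;

public class EmptyFirstKeyDemo {
  // Mimics a getBytes-style copy: take the bytes between position and limit.
  static byte[] copyRemaining(ByteBuffer bb) {
    byte[] out = new byte[bb.remaining()];
    bb.duplicate().get(out);
    return out;
  }

  public static void main(String[] args) {
    ByteBuffer firstKey = ByteBuffer.wrap("row1/cf:q/key".getBytes());
    // If the encoder leaves limit == position, nothing "remains" to copy...
    firstKey.position(firstKey.limit());
    // ...so the copy is the empty byte[] that seekBefore() then trips over.
    System.out.println(Arrays.toString(copyRemaining(firstKey)));  // prints []
  }
}
{code}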



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13920) Exclude Java files generated from protobuf from javadoc

2015-06-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14598967#comment-14598967
 ] 

Hudson commented on HBASE-13920:


FAILURE: Integrated in HBase-1.3 #15 (See 
[https://builds.apache.org/job/HBase-1.3/15/])
HBASE-13920 Exclude org.apache.hadoop.hbase.protobuf.generated from javadoc 
generation (busbey: rev 578e34aa1bf85e273249255fef46b7f0fff997fe)
* pom.xml


 Exclude Java files generated from protobuf from javadoc
 ---

 Key: HBASE-13920
 URL: https://issues.apache.org/jira/browse/HBASE-13920
 Project: HBase
  Issue Type: Sub-task
Reporter: Gabor Liptak
Assignee: Gabor Liptak
Priority: Minor
 Fix For: 2.0.0, 1.2.0, 1.3.0

 Attachments: HBASE-13920.1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13939) Make HFileReaderImpl.getFirstKeyInBlock() to return a Cell

2015-06-24 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13939:
---
Status: Patch Available  (was: Open)

 Make HFileReaderImpl.getFirstKeyInBlock() to return a Cell
 --

 Key: HBASE-13939
 URL: https://issues.apache.org/jira/browse/HBASE-13939
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 2.0.0, 1.1.2

 Attachments: HBASE-13939.patch, HBASE-13939_1.patch, 
 HBASE-13939_2.patch, HBASE-13939_3.patch, HBASE-13939_branch-1.1.patch


 The getFirstKeyInBlock() in HFileReaderImpl is returning a BB. It is getting 
 used in seekBefore cases.  Because we return a BB we create a KeyOnlyKV once 
 for comparison
 {code}
   if (reader.getComparator()
       .compareKeyIgnoresMvcc(
           new KeyValue.KeyOnlyKeyValue(firstKey.array(), firstKey.arrayOffset(),
               firstKey.limit()), key) >= 0) {
 long previousBlockOffset = seekToBlock.getPrevBlockOffset();
 // The key we are interested in
 if (previousBlockOffset == -1) {
   // we have a 'problem', the key we want is the first of the file.
   return false;
 }
 
 {code}
 And if the compare fails we again create another KeyOnlyKv 
 {code}
   Cell firstKeyInCurrentBlock = new 
 KeyValue.KeyOnlyKeyValue(Bytes.getBytes(firstKey));
   loadBlockAndSeekToKey(seekToBlock, firstKeyInCurrentBlock, true, key, 
 true);
 {code}
 So one object will be enough and that can be returned by getFirstKeyInBlock. 
 Also will be useful when we go with Buffered backed server cell to change in 
 one place. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13939) Make HFileReaderImpl.getFirstKeyInBlock() to return a Cell

2015-06-24 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13939:
---
Attachment: HBASE-13939_3.patch

Updated patch for trunk. As the seekBefore() issue has been handled by 
HBASE-13945, this change can go into trunk alone. Renamed getFirstKeyInBlock to 
getFirstKeyCellInBlock since the return type has changed.
This is particularly useful for Prefix tree, where we already have the first 
key in the form of a cell, whereas for the other encodings we need to create 
such a cell from the BB. If we don't do this, then for PrefixTree we would have 
to copy the cell we already have back into a BB. For the other DBEs it is only 
an object creation.
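
To make the shape of the change concrete, a rough sketch (the method names come
from this comment, the surrounding plumbing is assumed rather than copied from
the patch):
{code}
// Before: the reader hands back a ByteBuffer and every caller wraps it itself.
ByteBuffer firstKeyBuf = getFirstKeyInBlock(seekToBlock);
Cell wrapped = new KeyValue.KeyOnlyKeyValue(
    firstKeyBuf.array(), firstKeyBuf.arrayOffset(), firstKeyBuf.limit());

// After: the reader returns a Cell directly (getFirstKeyCellInBlock). The
// PrefixTree seeker can hand back the cell it already holds, other DBEs build
// the wrapper once, and seekBefore() compares without any extra copy.
Cell firstKeyCell = getFirstKeyCellInBlock(seekToBlock);
if (reader.getComparator().compareKeyIgnoresMvcc(firstKeyCell, key) >= 0) {
  // fall back to the previous block, as before
}
{code}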

 Make HFileReaderImpl.getFirstKeyInBlock() to return a Cell
 --

 Key: HBASE-13939
 URL: https://issues.apache.org/jira/browse/HBASE-13939
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 2.0.0, 1.1.2

 Attachments: HBASE-13939.patch, HBASE-13939_1.patch, 
 HBASE-13939_2.patch, HBASE-13939_3.patch, HBASE-13939_branch-1.1.patch


 The getFirstKeyInBlock() in HFileReaderImpl is returning a BB. It is getting 
 used in seekBefore cases.  Because we return a BB we create a KeyOnlyKV once 
 for comparison
 {code}
   if (reader.getComparator()
       .compareKeyIgnoresMvcc(
           new KeyValue.KeyOnlyKeyValue(firstKey.array(), firstKey.arrayOffset(),
               firstKey.limit()), key) >= 0) {
 long previousBlockOffset = seekToBlock.getPrevBlockOffset();
 // The key we are interested in
 if (previousBlockOffset == -1) {
   // we have a 'problem', the key we want is the first of the file.
   return false;
 }
 
 {code}
 And if the compare fails we again create another KeyOnlyKv 
 {code}
   Cell firstKeyInCurrentBlock = new 
 KeyValue.KeyOnlyKeyValue(Bytes.getBytes(firstKey));
   loadBlockAndSeekToKey(seekToBlock, firstKeyInCurrentBlock, true, key, 
 true);
 {code}
 So one object will be enough and that can be returned by getFirstKeyInBlock. 
 Also will be useful when we go with Buffered backed server cell to change in 
 one place. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13863) Multi-wal feature breaks reported number and size of HLogs

2015-06-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-13863:
---
Status: Patch Available  (was: Open)

 Multi-wal feature breaks reported number and size of HLogs
 --

 Key: HBASE-13863
 URL: https://issues.apache.org/jira/browse/HBASE-13863
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Abhilash
 Attachments: HBASE-13863-v1.patch, HBASE-13863-v1.patch, 
 HBASE-13863.patch


 When multi-wal is enabled the number and size of retained HLogs is always 
 reported as zero.
 We should fix this so that the numbers are the sum of all retained logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13923) Loaded region coprocessors are not reported in shell status command

2015-06-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599165#comment-14599165
 ] 

Hudson commented on HBASE-13923:


FAILURE: Integrated in HBase-TRUNK #6598 (See 
[https://builds.apache.org/job/HBase-TRUNK/6598/])
HBASE-13923 Loaded region coprocessors are not reported in shell status command 
(Ashish Singhi) (tedyu: rev b7f241d73b79ec22db2c03cb6b384b76185f0f85)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestClassLoading.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java


 Loaded region coprocessors are not reported in shell status command
 ---

 Key: HBASE-13923
 URL: https://issues.apache.org/jira/browse/HBASE-13923
 Project: HBase
  Issue Type: Bug
  Components: regionserver, shell
Affects Versions: 1.1.0.1
Reporter: Lars George
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.0.2, 1.1.2, 1.3.0, 1.2.1

 Attachments: HBASE-13923-branch-1.0.patch, HBASE-13923-v1.patch, 
 HBASE-13923-v2.patch, HBASE-13923.patch


 I added a CP to a table using the shell's alter command. Now I tried to check 
 if it was loaded (short of resorting to parsing the logs). I recalled that the 
 refguide mentioned the {{status 'detailed'}} command, and tried that to no 
 avail.
 The UI shows the loaded class in the Software Attributes section, so the info 
 is there. But the shell status command (even after waiting 12+ hours) shows 
 nothing. Here is an example of a server that has it loaded according to 
 {{describe}} and the UI, but the shell lists this:
 {noformat}
 slave-1.internal.larsgeorge.com:16020 1434486031598
 requestsPerSecond=0.0, numberOfOnlineRegions=5, usedHeapMB=278, 
 maxHeapMB=941, numberOfStores=5, numberOfStorefiles=3, 
 storefileUncompressedSizeMB=2454, storefileSizeMB=2454, 
 compressionRatio=1., memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=32070, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=2086, totalStaticBloomSizeKB=480, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 coprocessors=[]
 testqauat:usertable,,1433747062257.4db0d7d73cbaac45cb8568d5b185e1f2.
 numberOfStores=1, numberOfStorefiles=0, 
 storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, 
 storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, 
 currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, 
 dataLocality=0.0
 
 testqauat:usertable,user0,1433747062257.f7c7fe3c7d26910010f40101b20f8d06.
 numberOfStores=1, numberOfStorefiles=0, 
 storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, 
 storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, 
 totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, 
 currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, 
 dataLocality=0.0
 
 testqauat:usertable,user1,1433747062257.dcd5395044732242dfed39b09aa05c36.
 numberOfStores=1, numberOfStorefiles=1, 
 storefileUncompressedSizeMB=820, lastMajorCompactionTimestamp=1434173025593, 
 storefileSizeMB=820, compressionRatio=1., memstoreSizeMB=0, 
 storefileIndexSizeMB=0, readRequestsCount=32070, writeRequestsCount=0, 
 rootIndexSizeKB=0, totalStaticIndexSizeKB=699, totalStaticBloomSizeKB=160, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 completeSequenceId=-1, dataLocality=1.0
 
 testqauat:usertable,user7,1433747062257.9277fd1d34909b0cb150707cbd7a3907.
 numberOfStores=1, numberOfStorefiles=1, 
 storefileUncompressedSizeMB=816, lastMajorCompactionTimestamp=1434283025585, 
 storefileSizeMB=816, compressionRatio=1., memstoreSizeMB=0, 
 storefileIndexSizeMB=0, readRequestsCount=0, writeRequestsCount=0, 
 rootIndexSizeKB=0, totalStaticIndexSizeKB=690, totalStaticBloomSizeKB=160, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 completeSequenceId=-1, dataLocality=1.0
 
 testqauat:usertable,user8,1433747062257.d930b52db8c7f07f3c3ab3e12e61a085.
 numberOfStores=1, numberOfStorefiles=1, 
 storefileUncompressedSizeMB=818, lastMajorCompactionTimestamp=1433771950960, 
 storefileSizeMB=818, compressionRatio=1., memstoreSizeMB=0, 
 storefileIndexSizeMB=0, readRequestsCount=0, writeRequestsCount=0, 
 rootIndexSizeKB=0, totalStaticIndexSizeKB=697, totalStaticBloomSizeKB=160, 
 totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, 
 completeSequenceId=-1, dataLocality=1.0
 {noformat}
 The 

[jira] [Commented] (HBASE-13814) AssignmentManager does not write the correct server name into Zookeeper when unassign region

2015-06-24 Thread cuijianwei (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599244#comment-14599244
 ] 

cuijianwei commented on HBASE-13814:


Thanks for your concern [~lhofhansl], and sorry for the late reply :). Yes, 
master.getServerName() is not the name of the server actually serving the 
region, and I think we need to save the right region server name into the znode 
in AssignmentManager#unassign so that AssignmentManager#isCarryingRegion will 
get the right result.

{quote}
I think we can simplify:
{code}
+  if (!regions.containsKey(region) || (serverName = regions.get(region)) 
== null) {
{code}
To
{code}
+  if ((serverName = regions.get(region)) == null) {
{code}
{quote}
Yes, it looks better, I will update the patch.

{quote}
Can we move:
{code}
+ServerName serverName = null;
{code}
Inside the synchronized?
{quote}

Do you mean moving it into this synchronized block:
{code}
synchronized (this.regions) {
  // Check if this region is currently assigned
  if ((serverName = regions.get(region)) == null) {
  ...
{code}
However, serverName is also used in another synchronized block:
{code}
synchronized (regionsInTransition) {
  state = regionsInTransition.get(encodedName);
  if (state == null) {
    // Create the znode in CLOSING state
    try {
      versionOfClosingNode = ZKAssign.createNodeClosing(
        master.getZooKeeper(), region, serverName); // <=== needs to be used here
      
{code}
so serverName would not be visible there if it were declared inside the first 
synchronized block?
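
To make the scoping point concrete, a tiny stand-alone sketch (toy maps and a
String server name, not the real AssignmentManager) showing why the variable
has to be declared before the first synchronized block if the second one is to
see it:
{code}
import java.util.HashMap;
import java.util.Map;

public class ScopeSketch {
  private final Map<String, String> regions = new HashMap<>();
  private final Map<String, String> regionsInTransition = new HashMap<>();

  void unassign(String region) {
    String serverName;                       // declared here so both blocks see it
    synchronized (regions) {
      if ((serverName = regions.get(region)) == null) {
        return;                              // region is not currently assigned
      }
    }
    synchronized (regionsInTransition) {
      // serverName is still in scope here; had it been declared inside the
      // first synchronized block it would not be visible at this point.
      regionsInTransition.put(region, serverName);
    }
  }
}
{code}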

 AssignmentManager does not write the correct server name into Zookeeper when 
 unassign region
 

 Key: HBASE-13814
 URL: https://issues.apache.org/jira/browse/HBASE-13814
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Affects Versions: 0.94.27
Reporter: cuijianwei
Priority: Minor
 Attachments: HBASE-13814-0.94-v1.patch


 When a region is moved, it is first unassigned from the corresponding 
 region server by the method AssignmentManager#unassign(). AssignmentManager 
 writes the region info and the server name into Zookeeper with the 
 following code:
 {code}
   versionOfClosingNode = ZKAssign.createNodeClosing(
 master.getZooKeeper(), region, master.getServerName());
 {code}
 It seems that the AssignmentManager misuses the master's name as the server 
 name. Suppose the ROOT region is being moved and the region server holding the 
 ROOT region has just crashed. The Master will start a 
 MetaServerShutdownHandler only if the server is judged as holding a meta 
 region. The judgment is done by the method AssignmentManager#isCarryingRegion, 
 which first checks the server name in Zookeeper:
 {code}
 ServerName addressFromZK = (data != null && data.getOrigin() != null) ?
   data.getOrigin() : null;
 if (addressFromZK != null) {
   // if we get something from ZK, we will use the data
   boolean matchZK = (addressFromZK != null &&
 addressFromZK.equals(serverName));
 {code}
 The wrong server name from Zookeeper means the server is not judged as 
 holding the ROOT region, so the master starts a plain ServerShutdownHandler. 
 Unlike MetaServerShutdownHandler, the ServerShutdownHandler won't assign the 
 ROOT region first, so the ROOT region is never assigned again. In our test 
 environment, we hit this problem when moving the ROOT region and stopping 
 the region server concurrently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13961) SnapshotManager#initialize should set snapshotLayoutVersion if it allows to create snapshot with old layout format

2015-06-24 Thread cuijianwei (JIRA)
cuijianwei created HBASE-13961:
--

 Summary: SnapshotManager#initialize should set 
snapshotLayoutVersion if it allows to create snapshot with old layout format
 Key: HBASE-13961
 URL: https://issues.apache.org/jira/browse/HBASE-13961
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.13
Reporter: cuijianwei
Priority: Minor


In 0.98, it seems the snapshot layout version can be configured via 
hbase.snapshot.format.version. However, SnapshotManager does not set 
snapshotLayoutVersion in its initialize(...) method, so it always creates 
snapshots with the latest layout format even when hbase.snapshot.format.version 
is set to SnapshotManifestV1.DESCRIPTOR_VERSION in the configuration.
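
A hedged sketch of the behaviour the report asks for; apart from the config key
and the DESCRIPTOR_VERSION constant named above, the class, field and default
below are assumptions, not the actual SnapshotManager code:
{code}
import org.apache.hadoop.conf.Configuration;

// Illustrative only: shows the config lookup the report says is missing.
class SnapshotLayoutSketch {
  static final int LATEST_LAYOUT_VERSION = 2;   // assumed stand-in for the V2 constant
  private int snapshotLayoutVersion;

  void initialize(Configuration conf) {
    // Respect hbase.snapshot.format.version if the operator configured it,
    // instead of silently creating snapshots with the newest layout.
    this.snapshotLayoutVersion =
        conf.getInt("hbase.snapshot.format.version", LATEST_LAYOUT_VERSION);
  }
}
{code}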



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12345) Unsafe based Comparator for BB

2015-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599311#comment-14599311
 ] 

Hadoop QA commented on HBASE-12345:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12741588/HBASE-12345_V3.patch
  against master branch at commit b7f241d73b79ec22db2c03cb6b384b76185f0f85.
  ATTACHMENT ID: 12741588

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail with Hadoop version 2.3.0.

Compilation errors resume:
[ERROR] COMPILATION ERROR : 
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[75,30]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2064,17]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[42,30]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[92,11]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2086,15]
 cannot find symbol
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on 
project hbase-server: Compilation failure: Compilation failure:
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[75,30]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: package org.apache.hadoop.hdfs
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2064,17]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: class org.apache.hadoop.hbase.util.FSUtils
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[42,30]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: package org.apache.hadoop.hdfs
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[92,11]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: class 
org.apache.hadoop.hbase.regionserver.MetricsRegionServerWrapperImpl
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2086,15]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: class org.apache.hadoop.hbase.util.FSUtils
[ERROR] - [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn goals -rf :hbase-server


Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14541//console

This message is automatically generated.

 Unsafe based Comparator for BB 
 ---

 Key: HBASE-12345
 URL: https://issues.apache.org/jira/browse/HBASE-12345
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: HBASE-12345.patch, HBASE-12345_V2.patch, 
 HBASE-12345_V3.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13961) SnapshotManager#initialize should set snapshotLayoutVersion if it allows to create snapshot with old layout format

2015-06-24 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-13961:

Status: Patch Available  (was: Open)

 SnapshotManager#initialize should set snapshotLayoutVersion if it allows to 
 create snapshot with old layout format
 --

 Key: HBASE-13961
 URL: https://issues.apache.org/jira/browse/HBASE-13961
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.13
Reporter: cuijianwei
Priority: Minor
 Attachments: HBASE-13961-0.98-v1.patch


 In 0.98, it seems the snapshot layout version can be configured via 
 hbase.snapshot.format.version. However, SnapshotManager does not set 
 snapshotLayoutVersion in its initialize(...) method, so it always creates 
 snapshots with the latest layout format even when hbase.snapshot.format.version 
 is set to SnapshotManifestV1.DESCRIPTOR_VERSION in the configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13961) SnapshotManager#initialize should set snapshotLayoutVersion if it allows to create snapshot with old layout format

2015-06-24 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599329#comment-14599329
 ] 

Matteo Bertozzi commented on HBASE-13961:
-

+1

 SnapshotManager#initialize should set snapshotLayoutVersion if it allows to 
 create snapshot with old layout format
 --

 Key: HBASE-13961
 URL: https://issues.apache.org/jira/browse/HBASE-13961
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.13
Reporter: cuijianwei
Priority: Minor
 Attachments: HBASE-13961-0.98-v1.patch


 In 0.98, it seems the snapshot layout version can be configured via 
 hbase.snapshot.format.version. However, SnapshotManager does not set 
 snapshotLayoutVersion in its initialize(...) method, so it always creates 
 snapshots with the latest layout format even when hbase.snapshot.format.version 
 is set to SnapshotManifestV1.DESCRIPTOR_VERSION in the configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13961) SnapshotManager#initialize should set snapshotLayoutVersion if it allows to create snapshot with old layout format

2015-06-24 Thread cuijianwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

cuijianwei updated HBASE-13961:
---
Attachment: HBASE-13961-0.98-v1.patch

 SnapshotManager#initialize should set snapshotLayoutVersion if it allows to 
 create snapshot with old layout format
 --

 Key: HBASE-13961
 URL: https://issues.apache.org/jira/browse/HBASE-13961
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.13
Reporter: cuijianwei
Priority: Minor
 Attachments: HBASE-13961-0.98-v1.patch


 In 0.98, it seems the snapshot layout version can be configured via 
 hbase.snapshot.format.version. However, SnapshotManager does not set 
 snapshotLayoutVersion in its initialize(...) method, so it always creates 
 snapshots with the latest layout format even when hbase.snapshot.format.version 
 is set to SnapshotManifestV1.DESCRIPTOR_VERSION in the configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12345) Unsafe based Comparator for BB

2015-06-24 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-12345:
---
Attachment: HBASE-12345_V3.patch

Patch V3 with a bit more optimization, as was done in the Bytes unsafe compare 
logic. The reverse-bytes op is done only at the time of comparison; when the 
bytes are the same there is no need for it at all.
Also added clear documentation in the UnsafeUtil APIs that we treat bytes as 
big endian.
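
A minimal, self-contained sketch of that optimization, using plain ByteBuffer
reads in place of Unsafe (so it illustrates the idea, it is not the patch
itself): the byte swap is paid only when two words actually differ.
{code}
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class LexCompareSketch {
  /** Compare two native-order words as unsigned big-endian (lexicographic) values. */
  static int compareWords(long l, long r) {
    if (l == r) {
      return 0;                        // equal words: skip the byte swap entirely
    }
    if (ByteOrder.nativeOrder() == ByteOrder.LITTLE_ENDIAN) {
      l = Long.reverseBytes(l);        // only now pay for the reversal
      r = Long.reverseBytes(r);
    }
    return Long.compareUnsigned(l, r);
  }

  public static void main(String[] args) {
    byte[] x = {0, 0, 0, 0, 0, 0, 0, 1};
    byte[] y = {0, 0, 0, 0, 0, 0, 0, 2};
    // Native-order reads stand in for the raw Unsafe reads the patch uses.
    long lx = ByteBuffer.wrap(x).order(ByteOrder.nativeOrder()).getLong();
    long ly = ByteBuffer.wrap(y).order(ByteOrder.nativeOrder()).getLong();
    System.out.println(compareWords(lx, ly) < 0);  // true: x sorts before y
  }
}
{code}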

 Unsafe based Comparator for BB 
 ---

 Key: HBASE-12345
 URL: https://issues.apache.org/jira/browse/HBASE-12345
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: HBASE-12345.patch, HBASE-12345_V2.patch, 
 HBASE-12345_V3.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-8642) [Snapshot] List and delete snapshot by table

2015-06-24 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-8642:
-
Attachment: HBASE-8642.patch

 [Snapshot] List and delete snapshot by table
 

 Key: HBASE-8642
 URL: https://issues.apache.org/jira/browse/HBASE-8642
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.0, 0.95.0, 0.95.1, 0.95.2
Reporter: Julian Zhou
Assignee: Julian Zhou
Priority: Minor
 Fix For: 2.0.0

 Attachments: 8642-trunk-0.95-v0.patch, 8642-trunk-0.95-v1.patch, 
 8642-trunk-0.95-v2.patch, HBASE-8642.patch


 Support list and delete snapshot by table name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13814) AssignmentManager does not write the correct server name into Zookeeper when unassign region

2015-06-24 Thread cuijianwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

cuijianwei updated HBASE-13814:
---
Attachment: HBASE-13814-0.94-v2.patch

 AssignmentManager does not write the correct server name into Zookeeper when 
 unassign region
 

 Key: HBASE-13814
 URL: https://issues.apache.org/jira/browse/HBASE-13814
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Affects Versions: 0.94.27
Reporter: cuijianwei
Priority: Minor
 Attachments: HBASE-13814-0.94-v1.patch, HBASE-13814-0.94-v2.patch


 When a region is moved, it is first unassigned from the corresponding 
 region server by the method AssignmentManager#unassign(). AssignmentManager 
 writes the region info and the server name into Zookeeper with the 
 following code:
 {code}
   versionOfClosingNode = ZKAssign.createNodeClosing(
 master.getZooKeeper(), region, master.getServerName());
 {code}
 It seems that the AssignmentManager misuses the master's name as the server 
 name. Suppose the ROOT region is being moved and the region server holding the 
 ROOT region has just crashed. The Master will start a 
 MetaServerShutdownHandler only if the server is judged as holding a meta 
 region. The judgment is done by the method AssignmentManager#isCarryingRegion, 
 which first checks the server name in Zookeeper:
 {code}
 ServerName addressFromZK = (data != null && data.getOrigin() != null) ?
   data.getOrigin() : null;
 if (addressFromZK != null) {
   // if we get something from ZK, we will use the data
   boolean matchZK = (addressFromZK != null &&
 addressFromZK.equals(serverName));
 {code}
 The wrong server name from Zookeeper means the server is not judged as 
 holding the ROOT region, so the master starts a plain ServerShutdownHandler. 
 Unlike MetaServerShutdownHandler, the ServerShutdownHandler won't assign the 
 ROOT region first, so the ROOT region is never assigned again. In our test 
 environment, we hit this problem when moving the ROOT region and stopping 
 the region server concurrently.
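
A hedged sketch of the direction the attached patches appear to take
(illustrative only, not the committed change): write the name of the server
actually carrying the region, rather than the master's own name, into the
CLOSING znode.
{code}
// Sketch: look up the server currently hosting the region before creating
// the CLOSING znode, instead of passing master.getServerName().
ServerName serverName;
synchronized (this.regions) {
  serverName = regions.get(region);   // server currently carrying the region
}
if (serverName != null) {
  versionOfClosingNode = ZKAssign.createNodeClosing(
      master.getZooKeeper(), region, serverName);
}
{code}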



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13960) HConnection stuck with UnknownHostException

2015-06-24 Thread Kurt Young (JIRA)
Kurt Young created HBASE-13960:
--

 Summary: HConnection stuck with UnknownHostException 
 Key: HBASE-13960
 URL: https://issues.apache.org/jira/browse/HBASE-13960
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 0.98.8
Reporter: Kurt Young


When doing a put/get against HBase, if we hit a temporary DNS failure while 
resolving the RS's host, the error is never recovered from; put/get fails with 
UnknownHostException forever.

I checked the code, and the reason may be:
1. when RegionServerCallable or MultiServerCallable prepare(), it gets a 
ClientService.BlockingInterface stub from HConnection
2. in HConnectionImplementation::getClient, it caches the stub with a 
BlockingRpcChannelImplementation
3. in BlockingRpcChannelImplementation(), 
 this.isa = new InetSocketAddress(sn.getHostname(), sn.getPort()); if we 
hit a temporary DNS failure there, the address in isa will be null.
4. then we launch the real rpc call, and the resulting stack is:
Caused by: java.net.UnknownHostException: unknown host: 
r101072047.sqa.zmf.tbsite.net
at 
org.apache.hadoop.hbase.ipc.RpcClient$Connection.<init>(RpcClient.java:385)
at 
org.apache.hadoop.hbase.ipc.RpcClient.createConnection(RpcClient.java:351)
at 
org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1523)
at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1435)
at 
org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1654)
at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1712)

Besides, I noticed there is a protection in RpcClient:
if (remoteId.getAddress().isUnresolved()) {
throw new UnknownHostException("unknown host: " + 
remoteId.getAddress().getHostName());
  }
Shouldn't we do something when this situation occurs?
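
An illustrative sketch of the kind of guard being asked about; the class and
method names below are hypothetical, not the actual HConnection/RpcClient code.
The idea is to resolve the address up front and never cache a stub built on an
unresolved address, so a transient DNS failure is not remembered forever.
{code}
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

class StubGuardSketch {
  InetSocketAddress resolveOrFail(String host, int port) throws UnknownHostException {
    InetSocketAddress isa = new InetSocketAddress(host, port);
    if (isa.isUnresolved()) {
      // Surface the failure now; the caller can retry and, on a later success,
      // build (and only then cache) a stub bound to a resolved address.
      throw new UnknownHostException("unknown host: " + host);
    }
    return isa;
  }
}
{code}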




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13863) Multi-wal feature breaks reported number and size of HLogs

2015-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600101#comment-14600101
 ] 

Hadoop QA commented on HBASE-13863:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12741666/HBASE-13863-v1.patch
  against master branch at commit 578adca6ee961f558cd2b2246156f9822cf4f7a2.
  ATTACHMENT ID: 12741666

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.mapreduce.TestTableSnapshotInputFormat
  org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFailure(TestExportSnapshot.java:317)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14552//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14552//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14552//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14552//console

This message is automatically generated.

 Multi-wal feature breaks reported number and size of HLogs
 --

 Key: HBASE-13863
 URL: https://issues.apache.org/jira/browse/HBASE-13863
 Project: HBase
  Issue Type: Bug
  Components: regionserver, UI
Reporter: Elliott Clark
Assignee: Abhilash
 Attachments: HBASE-13863-v1.patch, HBASE-13863-v1.patch, 
 HBASE-13863-v1.patch, HBASE-13863.patch


 When multi-wal is enabled the number and size of retained HLogs is always 
 reported as zero.
 We should fix this so that the numbers are the sum of all retained logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-13965) Stochastic Load Balancer JMX Metrics

2015-06-24 Thread Lei Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chen reassigned HBASE-13965:


Assignee: Lei Chen

 Stochastic Load Balancer JMX Metrics
 

 Key: HBASE-13965
 URL: https://issues.apache.org/jira/browse/HBASE-13965
 Project: HBase
  Issue Type: Improvement
  Components: Balancer, metrics
Reporter: Lei Chen
Assignee: Lei Chen

 Today’s default HBase load balancer (the Stochastic load balancer) is cost 
 function based. The cost function weights are tunable but no visibility into 
 those cost function results is directly provided.
 A driving example is a cluster we have been tuning which has skewed rack size 
 (one rack has half the nodes of the other few racks). We are tuning the 
 cluster for uniform response time from all region servers with the ability to 
 tolerate a rack failure. Balancing LocalityCost, RegionReplicaRack Cost and 
 RegionCountSkew Cost is difficult without a way to attribute each cost 
 function’s contribution to overall cost. 
 What this jira proposes is to provide visibility via JMX into each cost 
 function of the stochastic load balancer, as well as the overall cost of the 
 balancing plan.
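
A minimal sketch of the kind of JMX surface being proposed; the interface and
attribute names below are hypothetical, not the implemented metrics:
{code}
// One read-only attribute per cost function, plus the overall plan cost,
// so operators can see how each function contributes while tuning weights.
public interface StochasticBalancerCostsMBean {
  /** Overall cost of the last computed balancing plan. */
  double getTotalCost();

  /** Contribution of the locality cost function to the last plan. */
  double getLocalityCost();

  /** Contribution of the region count skew cost function to the last plan. */
  double getRegionCountSkewCost();

  /** Contribution of the region replica rack cost function to the last plan. */
  double getRegionReplicaRackCost();
}
{code}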



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13958) RESTApiClusterManager calls kill() instead of suspend() and resume()

2015-06-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600276#comment-14600276
 ] 

Hudson commented on HBASE-13958:


FAILURE: Integrated in HBase-1.0 #974 (See 
[https://builds.apache.org/job/HBase-1.0/974/])
HBASE-13958 RESTApiClusterManager calls kill() instead of suspend() and 
resume() (matteo.bertozzi: rev f444d45b7678472a96ec422324fc75367c4a699b)
* hbase-it/src/test/java/org/apache/hadoop/hbase/RESTApiClusterManager.java


 RESTApiClusterManager calls kill() instead of suspend() and resume()
 

 Key: HBASE-13958
 URL: https://issues.apache.org/jira/browse/HBASE-13958
 Project: HBase
  Issue Type: Bug
  Components: integration tests
Affects Versions: 2.0.0, 1.2.0, 1.1.0.1
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 2.0.0, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-13958-v0.patch


 suspend() and resume() of the REST ClusterManager are calling the wrong 
 method.
 {code}
   @Override
   public void suspend(ServiceType service, String hostname, int port) throws 
 IOException {
 hBaseClusterManager.kill(service, hostname, port);
   }
   @Override
   public void resume(ServiceType service, String hostname, int port) throws 
 IOException {
 hBaseClusterManager.kill(service, hostname, port);
   }
 {code}
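
 Presumably the fix is simply to delegate to the matching methods; a sketch of
 that shape, assuming the wrapped cluster manager exposes suspend()/resume()
 with the same signature (the attached patch may differ in detail):
 {code}
   @Override
   public void suspend(ServiceType service, String hostname, int port)
       throws IOException {
     hBaseClusterManager.suspend(service, hostname, port);  // was: kill(...)
   }

   @Override
   public void resume(ServiceType service, String hostname, int port)
       throws IOException {
     hBaseClusterManager.resume(service, hostname, port);   // was: kill(...)
   }
 {code}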



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13964) Region normalization for tables under namespace quota

2015-06-24 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600305#comment-14600305
 ] 

Mikhail Antonov commented on HBASE-13964:
-

I assume table.getNamespaceAsString() never returns null, so we don't need to 
check before this call? I'd add debug logging if table was rejected for this 
reason. +1 for the patch with logging.

bq. I think this is outside the scope of this JIRA since big change in Quota 
manager is involved.
I agree. Just wanted to gather feedback on this idea as possible further step.
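
A sketch of the check being discussed (the quota-manager accessor below is an
assumption for illustration; the real patch may use a different API): skip
normalization for tables whose namespace has a quota, and log why.
{code}
// Inside the normalization loop over candidate tables:
if (quotaManager.getNamespaceQuota(table.getNamespaceAsString()) != null) {
  LOG.debug("Skipping normalization for " + table + ": namespace "
      + table.getNamespaceAsString() + " has a quota");
  continue;   // leave tables under namespace quota untouched
}
{code}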

 Region normalization for tables under namespace quota
 -

 Key: HBASE-13964
 URL: https://issues.apache.org/jira/browse/HBASE-13964
 Project: HBase
  Issue Type: Brainstorming
  Components: Balancer, Usability
Reporter: Mikhail Antonov
Assignee: Ted Yu
 Attachments: 13964-v1.txt


 As [~te...@apache.org] pointed out in HBASE-13103, we need to discuss how to 
 normalize regions of tables under namespace control. What was proposed is to 
 disable normalization of such tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13964) Region normalization for tables under namespace quota

2015-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600307#comment-14600307
 ] 

Hadoop QA commented on HBASE-13964:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12741702/13964-v1.txt
  against master branch at commit 2df3236a4eee48bf723213a7c4ff3d29c832c8cf.
  ATTACHMENT ID: 12741702

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14554//console

This message is automatically generated.

 Region normalization for tables under namespace quota
 -

 Key: HBASE-13964
 URL: https://issues.apache.org/jira/browse/HBASE-13964
 Project: HBase
  Issue Type: Brainstorming
  Components: Balancer, Usability
Reporter: Mikhail Antonov
Assignee: Ted Yu
 Attachments: 13964-v1.txt


 As [~te...@apache.org] pointed out in HBASE-13103, we need to discuss how to 
 normalize regions of tables under namespace control. What was proposed is to 
 disable normalization of such tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13214) Remove deprecated and unused methods from HTable class

2015-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600309#comment-14600309
 ] 

Hadoop QA commented on HBASE-13214:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12741690/HBASE-13214-v3.patch
  against master branch at commit 578adca6ee961f558cd2b2246156f9822cf4f7a2.
  ATTACHMENT ID: 12741690

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 126 
new or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14553//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14553//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14553//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14553//console

This message is automatically generated.

 Remove deprecated and unused methods from HTable class
 --

 Key: HBASE-13214
 URL: https://issues.apache.org/jira/browse/HBASE-13214
 Project: HBase
  Issue Type: Sub-task
  Components: API
Affects Versions: 2.0.0
Reporter: Mikhail Antonov
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-13214-v1.patch, HBASE-13214-v2-again-v1.patch, 
 HBASE-13214-v2-again.patch, HBASE-13214-v2.patch, HBASE-13214-v3.patch, 
 HBASE-13214-v3.patch, HBASE-13214.patch


 Methods like #getRegionLocation(), #isTableEnabled() etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13864) HColumnDescriptor should parse the output from master and from describe for ttl

2015-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600086#comment-14600086
 ] 

Hadoop QA commented on HBASE-13864:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12740267/HBASE-13864-3.patch
  against master branch at commit 578adca6ee961f558cd2b2246156f9822cf4f7a2.
  ATTACHMENT ID: 12740267

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesSplitRecovery
  org.apache.hadoop.hbase.TestRegionRebalancing
  org.apache.hadoop.hbase.mapreduce.TestMultiTableInputFormat

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.phoenix.mapreduce.IndexToolIT.testSecondaryIndex(IndexToolIT.java:131)
at 
org.apache.phoenix.mapreduce.IndexToolIT.testMutableLocalIndex(IndexToolIT.java:89)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14550//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14550//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14550//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14550//console

This message is automatically generated.

 HColumnDescriptor should parse the output from master and from describe for 
 ttl
 ---

 Key: HBASE-13864
 URL: https://issues.apache.org/jira/browse/HBASE-13864
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Elliott Clark
Assignee: Ashu Pachauri
 Attachments: HBASE-13864-1.patch, HBASE-13864-2.patch, 
 HBASE-13864-3.patch, HBASE-13864.patch


 The TTL printing on HColumnDescriptor adds a human readable time. When using 
 that string for the create command it throws an error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13214) Remove deprecated and unused methods from HTable class

2015-06-24 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600129#comment-14600129
 ] 

Nick Dimiduk commented on HBASE-13214:
--

Went through the patch, nice bit of cleanup. I'd feel better with a clean 
buildbot run, but those seem to be hard to come by these days. Let's see how 
this next one goes, and please [~ashish singhi] do keep an eye on them after, 
but yeah, looks good to me. +1

[~stack] [~enis] [~sduskis] you folks care to weigh in here?

 Remove deprecated and unused methods from HTable class
 --

 Key: HBASE-13214
 URL: https://issues.apache.org/jira/browse/HBASE-13214
 Project: HBase
  Issue Type: Sub-task
  Components: API
Affects Versions: 2.0.0
Reporter: Mikhail Antonov
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-13214-v1.patch, HBASE-13214-v2-again-v1.patch, 
 HBASE-13214-v2-again.patch, HBASE-13214-v2.patch, HBASE-13214-v3.patch, 
 HBASE-13214-v3.patch, HBASE-13214.patch


 Methods like #getRegionLocation(), #isTableEnabled() etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13964) Region normalization for tables under namespace quota

2015-06-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-13964:
---
Status: Patch Available  (was: Open)

 Region normalization for tables under namespace quota
 -

 Key: HBASE-13964
 URL: https://issues.apache.org/jira/browse/HBASE-13964
 Project: HBase
  Issue Type: Brainstorming
  Components: Balancer, Usability
Reporter: Mikhail Antonov
Assignee: Ted Yu
 Attachments: 13964-v1.txt


 As [~te...@apache.org] pointed out in HBASE-13103, we need to discuss how to 
 normalize regions of tables under namespace control. What was proposed is to 
 disable normalization of such tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-13964) Region normalization for tables under namespace quota

2015-06-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-13964:
--

Assignee: Ted Yu

 Region normalization for tables under namespace quota
 -

 Key: HBASE-13964
 URL: https://issues.apache.org/jira/browse/HBASE-13964
 Project: HBase
  Issue Type: Brainstorming
  Components: Balancer, Usability
Reporter: Mikhail Antonov
Assignee: Ted Yu
 Attachments: 13964-v1.txt


 As [~te...@apache.org] pointed out in HBASE-13103, we need to discuss how to 
 normalize regions of tables under namespace control. What was proposed is to 
 disable normalization of such tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13964) Region normalization for tables under namespace quota

2015-06-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-13964:
---
Summary: Region normalization for tables under namespace quota  (was: 
Region normalization for tables under namespace quote)

 Region normalization for tables under namespace quota
 -

 Key: HBASE-13964
 URL: https://issues.apache.org/jira/browse/HBASE-13964
 Project: HBase
  Issue Type: Brainstorming
  Components: Balancer, Usability
Reporter: Mikhail Antonov
 Attachments: 13964-v1.txt


 As [~te...@apache.org] pointed out in HBASE-13103, we need to discuss how to 
 normalize regions of tables under namespace control. What was proposed is to 
 disable normalization of such tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13964) Region normalization for tables under namespace quote

2015-06-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600177#comment-14600177
 ] 

Ted Yu commented on HBASE-13964:


bq. would you propose to add an additional guard for that?
See attached patch for the guard.

bq. based on some storage-level metrics, like total region size in bytes on HDFS
I think this is outside the scope of this JIRA, since a big change in the Quota 
manager would be involved.
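
For illustration only, a rough sketch of what such a guard could look like in the normalization loop; the names here (quotasEnabled, namespaceHasQuota, computePlanForTable) are placeholders for this sketch, not the attached patch:
{code}
// Hypothetical sketch: skip normalization for tables in quota-managed namespaces.
// quotasEnabled and namespaceHasQuota() are made-up helper names.
for (TableName table : tablesToNormalize) {
  if (quotasEnabled && namespaceHasQuota(table.getNamespaceAsString())) {
    LOG.debug("Skipping normalization for " + table + ": namespace is under quota");
    continue;
  }
  plans.addAll(normalizer.computePlanForTable(table));  // placeholder call
}
{code}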

 Region normalization for tables under namespace quote
 -

 Key: HBASE-13964
 URL: https://issues.apache.org/jira/browse/HBASE-13964
 Project: HBase
  Issue Type: Brainstorming
  Components: Balancer, Usability
Reporter: Mikhail Antonov
 Attachments: 13964-v1.txt


 As [~te...@apache.org] pointed out in HBASE-13103, we need to discuss how to 
 normalize regions of tables under namespace control. What was proposed is to 
 disable normalization of such tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13964) Region normalization for tables under namespace quote

2015-06-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-13964:
---
Attachment: 13964-v1.txt

 Region normalization for tables under namespace quote
 -

 Key: HBASE-13964
 URL: https://issues.apache.org/jira/browse/HBASE-13964
 Project: HBase
  Issue Type: Brainstorming
  Components: Balancer, Usability
Reporter: Mikhail Antonov
 Attachments: 13964-v1.txt


 As [~te...@apache.org] pointed out in HBASE-13103, we need to discuss how to 
 normalize regions of tables under namespace control. What was proposed is to 
 disable normalization of such tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13835) KeyValueHeap.current might be in heap when exception happens in pollRealKV

2015-06-24 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600192#comment-14600192
 ] 

Nick Dimiduk commented on HBASE-13835:
--

+1

 KeyValueHeap.current might be in heap when exception happens in pollRealKV
 --

 Key: HBASE-13835
 URL: https://issues.apache.org/jira/browse/HBASE-13835
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Reporter: zhouyingchao
Assignee: zhouyingchao
 Attachments: HBASE-13835-001.patch, HBASE-13835-002.patch, 
 HBASE-13835-002.patch, HBASE-13835-branch1-001.patch, 
 HBASE-13835_branch-1.patch


 In a 0.94 hbase cluster, we found a NPE with following stack:
 {code}
 Exception in thread regionserver21600.leaseChecker 
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.KeyValue$KVComparator.compare(KeyValue.java:1530)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:225)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:201)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:191)
 at 
 java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:641)
 at java.util.PriorityQueue.siftDown(PriorityQueue.java:612)
 at java.util.PriorityQueue.poll(PriorityQueue.java:523)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:241)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.close(StoreScanner.java:355)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:237)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.close(HRegion.java:4302)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer$ScannerListener.leaseExpired(HRegionServer.java:3033)
 at org.apache.hadoop.hbase.regionserver.Leases.run(Leases.java:119)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 Before this NPE, an exception happened in pollRealKV, which we think is the 
 culprit of the NPE.
 {code}
 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer:
 java.io.IOException: Could not reseek StoreFileScanner[HFileScanner for 
 reader reader=
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:180)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:371)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:366)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:116)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:455)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:154)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:4124)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:4196)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4067)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4057)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.internalNext(HRegionServer.java:2898)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2833)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2815)
 at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:337)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1583)
 {code}
 Simply put, if an exception happens in pollRealKV(), KeyValueHeap.current might 
 still be in the heap. Later on, when KeyValueHeap.close() is called, current is 
 closed first. However, since it might still be in the heap, it would either be 
 closed again or its peek() (which is null after it is closed) would be called by 
 the heap's poll(). Neither case is expected.
 Although we caught this on 0.94, from the code the issue still appears to exist in trunk. 
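 For illustration, one defensive shape a fix could take, assuming the 0.94-era 
 KeyValueHeap fields (current, heap); this is only a sketch, not the attached patch:
 {code}
 // Hypothetical close() that tolerates 'current' still sitting in the heap.
 // Draining the heap before closing anything keeps the comparator away from
 // already-closed scanners, and 'current' is closed exactly once.
 public void close() {
   boolean currentClosed = false;
   if (this.heap != null) {
     KeyValueScanner scanner;
     while ((scanner = this.heap.poll()) != null) {
       if (scanner == this.current) {
         currentClosed = true;
       }
       scanner.close();
     }
   }
   if (this.current != null && !currentClosed) {
     this.current.close();
   }
 }
 {code}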



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13958) RESTApiClusterManager calls kill() instead of suspend() and resume()

2015-06-24 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-13958:

Affects Version/s: 1.0.1.1
Fix Version/s: 1.0.2

 RESTApiClusterManager calls kill() instead of suspend() and resume()
 

 Key: HBASE-13958
 URL: https://issues.apache.org/jira/browse/HBASE-13958
 Project: HBase
  Issue Type: Bug
  Components: integration tests
Affects Versions: 2.0.0, 1.2.0, 1.0.1.1, 1.1.0.1
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Minor
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.2, 1.3.0

 Attachments: HBASE-13958-v0.patch


 suspend() and resume() of the REST ClusterManager are calling the wrong 
 method.
 {code}
   @Override
   public void suspend(ServiceType service, String hostname, int port) throws 
 IOException {
 hBaseClusterManager.kill(service, hostname, port);
   }
   @Override
   public void resume(ServiceType service, String hostname, int port) throws 
 IOException {
 hBaseClusterManager.kill(service, hostname, port);
   }
 {code}
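 Presumably the fix is just to delegate to the matching operations; a minimal sketch, 
 assuming hBaseClusterManager exposes suspend() and resume() with the same signatures 
 (not verified against the attached patch):
 {code}
   @Override
   public void suspend(ServiceType service, String hostname, int port) throws IOException {
     // delegate to the matching operation instead of kill()
     hBaseClusterManager.suspend(service, hostname, port);
   }

   @Override
   public void resume(ServiceType service, String hostname, int port) throws IOException {
     hBaseClusterManager.resume(service, hostname, port);
   }
 {code}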



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13948) Expand hadoop2 versions built on the pre-commit

2015-06-24 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13948:
-
Attachment: HBASE-13948.01.patch

Okay, let's try again. This is what I pushed. Thanks for keeping me honest 
[~busbey], [~te...@apache.org].

 Expand hadoop2 versions built on the pre-commit
 ---

 Key: HBASE-13948
 URL: https://issues.apache.org/jira/browse/HBASE-13948
 Project: HBase
  Issue Type: Task
  Components: build
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 2.0.0

 Attachments: 13948.patch, HBASE-13948-addendum.patch, 
 HBASE-13948.01.patch


 For the HBase 1.1 line I've been validating builds against the following 
 hadoop versions: 2.2.0 2.3.0 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0. Let's 
 do the same in pre-commit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-13948) Expand hadoop2 versions built on the pre-commit

2015-06-24 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HBASE-13948.
--
Resolution: Fixed

 Expand hadoop2 versions built on the pre-commit
 ---

 Key: HBASE-13948
 URL: https://issues.apache.org/jira/browse/HBASE-13948
 Project: HBase
  Issue Type: Task
  Components: build
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 2.0.0

 Attachments: 13948.patch, HBASE-13948-addendum.patch, 
 HBASE-13948.01.patch


 For the HBase 1.1 line I've been validating builds against the following 
 hadoop versions: 2.2.0 2.3.0 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0. Let's 
 do the same in pre-commit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13214) Remove deprecated and unused methods from HTable class

2015-06-24 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600138#comment-14600138
 ] 

Mikhail Antonov commented on HBASE-13214:
-

Went through the comments on RB (thanks [~anoop.hbase]!) and the replies to them; 
the latest version of the patch looks good to me too. +1.

Let's link here the additional jiras which were created based on the comments. From 
the RB comments: 

bq. This can be removed? BufferedMutator is @since 1.0.0

Was a jira filed for that? Looks like the consensus was to create one.


 Remove deprecated and unused methods from HTable class
 --

 Key: HBASE-13214
 URL: https://issues.apache.org/jira/browse/HBASE-13214
 Project: HBase
  Issue Type: Sub-task
  Components: API
Affects Versions: 2.0.0
Reporter: Mikhail Antonov
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: HBASE-13214-v1.patch, HBASE-13214-v2-again-v1.patch, 
 HBASE-13214-v2-again.patch, HBASE-13214-v2.patch, HBASE-13214-v3.patch, 
 HBASE-13214-v3.patch, HBASE-13214.patch


 Methods like #getRegionLocation(), #isTableEnabled() etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13961) SnapshotManager#initialize should set snapshotLayoutVersion if it allows to create snapshot with old layout format

2015-06-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600240#comment-14600240
 ] 

Ted Yu commented on HBASE-13961:


+1

 SnapshotManager#initialize should set snapshotLayoutVersion if it allows to 
 create snapshot with old layout format
 --

 Key: HBASE-13961
 URL: https://issues.apache.org/jira/browse/HBASE-13961
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.13
Reporter: cuijianwei
Assignee: cuijianwei
Priority: Minor
 Attachments: HBASE-13961-0.98-v1.patch


 In 0.98, it seems the snapshot layout version can be configured by 
 hbase.snapshot.format.version. However, SnapshotManager does not set 
 snapshotLayoutVersion in its initialize(...) method, so it always creates 
 snapshots with the latest layout format even when hbase.snapshot.format.version 
 is set to SnapshotManifestV1.DESCRIPTOR_VERSION in the configuration.
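 A minimal sketch of the kind of change being described, assuming the 0.98 config key 
 from the description and a latest-layout default (SnapshotManifestV2.DESCRIPTOR_VERSION 
 is assumed here); this is not the attached patch:
 {code}
 // Hypothetical addition inside SnapshotManager.initialize(...):
 // honor the configured layout version instead of always using the latest.
 this.snapshotLayoutVersion = conf.getInt("hbase.snapshot.format.version",
     SnapshotManifestV2.DESCRIPTOR_VERSION);
 {code}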



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-13961) SnapshotManager#initialize should set snapshotLayoutVersion if it allows to create snapshot with old layout format

2015-06-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-13961:
--

Assignee: cuijianwei

 SnapshotManager#initialize should set snapshotLayoutVersion if it allows to 
 create snapshot with old layout format
 --

 Key: HBASE-13961
 URL: https://issues.apache.org/jira/browse/HBASE-13961
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.13
Reporter: cuijianwei
Assignee: cuijianwei
Priority: Minor
 Attachments: HBASE-13961-0.98-v1.patch


 In 0.98, it seems the snapshot layout version can be configured by 
 hbase.snapshot.format.version. However, SnapshotManager does not set 
 snapshotLayoutVersion in its initialize(...) method, so it always creates 
 snapshots with the latest layout format even when hbase.snapshot.format.version 
 is set to SnapshotManifestV1.DESCRIPTOR_VERSION in the configuration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13948) Expand hadoop2 versions built on the pre-commit

2015-06-24 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600107#comment-14600107
 ] 

Enis Soztutar commented on HBASE-13948:
---

Yes, master branch does not build with hadoop-2.2 and 2.3. I think it is fine 
that we broke the build for these ancient versions in hbase-2.0. All 1.x 
branches should build with 2.2+ though. 

 Expand hadoop2 versions built on the pre-commit
 ---

 Key: HBASE-13948
 URL: https://issues.apache.org/jira/browse/HBASE-13948
 Project: HBase
  Issue Type: Task
  Components: build
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 2.0.0

 Attachments: 13948.patch, HBASE-13948-addendum.patch


 For the HBase 1.1 line I've been validating builds against the following 
 hadoop versions: 2.2.0 2.3.0 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0. Let's 
 do the same in pre-commit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13948) Expand hadoop2 versions built on the pre-commit

2015-06-24 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600117#comment-14600117
 ] 

Sean Busbey commented on HBASE-13948:
-

Let's go ahead and add the minor versions and the additional 2.7 build then. 
Once we can update to use a release of the new test-patch, we can customize 
things to do 2.2 and 2.3 for branch-1.

 Expand hadoop2 versions built on the pre-commit
 ---

 Key: HBASE-13948
 URL: https://issues.apache.org/jira/browse/HBASE-13948
 Project: HBase
  Issue Type: Task
  Components: build
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 2.0.0

 Attachments: 13948.patch, HBASE-13948-addendum.patch


 For the HBase 1.1 line I've been validating builds against the following 
 hadoop versions: 2.2.0 2.3.0 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0. Let's 
 do the same in pre-commit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13965) Stochastic Load Balancer JMX Metrics

2015-06-24 Thread Lei Chen (JIRA)
Lei Chen created HBASE-13965:


 Summary: Stochastic Load Balancer JMX Metrics
 Key: HBASE-13965
 URL: https://issues.apache.org/jira/browse/HBASE-13965
 Project: HBase
  Issue Type: Improvement
  Components: Balancer, metrics
Reporter: Lei Chen


Today’s default HBase load balancer (the Stochastic load balancer) is cost 
function based. The cost function weights are tunable but no visibility into 
those cost function results is directly provided.

A driving example is a cluster we have been tuning which has skewed rack size 
(one rack has half the nodes of the other few racks). We are tuning the cluster 
for uniform response time from all region servers with the ability to tolerate 
a rack failure. Balancing LocalityCost, RegionReplicaRack Cost and 
RegionCountSkew Cost is difficult without a way to attribute each cost 
function’s contribution to overall cost. 
What this jira proposes is to provide visibility via JMX into each cost 
function of the stochastic load balancer, as well as the overall cost of the 
balancing plan.
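
As a rough illustration of the shape such metrics could take (the interface and attribute 
names here are made up for this sketch, not taken from any patch):
{code}
// Hypothetical MBean exposing per-cost-function values alongside the overall cost.
public interface StochasticBalancerMetricsMBean {
  double getOverallCost();              // total cost of the chosen balancing plan
  double getLocalityCost();
  double getRegionReplicaRackCost();
  double getRegionCountSkewCost();
}
{code}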




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13965) Stochastic Load Balancer JMX Metrics

2015-06-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600285#comment-14600285
 ] 

Ted Yu commented on HBASE-13965:


This is a very useful feature.

 Stochastic Load Balancer JMX Metrics
 

 Key: HBASE-13965
 URL: https://issues.apache.org/jira/browse/HBASE-13965
 Project: HBase
  Issue Type: Improvement
  Components: Balancer, metrics
Reporter: Lei Chen
Assignee: Lei Chen

 Today’s default HBase load balancer (the Stochastic load balancer) is cost 
 function based. The cost function weights are tunable but no visibility into 
 those cost function results is directly provided.
 A driving example is a cluster we have been tuning which has skewed rack size 
 (one rack has half the nodes of the other few racks). We are tuning the 
 cluster for uniform response time from all region servers with the ability to 
 tolerate a rack failure. Balancing LocalityCost, RegionReplicaRack Cost and 
 RegionCountSkew Cost is difficult without a way to attribute each cost 
 function’s contribution to overall cost. 
 What this jira proposes is to provide visibility via JMX into each cost 
 function of the stochastic load balancer, as well as the overall cost of the 
 balancing plan.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-8642) [Snapshot] List and delete snapshot by table

2015-06-24 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-8642:
-
   Assignee: Ashish Singhi  (was: Julian Zhou)
   Priority: Major  (was: Minor)
Description: 
Support list and delete snapshots by table names.
User scenario:
A user wants to delete all the snapshots which were taken in the month of January for 
a table 't' where the snapshot names start with 'Jan'.

  was:Support list and delete snapshot by table name.


updated the description with user scenario.

 [Snapshot] List and delete snapshot by table
 

 Key: HBASE-8642
 URL: https://issues.apache.org/jira/browse/HBASE-8642
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.0, 0.95.0, 0.95.1, 0.95.2
Reporter: Julian Zhou
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: 8642-trunk-0.95-v0.patch, 8642-trunk-0.95-v1.patch, 
 8642-trunk-0.95-v2.patch, HBASE-8642.patch


 Support list and delete snapshots by table names.
 User scenario:
 A user wants to delete all the snapshots which were taken in the month of January 
 for a table 't' where the snapshot names start with 'Jan'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13835) KeyValueHeap.current might be in heap when exception happens in pollRealKV

2015-06-24 Thread zhouyingchao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouyingchao updated HBASE-13835:
-
Attachment: HBASE-13835-branch1-001.patch

Test TestKeyValueHeap

 KeyValueHeap.current might be in heap when exception happens in pollRealKV
 --

 Key: HBASE-13835
 URL: https://issues.apache.org/jira/browse/HBASE-13835
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Reporter: zhouyingchao
Assignee: zhouyingchao
 Attachments: HBASE-13835-001.patch, HBASE-13835-002.patch, 
 HBASE-13835-002.patch, HBASE-13835-branch1-001.patch


 In a 0.94 hbase cluster, we found a NPE with following stack:
 {code}
 Exception in thread regionserver21600.leaseChecker 
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.KeyValue$KVComparator.compare(KeyValue.java:1530)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:225)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:201)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:191)
 at 
 java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:641)
 at java.util.PriorityQueue.siftDown(PriorityQueue.java:612)
 at java.util.PriorityQueue.poll(PriorityQueue.java:523)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:241)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.close(StoreScanner.java:355)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:237)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.close(HRegion.java:4302)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer$ScannerListener.leaseExpired(HRegionServer.java:3033)
 at org.apache.hadoop.hbase.regionserver.Leases.run(Leases.java:119)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 Before this NPE, an exception happened in pollRealKV, which we think is the 
 culprit of the NPE.
 {code}
 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer:
 java.io.IOException: Could not reseek StoreFileScanner[HFileScanner for 
 reader reader=
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:180)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:371)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:366)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:116)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:455)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:154)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:4124)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:4196)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4067)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4057)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.internalNext(HRegionServer.java:2898)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2833)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2815)
 at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:337)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1583)
 {code}
 Simply put, if an exception happens in pollRealKV(), KeyValueHeap.current might 
 still be in the heap. Later on, when KeyValueHeap.close() is called, current is 
 closed first. However, since it might still be in the heap, it would either be 
 closed again or its peek() (which is null after it is closed) would be called by 
 the heap's poll(). Neither case is expected.
 Although we caught this on 0.94, from the code the issue still appears to exist in trunk. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13670) [HBase MOB] ExpiredMobFileCleaner tool deletes mob files later for one more day after they are expired

2015-06-24 Thread Gururaj Shetty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gururaj Shetty updated HBASE-13670:
---
Attachment: HBASE-13670.patch

 [HBase MOB] ExpiredMobFileCleaner tool deletes mob files later for one more 
 day after they are expired
 --

 Key: HBASE-13670
 URL: https://issues.apache.org/jira/browse/HBASE-13670
 Project: HBase
  Issue Type: Improvement
  Components: documentation, mob
Affects Versions: hbase-11339
Reporter: Y. SREENIVASULU REDDY
Assignee: Gururaj Shetty
 Fix For: hbase-11339

 Attachments: HBASE-13670.patch


 Currently the ExpiredMobFileCleaner cleans expired mob files according to the 
 date in the mob file name. The minimum unit of that date is a day, so the 
 ExpiredMobFileCleaner may clean expired mob files up to one day later than their 
 actual expiry time. We need to document this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13670) [HBase MOB] ExpiredMobFileCleaner tool deletes mob files later for one more day after they are expired

2015-06-24 Thread Gururaj Shetty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gururaj Shetty updated HBASE-13670:
---
Status: Patch Available  (was: Open)

 [HBase MOB] ExpiredMobFileCleaner tool deletes mob files later for one more 
 day after they are expired
 --

 Key: HBASE-13670
 URL: https://issues.apache.org/jira/browse/HBASE-13670
 Project: HBase
  Issue Type: Improvement
  Components: documentation, mob
Affects Versions: hbase-11339
Reporter: Y. SREENIVASULU REDDY
Assignee: Gururaj Shetty
 Fix For: hbase-11339

 Attachments: HBASE-13670.patch


 Currently the ExpiredMobFileCleaner cleans expired mob files according to the 
 date in the mob file name. The minimum unit of that date is a day, so the 
 ExpiredMobFileCleaner may clean expired mob files up to one day later than their 
 actual expiry time. We need to document this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13960) HConnection stuck with UnknownHostException

2015-06-24 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599445#comment-14599445
 ] 

stack commented on HBASE-13960:
---

Yes. What would you suggest [~ykt836] ? Regetting the stub is a bit tough. We 
should probe to make sure the ISA is resolved before we finish the stub setup?  
Thanks.

 HConnection stuck with UnknownHostException 
 

 Key: HBASE-13960
 URL: https://issues.apache.org/jira/browse/HBASE-13960
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 0.98.8
Reporter: Kurt Young

 When doing a put/get from HBase, if we hit a temporary DNS failure while resolving 
 the RS's host, the error is never recovered from; put/get will fail with 
 UnknownHostException forever. 
 I checked the code, and the reason may be:
 1. When RegionServerCallable or MultiServerCallable calls prepare(), it gets a 
 ClientService.BlockingInterface stub from HConnection.
 2. In HConnectionImplementation::getClient, it caches the stub with a 
 BlockingRpcChannelImplementation.
 3. In BlockingRpcChannelImplementation(), 
  this.isa = new InetSocketAddress(sn.getHostname(), sn.getPort()); if we 
 hit a temporary DNS failure then the address in isa will be null.
 4. Then we launch the real RPC call, and the following stack results:
 Caused by: java.net.UnknownHostException: unknown host: 
 r101072047.sqa.zmf.tbsite.net
   at 
 org.apache.hadoop.hbase.ipc.RpcClient$Connection.init(RpcClient.java:385)
   at 
 org.apache.hadoop.hbase.ipc.RpcClient.createConnection(RpcClient.java:351)
   at 
 org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1523)
   at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1435)
   at 
 org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1654)
   at 
 org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1712)
 Besides, I noticed there is a protection in RpcClient:
 if (remoteId.getAddress().isUnresolved()) {
   throw new UnknownHostException("unknown host: " +
       remoteId.getAddress().getHostName());
 }
 Shouldn't we do something when this situation occurs? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13670) [HBase MOB] ExpiredMobFileCleaner tool deletes mob files later for one more day after they are expired

2015-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599443#comment-14599443
 ] 

Hadoop QA commented on HBASE-13670:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12741610/HBASE-13670.patch
  against master branch at commit b7f241d73b79ec22db2c03cb6b384b76185f0f85.
  ATTACHMENT ID: 12741610

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14544//console

This message is automatically generated.

 [HBase MOB] ExpiredMobFileCleaner tool deletes mob files later for one more 
 day after they are expired
 --

 Key: HBASE-13670
 URL: https://issues.apache.org/jira/browse/HBASE-13670
 Project: HBase
  Issue Type: Improvement
  Components: documentation, mob
Affects Versions: hbase-11339
Reporter: Y. SREENIVASULU REDDY
Assignee: Gururaj Shetty
 Fix For: hbase-11339

 Attachments: HBASE-13670.patch


 Currently the ExpiredMobFileCleaner cleans expired mob files according to the 
 date in the mob file name. The minimum unit of that date is a day, so the 
 ExpiredMobFileCleaner may clean expired mob files up to one day later than their 
 actual expiry time. We need to document this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-8642) [Snapshot] List and delete snapshot by table

2015-06-24 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599398#comment-14599398
 ] 

Ashish Singhi commented on HBASE-8642:
--

Added two commands, {{list_table_snapshots}} and {{delete_table_snapshots}}, which 
take a table name and a snapshot name. For the shell commands the snapshot name is 
optional; if it is not passed, then all the snapshots matching the table name 
regex will be listed (and are eligible for deletion when executing the 
delete_table_snapshots command).
bq. A user wants to delete all the snapshots which were taken in the month of January 
for a table 't' where the snapshot names start with 'Jan'.
This can now be done by executing the command below:
{code}hbase(main):028:0 delete_table_snapshots 't', 'Jan.*'{code}
Verified the shell commands manually; everything seems OK.

Will move this jira to the Patch Available state once the Hadoop QA run for the master 
branch is OK.

Please review.

 [Snapshot] List and delete snapshot by table
 

 Key: HBASE-8642
 URL: https://issues.apache.org/jira/browse/HBASE-8642
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.0, 0.95.0, 0.95.1, 0.95.2
Reporter: Julian Zhou
Assignee: Ashish Singhi
 Fix For: 2.0.0

 Attachments: 8642-trunk-0.95-v0.patch, 8642-trunk-0.95-v1.patch, 
 8642-trunk-0.95-v2.patch, HBASE-8642.patch


 Support list and delete snapshots by table names.
 User scenario:
 A user wants to delete all the snapshots which were taken in the month of January 
 for a table 't' where the snapshot names start with 'Jan'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13893) Replace HTable with Table in client tests

2015-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599484#comment-14599484
 ] 

Hadoop QA commented on HBASE-13893:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12741613/HBASE-13893-v5%20%281%29.patch
  against master branch at commit b7f241d73b79ec22db2c03cb6b384b76185f0f85.
  ATTACHMENT ID: 12741613

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 338 
new or modified tests.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail with Hadoop version 2.3.0.

Compilation errors resume:
[ERROR] COMPILATION ERROR : 
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[75,30]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2064,17]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[42,30]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[92,11]
 cannot find symbol
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2086,15]
 cannot find symbol
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on 
project hbase-server: Compilation failure: Compilation failure:
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[75,30]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: package org.apache.hadoop.hdfs
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2064,17]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: class org.apache.hadoop.hbase.util.FSUtils
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[42,30]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: package org.apache.hadoop.hdfs
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java:[92,11]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: class 
org.apache.hadoop.hbase.regionserver.MetricsRegionServerWrapperImpl
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java:[2086,15]
 cannot find symbol
[ERROR] symbol:   class DFSHedgedReadMetrics
[ERROR] location: class org.apache.hadoop.hbase.util.FSUtils
[ERROR] - [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn goals -rf :hbase-server


Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14545//console

This message is automatically generated.

 Replace HTable with Table in client tests
 -

 Key: HBASE-13893
 URL: https://issues.apache.org/jira/browse/HBASE-13893
 Project: HBase
  Issue Type: Bug
  Components: Client, test
Reporter: Jurriaan Mous
Assignee: Jurriaan Mous
 Attachments: HBASE-13893-v1.patch, HBASE-13893-v2.patch, 
 HBASE-13893-v3.patch, HBASE-13893-v3.patch, HBASE-13893-v4.patch, 
 HBASE-13893-v5 (1).patch, HBASE-13893-v5 (1).patch, HBASE-13893-v5 (1).patch, 
 HBASE-13893-v5.patch, HBASE-13893-v5.patch, HBASE-13893-v5.patch, 
 HBASE-13893.patch


 Many client tests reference the HTable implementation instead of the generic 
 Table implementation. It is now not possible to reuse the tests for another 
 Table implementation. This issue focuses on the HTable instances in the relevant 
 client tests, and thus not on all HTable instances.



--

[jira] [Updated] (HBASE-11085) Incremental Backup Restore support

2015-06-24 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-11085:
--
Status: Patch Available  (was: Open)

The build issue must be resolved now. Let's try one more time.

 Incremental Backup Restore support
 --

 Key: HBASE-11085
 URL: https://issues.apache.org/jira/browse/HBASE-11085
 Project: HBase
  Issue Type: New Feature
Reporter: Demai Ni
Assignee: Vladimir Rodionov
 Attachments: 
 HBASE-11085-trunk-contains-HBASE-10900-trunk-latest.patch, 
 HBASE-11085-trunk-v1-contains-HBASE-10900-trunk-v4.patch, 
 HBASE-11085-trunk-v1.patch, 
 HBASE-11085-trunk-v2-contain-HBASE-10900-trunk-v4.patch, 
 HBASE-11085-trunk-v2.patch, HLogPlayer.java


 h2. Feature Description
 This jira is part of 
 [HBASE-7912|https://issues.apache.org/jira/browse/HBASE-7912], and depends on the 
 full backup [HBASE-10900|https://issues.apache.org/jira/browse/HBASE-10900]. 
 For the detailed layout and framework, please refer to 
 [HBASE-10900|https://issues.apache.org/jira/browse/HBASE-10900].
 When a client issues an incremental backup request, BackupManager will check 
 the request and then kick off a global procedure via HBaseAdmin for all the 
 active region servers to roll their logs. Each region server will record its log 
 number into ZooKeeper. Then we determine which logs need to be included in 
 this incremental backup, and use DistCp to copy them to the target location. At 
 the same time, the dependency of the backup image will be recorded, and later on 
 saved in the Backup Manifest file.
 Restore replays the backed-up WAL logs on the target HBase instance. The 
 replay occurs after the full backup.
 Since an incremental backup image depends on the prior full backup image and on 
 earlier incremental images if they exist, the Manifest file is used to store the 
 dependency lineage during backup, and is used at restore time for PIT 
 restore.  
 h2. Use case (i.e. example)
 {code:title=Incremental Backup Restore example|borderStyle=solid}
 /***/
 /* STEP1:  FULL backup from sourcecluster to targetcluster  
 /* if no table name specified, all tables from source cluster will be 
 backuped 
 /***/
 [sourcecluster]$ hbase backup create full 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir t1_dn,t2_dn,t3_dn
 ...
 14/05/09 13:35:46 INFO backup.BackupManager: Backup request 
 backup_1399667695966 has been executed.
 /***/
 /* STEP2:   In HBase Shell, put a few rows
 
 /***/
 hbase(main):002:0 put 't1_dn','row100','cf1:q1','value100_0509_increm1'
 hbase(main):003:0 put 't2_dn','row100','cf1:q1','value100_0509_increm1'
 hbase(main):004:0 put 't3_dn','row100','cf1:q1','value100_0509_increm1'
 /***/
 /* STEP3:   Take the 1st incremental backup   
  
 /***/
 [sourcecluster]$ hbase backup create incremental 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir
 ...
 14/05/09 13:37:45 INFO backup.BackupManager: Backup request 
 backup_1399667851020 has been executed.
 /***/
 /* STEP4:   In HBase Shell, put a few more rows.  
 
 /*   update 'row100', and create new 'row101' 
   
 /***/
 hbase(main):005:0 put 't3_dn','row100','cf1:q1','value101_0509_increm2'
 hbase(main):006:0 put 't2_dn','row100','cf1:q1','value101_0509_increm2'
 hbase(main):007:0 put 't1_dn','row100','cf1:q1','value101_0509_increm2'
 hbase(main):009:0 put 't1_dn','row101','cf1:q1','value101_0509_increm2'
 hbase(main):010:0 put 't2_dn','row101','cf1:q1','value101_0509_increm2'
 hbase(main):011:0 put 't3_dn','row101','cf1:q1','value101_0509_increm2'
 /***/
 /* STEP5:   Take the 2nd incremental backup   
 
 /***/
 [sourcecluster]$ hbase backup create incremental 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir
 ...
 14/05/09 13:39:33 INFO backup.BackupManager: Backup 

[jira] [Commented] (HBASE-11085) Incremental Backup Restore support

2015-06-24 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600405#comment-14600405
 ] 

Vladimir Rodionov commented on HBASE-11085:
---

{quote}
The patch appears to cause mvn compile goal to fail with Hadoop version 2.2.0.
{quote}

Why 2.2? The default is 2.5.1. MiniKDC was introduced in 2.3, so it is no wonder it does 
not compile. 

 Incremental Backup Restore support
 --

 Key: HBASE-11085
 URL: https://issues.apache.org/jira/browse/HBASE-11085
 Project: HBase
  Issue Type: New Feature
Reporter: Demai Ni
Assignee: Vladimir Rodionov
 Attachments: 
 HBASE-11085-trunk-contains-HBASE-10900-trunk-latest.patch, 
 HBASE-11085-trunk-v1-contains-HBASE-10900-trunk-v4.patch, 
 HBASE-11085-trunk-v1.patch, 
 HBASE-11085-trunk-v2-contain-HBASE-10900-trunk-v4.patch, 
 HBASE-11085-trunk-v2.patch, HLogPlayer.java


 h2. Feature Description
 This jira is part of 
 [HBASE-7912|https://issues.apache.org/jira/browse/HBASE-7912], and depends on the 
 full backup [HBASE-10900|https://issues.apache.org/jira/browse/HBASE-10900]. 
 For the detailed layout and framework, please refer to 
 [HBASE-10900|https://issues.apache.org/jira/browse/HBASE-10900].
 When a client issues an incremental backup request, BackupManager will check 
 the request and then kick off a global procedure via HBaseAdmin for all the 
 active region servers to roll their logs. Each region server will record its log 
 number into ZooKeeper. Then we determine which logs need to be included in 
 this incremental backup, and use DistCp to copy them to the target location. At 
 the same time, the dependency of the backup image will be recorded, and later on 
 saved in the Backup Manifest file.
 Restore replays the backed-up WAL logs on the target HBase instance. The 
 replay occurs after the full backup.
 Since an incremental backup image depends on the prior full backup image and on 
 earlier incremental images if they exist, the Manifest file is used to store the 
 dependency lineage during backup, and is used at restore time for PIT 
 restore.  
 h2. Use case (i.e. example)
 {code:title=Incremental Backup Restore example|borderStyle=solid}
 /***/
 /* STEP1:  FULL backup from sourcecluster to targetcluster  
 /* if no table name specified, all tables from source cluster will be 
 backuped 
 /***/
 [sourcecluster]$ hbase backup create full 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir t1_dn,t2_dn,t3_dn
 ...
 14/05/09 13:35:46 INFO backup.BackupManager: Backup request 
 backup_1399667695966 has been executed.
 /***/
 /* STEP2:   In HBase Shell, put a few rows
 
 /***/
 hbase(main):002:0 put 't1_dn','row100','cf1:q1','value100_0509_increm1'
 hbase(main):003:0 put 't2_dn','row100','cf1:q1','value100_0509_increm1'
 hbase(main):004:0 put 't3_dn','row100','cf1:q1','value100_0509_increm1'
 /***/
 /* STEP3:   Take the 1st incremental backup   
  
 /***/
 [sourcecluster]$ hbase backup create incremental 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir
 ...
 14/05/09 13:37:45 INFO backup.BackupManager: Backup request 
 backup_1399667851020 has been executed.
 /***/
 /* STEP4:   In HBase Shell, put a few more rows.  
 
 /*   update 'row100', and create new 'row101' 
   
 /***/
 hbase(main):005:0 put 't3_dn','row100','cf1:q1','value101_0509_increm2'
 hbase(main):006:0 put 't2_dn','row100','cf1:q1','value101_0509_increm2'
 hbase(main):007:0 put 't1_dn','row100','cf1:q1','value101_0509_increm2'
 hbase(main):009:0 put 't1_dn','row101','cf1:q1','value101_0509_increm2'
 hbase(main):010:0 put 't2_dn','row101','cf1:q1','value101_0509_increm2'
 hbase(main):011:0 put 't3_dn','row101','cf1:q1','value101_0509_increm2'
 /***/
 /* STEP5:   Take the 2nd incremental backup   
 
 /***/
 [sourcecluster]$ 

[jira] [Commented] (HBASE-13960) HConnection stuck with UnknownHostException

2015-06-24 Thread Kurt Young (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600490#comment-14600490
 ] 

Kurt Young commented on HBASE-13960:


In RpcClient::createBlockingRpcChannel, when constructing the new 
BlockingRpcChannelImplementation(), check the isa and throw an IOException if an 
error occurred. The exception would propagate through HConnectionImplementation::getClient 
and make RegionServerCallable::prepare fail, so the client can try again later.
I think this may be enough, but I haven't checked all the details of the callers of 
RpcClient::createBlockingRpcChannel.
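
A sketch of that idea, under the assumption that the stub construction quoted in the 
description is the right place to fail fast (names simplified; not a reviewed patch):
{code}
// Hypothetical guard at stub-construction time (simplified).
java.net.InetSocketAddress isa =
    new java.net.InetSocketAddress(sn.getHostname(), sn.getPort());
if (isa.isUnresolved()) {
  // Do not cache a stub bound to an unresolved address; the exception makes
  // RegionServerCallable.prepare() fail so the client can retry after DNS recovers.
  throw new java.net.UnknownHostException("unknown host: " + sn.getHostname());
}
{code}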

 HConnection stuck with UnknownHostException 
 

 Key: HBASE-13960
 URL: https://issues.apache.org/jira/browse/HBASE-13960
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 0.98.8
Reporter: Kurt Young

 When doing a put/get from HBase, if we hit a temporary DNS failure while resolving 
 the RS's host, the error is never recovered from; put/get will fail with 
 UnknownHostException forever. 
 I checked the code, and the reason may be:
 1. When RegionServerCallable or MultiServerCallable calls prepare(), it gets a 
 ClientService.BlockingInterface stub from HConnection.
 2. In HConnectionImplementation::getClient, it caches the stub with a 
 BlockingRpcChannelImplementation.
 3. In BlockingRpcChannelImplementation(), 
  this.isa = new InetSocketAddress(sn.getHostname(), sn.getPort()); if we 
 hit a temporary DNS failure then the address in isa will be null.
 4. Then we launch the real RPC call, and the following stack results:
 Caused by: java.net.UnknownHostException: unknown host: 
 r101072047.sqa.zmf.tbsite.net
   at 
 org.apache.hadoop.hbase.ipc.RpcClient$Connection.init(RpcClient.java:385)
   at 
 org.apache.hadoop.hbase.ipc.RpcClient.createConnection(RpcClient.java:351)
   at 
 org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1523)
   at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1435)
   at 
 org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1654)
   at 
 org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1712)
 Besides, I noticed there is a protection in RpcClient:
 if (remoteId.getAddress().isUnresolved()) {
   throw new UnknownHostException("unknown host: " +
       remoteId.getAddress().getHostName());
 }
 Shouldn't we do something when this situation occurs? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13966) Limit column width in table.jsp

2015-06-24 Thread Jean-Marc Spaggiari (JIRA)
Jean-Marc Spaggiari created HBASE-13966:
---

 Summary: Limit column width in table.jsp
 Key: HBASE-13966
 URL: https://issues.apache.org/jira/browse/HBASE-13966
 Project: HBase
  Issue Type: Bug
Reporter: Jean-Marc Spaggiari
Priority: Minor


In table.jsp, for tables with very wide keys like URLs, the page can be very 
wide. On my own cluster, this page is 8 screens wide and almost unusable.

It might be good to have a way to resize the columns, or to wrap or truncate the 
long keys, or anything else. When we want to look at the last columns, this 
is a bit difficult.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13966) Limit column width in table.jsp

2015-06-24 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-13966:

Component/s: UI

 Limit column width in table.jsp
 ---

 Key: HBASE-13966
 URL: https://issues.apache.org/jira/browse/HBASE-13966
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 1.1.0.1
Reporter: Jean-Marc Spaggiari
Priority: Minor
  Labels: beginner

 In table.jsp, for tables with very wide keys like URLs, the page can be very 
 wide. On my own cluster, this page is 8 screens wide and almost unusable.
 It might be good to have a way to resize the columns, or to wrap or truncate the 
 long keys, or anything else. When we want to look at the last columns, 
 this is a bit difficult.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13966) Limit column width in table.jsp

2015-06-24 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-13966:

Affects Version/s: 1.1.0.1

 Limit column width in table.jsp
 ---

 Key: HBASE-13966
 URL: https://issues.apache.org/jira/browse/HBASE-13966
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 1.1.0.1
Reporter: Jean-Marc Spaggiari
Priority: Minor
  Labels: beginner

 In table.jsp, for tables with very wide keys like URLs, the page can be very 
 wide. On my own cluster, this page is 8 screens wide and almost unusable.
 It might be good to have a way to resize the columns, or to wrap or truncate the 
 long keys, or anything else. When we want to look at the last columns, 
 this is a bit difficult.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13863) Multi-wal feature breaks reported number and size of HLogs

2015-06-24 Thread Abhilash (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600351#comment-14600351
 ] 

Abhilash commented on HBASE-13863:
--

The failed test does not look related to the patch (the patch passes that test on my 
local machine). Trying a re-run.

 Multi-wal feature breaks reported number and size of HLogs
 --

 Key: HBASE-13863
 URL: https://issues.apache.org/jira/browse/HBASE-13863
 Project: HBase
  Issue Type: Bug
  Components: regionserver, UI
Reporter: Elliott Clark
Assignee: Abhilash
 Attachments: HBASE-13863-v1.patch, HBASE-13863-v1.patch, 
 HBASE-13863-v1.patch, HBASE-13863-v1.patch, HBASE-13863.patch


 When multi-wal is enabled, the number and size of retained HLogs are always 
 reported as zero.
 We should fix this so that the reported numbers are the sums across all retained logs.
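 A rough sketch of the aggregation being asked for; the accessor names here 
 (getWALs(), getNumLogFiles(), getLogFileSize()) are assumptions for illustration, 
 not necessarily the real API:
 {code}
 // Hypothetical: report totals across all WAL instances instead of just one.
 long totalNumLogFiles = 0;
 long totalLogFileSize = 0;
 for (WAL wal : walFactory.getWALs()) {       // assumed accessor over all multi-wal instances
   totalNumLogFiles += wal.getNumLogFiles();  // assumed per-WAL accessors
   totalLogFileSize += wal.getLogFileSize();
 }
 {code}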



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13863) Multi-wal feature breaks reported number and size of HLogs

2015-06-24 Thread Abhilash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhilash updated HBASE-13863:
-
Attachment: HBASE-13863-v1.patch

 Multi-wal feature breaks reported number and size of HLogs
 --

 Key: HBASE-13863
 URL: https://issues.apache.org/jira/browse/HBASE-13863
 Project: HBase
  Issue Type: Bug
  Components: regionserver, UI
Reporter: Elliott Clark
Assignee: Abhilash
 Attachments: HBASE-13863-v1.patch, HBASE-13863-v1.patch, 
 HBASE-13863-v1.patch, HBASE-13863-v1.patch, HBASE-13863.patch


 When multi-wal is enabled, the number and size of retained HLogs are always 
 reported as zero.
 We should fix this so that the reported numbers are the sums across all retained logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13964) Skip region normalization for tables under namespace quota

2015-06-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-13964:
---
Summary: Skip region normalization for tables under namespace quota  (was: 
Region normalization for tables under namespace quota)

 Skip region normalization for tables under namespace quota
 --

 Key: HBASE-13964
 URL: https://issues.apache.org/jira/browse/HBASE-13964
 Project: HBase
  Issue Type: Brainstorming
  Components: Balancer, Usability
Reporter: Mikhail Antonov
Assignee: Ted Yu
 Attachments: 13964-branch-1-v2.txt, 13964-v1.txt


 As [~te...@apache.org] pointed out in HBASE-13103, we need to discuss how to 
 normalize regions of tables under namespace control. What was proposed is to 
 disable normalization of such tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13835) KeyValueHeap.current might be in heap when exception happens in pollRealKV

2015-06-24 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600548#comment-14600548
 ] 

Enis Soztutar commented on HBASE-13835:
---

+1.

 KeyValueHeap.current might be in heap when exception happens in pollRealKV
 --

 Key: HBASE-13835
 URL: https://issues.apache.org/jira/browse/HBASE-13835
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Reporter: zhouyingchao
Assignee: zhouyingchao
 Attachments: HBASE-13835-001.patch, HBASE-13835-002.patch, 
 HBASE-13835-002.patch, HBASE-13835-branch1-001.patch, 
 HBASE-13835_branch-1.patch


 In a 0.94 hbase cluster, we found a NPE with following stack:
 {code}
 Exception in thread regionserver21600.leaseChecker 
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.KeyValue$KVComparator.compare(KeyValue.java:1530)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:225)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:201)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:191)
 at 
 java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:641)
 at java.util.PriorityQueue.siftDown(PriorityQueue.java:612)
 at java.util.PriorityQueue.poll(PriorityQueue.java:523)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:241)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.close(StoreScanner.java:355)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:237)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.close(HRegion.java:4302)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer$ScannerListener.leaseExpired(HRegionServer.java:3033)
 at org.apache.hadoop.hbase.regionserver.Leases.run(Leases.java:119)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 Before this NPE, another exception happens in pollRealKV, which 
 we think is the culprit of the NPE.
 {code}
 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer:
 java.io.IOException: Could not reseek StoreFileScanner[HFileScanner for 
 reader reader=
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:180)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:371)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:366)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:116)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:455)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:154)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:4124)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:4196)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4067)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4057)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.internalNext(HRegionServer.java:2898)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2833)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2815)
 at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:337)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1583)
 {code}
 Simply put, if an exception happens in pollRealKV(), KeyValueHeap.current might 
 still be in the heap. Later, when KeyValueHeap.close() is called, current is 
 closed first. However, since it might still be in the heap, it would either be 
 closed again or have its peek() (which returns null once it is closed) called 
 by the heap's poll(). Neither case is expected.
 Although the problem was caught on 0.94, from reading the code it is still present in trunk.
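 A minimal sketch of the defensive-close idea described above (not the attached 
 patch): never close a scanner twice, even if current is still sitting in the 
 heap when close() runs.
 {code}
 public void close() {
   if (this.current != null) {
     this.current.close();
   }
   if (this.heap != null) {
     KeyValueScanner scanner;
     while ((scanner = this.heap.poll()) != null) {
       // current may still be in the heap if pollRealKV() threw before removing it
       if (scanner != this.current) {
         scanner.close();
       }
     }
   }
   this.current = null;
 }
 {code}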



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13964) Region normalization for tables under namespace quota

2015-06-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-13964:
---
Attachment: 13964-branch-1-v2.txt

 Region normalization for tables under namespace quota
 -

 Key: HBASE-13964
 URL: https://issues.apache.org/jira/browse/HBASE-13964
 Project: HBase
  Issue Type: Brainstorming
  Components: Balancer, Usability
Reporter: Mikhail Antonov
Assignee: Ted Yu
 Attachments: 13964-branch-1-v2.txt, 13964-v1.txt


 As [~te...@apache.org] pointed out in HBASE-13103, we need to discuss how to 
 normalize regions of tables under namespace control. What was proposed is to 
 disable normalization of such tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13964) Region normalization for tables under namespace quota

2015-06-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600336#comment-14600336
 ] 

Ted Yu commented on HBASE-13964:


The patch was generated against branch-1.

The null check is on NamespaceTableAndRegionInfo returned from getState().

 Region normalization for tables under namespace quota
 -

 Key: HBASE-13964
 URL: https://issues.apache.org/jira/browse/HBASE-13964
 Project: HBase
  Issue Type: Brainstorming
  Components: Balancer, Usability
Reporter: Mikhail Antonov
Assignee: Ted Yu
 Attachments: 13964-v1.txt


 As [~te...@apache.org] pointed out in HBASE-13103, we need to discuss how to 
 normalize regions of tables under namespace control. What was proposed is to 
 disable normalization of such tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13964) Skip region normalization for tables under namespace quota

2015-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600497#comment-14600497
 ] 

Hadoop QA commented on HBASE-13964:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12741724/13964-branch-1-v2.txt
  against branch-1 branch at commit 2df3236a4eee48bf723213a7c4ff3d29c832c8cf.
  ATTACHMENT ID: 12741724

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3830 checkstyle errors (more than the master's current 3829 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.master.normalizer.TestSimpleRegionNormalizerOnCluster
  org.apache.hadoop.hbase.TestRegionRebalancing

 {color:red}-1 core zombie tests{color}.  There are 7 zombie test(s):   
at 
org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClient.testRestoreSnapshot(TestRestoreSnapshotFromClient.java:161)
at 
org.apache.hadoop.hbase.util.TestHBaseFsck.testQuarantineMissingRegionDir(TestHBaseFsck.java:2230)
at 
org.apache.hadoop.hbase.client.TestFromClientSide.testListTables(TestFromClientSide.java:4109)
at 
org.apache.hadoop.hbase.regionserver.TestAtomicOperation.testRowMutationMultiThreads(TestAtomicOperation.java:397)
at 
org.apache.hadoop.hbase.client.TestAdmin1.testForceSplit(TestAdmin1.java:953)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14555//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14555//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14555//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14555//console

This message is automatically generated.

 Skip region normalization for tables under namespace quota
 --

 Key: HBASE-13964
 URL: https://issues.apache.org/jira/browse/HBASE-13964
 Project: HBase
  Issue Type: Brainstorming
  Components: Balancer, Usability
Reporter: Mikhail Antonov
Assignee: Ted Yu
 Attachments: 13964-branch-1-v2.txt, 13964-v1.txt


 As [~te...@apache.org] pointed out in HBASE-13103, we need to discuss how to 
 normalize regions of tables under namespace control. What was proposed is to 
 disable normalization of such tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13963) use jdk appropriate version of jdk.tools and avoid leaking it

2015-06-24 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HBASE-13963:
-
Assignee: Gabor Liptak
Release Note: Do not leak jdk.tools dependency from hbase-annotations
  Status: Patch Available  (was: Open)

 use jdk appropriate version of jdk.tools and avoid leaking it
 -

 Key: HBASE-13963
 URL: https://issues.apache.org/jira/browse/HBASE-13963
 Project: HBase
  Issue Type: Sub-task
  Components: build, documentation
Reporter: Sean Busbey
Assignee: Gabor Liptak
Priority: Critical
 Fix For: 2.0.0, 1.2.0, 1.3.0

 Attachments: HBASE-13963.1.patch


 Right now hbase-annotations uses jdk7 jdk.tools and exposes that to 
 downstream via hbase-client. We need it for building and using our custom 
 doclet, but can improve a couple of things: 
 1) We should be using a jdk.tools version based on our java version (use jdk 
 activated profiles to set it)
 2) We should not be including any jdk.tools version in our hbase-client 
 transitive dependencies (or other downstream-facing artifacts). 
 Unfortunately, system dependencies are included in transitive resolution, so 
 we'll need to exclude it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13963) use jdk appropriate version of jdk.tools and avoid leaking it

2015-06-24 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600554#comment-14600554
 ] 

Gabor Liptak commented on HBASE-13963:
--

Here is an initial patch for item 2. Maybe a similar patch needs to be done for 
HADOOP too?

 use jdk appropriate version of jdk.tools and avoid leaking it
 -

 Key: HBASE-13963
 URL: https://issues.apache.org/jira/browse/HBASE-13963
 Project: HBase
  Issue Type: Sub-task
  Components: build, documentation
Reporter: Sean Busbey
Assignee: Gabor Liptak
Priority: Critical
 Fix For: 2.0.0, 1.2.0, 1.3.0

 Attachments: HBASE-13963.1.patch


 Right now hbase-annotations uses jdk7 jdk.tools and exposes that to 
 downstream via hbase-client. We need it for building and using our custom 
 doclet, but can improve a couple of things: 
 1) We should be using a jdk.tools version based on our java version (use jdk 
 activated profiles to set it)
 2) We should not be including any jdk.tools version in our hbase-client 
 transitive dependencies (or other downstream-facing artifacts). 
 Unfortunately, system dependencies are included in transitive resolution, so 
 we'll need to exclude it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-13962) Invalid HFile block magic

2015-06-24 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-13962.

Resolution: Invalid

 Invalid HFile block magic
 -

 Key: HBASE-13962
 URL: https://issues.apache.org/jira/browse/HBASE-13962
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.12.1
 Environment: hadoop 1.2.1
 hbase 0.98.12.1
 jdk 1.7.0.79
 os : ubuntu 12.04.1 amd64
Reporter: reaz hedayati

 Hi everybody,
 my table has some cells that were loaded via bulk load and some cells written 
 by increments.
 We use two jobs to load data into the table: the first job uses Increment on 
 the reduce side and the second uses bulk load.
 First we ran the increment job, then the bulk job followed by the 
 completebulkload job; after that we got this exception:
 2015-06-24 17:40:01,557 INFO  
 [regionserver60020-smallCompactions-1434448531302] regionserver.HRegion: 
 Starting compaction on c2 in region table1,\x04C#P1\x07\x94 
 ,1435065082383.0fe38a6c782600e4d46f1f148144b489.
 2015-06-24 17:40:01,558 INFO  
 [regionserver60020-smallCompactions-1434448531302] regionserver.HStore: 
 Starting compaction of 3 file(s) in c2 of table1,\x04C#P1\x07\x94 
 ,1435065082383.0fe38a6c782600e4d46f1f148144b489. into 
 tmpdir=hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/.tmp,
  totalSize=43.1m
 2015-06-24 17:40:01,558 DEBUG 
 [regionserver60020-smallCompactions-1434448531302] 
 regionserver.StoreFileInfo: reference 
 'hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/6b1249a3b474474db5cf6c664f2d98dc.d21f8ee8b3c915fd9e1c143a0f1892e5'
  to region=d21f8ee8b3c915fd9e1c143a0f1892e5 
 hfile=6b1249a3b474474db5cf6c664f2d98dc
 2015-06-24 17:40:01,558 DEBUG 
 [regionserver60020-smallCompactions-1434448531302] compactions.Compactor: 
 Compacting 
 hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/6b1249a3b474474db5cf6c664f2d98dc.d21f8ee8b3c915fd9e1c143a0f1892e5-hdfs://m2/hbase2/data/default/table1/d21f8ee8b3c915fd9e1c143a0f1892e5/c2/6b1249a3b474474db5cf6c664f2d98dc-top,
  keycount=575485, bloomtype=ROW, size=20.8m, encoding=NONE, seqNum=9, 
 earliestPutTs=1434875448405
 2015-06-24 17:40:01,558 DEBUG 
 [regionserver60020-smallCompactions-1434448531302] compactions.Compactor: 
 Compacting 
 hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/41e13b20ee79435ebc260d11d3bf9920_SeqId_11_,
  keycount=562988, bloomtype=ROW, size=10.1m, encoding=NONE, seqNum=11, 
 earliestPutTs=1435076732205
 2015-06-24 17:40:01,558 DEBUG 
 [regionserver60020-smallCompactions-1434448531302] compactions.Compactor: 
 Compacting 
 hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/565c45ff05b14a419978834c86defa1a_SeqId_12_,
  keycount=554577, bloomtype=ROW, size=12.2m, encoding=NONE, seqNum=12, 
 earliestPutTs=1435136926850
 2015-06-24 17:40:01,560 ERROR 
 [regionserver60020-smallCompactions-1434448531302] 
 regionserver.CompactSplitThread: Compaction failed Request = 
 regionName=table1,\x04C#P1\x07\x94 
 ,1435065082383.0fe38a6c782600e4d46f1f148144b489., storeName=c2, fileCount=3, 
 fileSize=43.1m (20.8m, 10.1m, 12.2m), priority=1, time=6077271921381072
 java.io.IOException: Could not seek 
 StoreFileScanner[org.apache.hadoop.hbase.io.HalfStoreFileReader$1@1d1eb574, 
 cur=null] to key /c2:/LATEST_TIMESTAMP/DeleteFamily/vlen=0/mvcc=0
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:164)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.init(StoreScanner.java:252)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.init(StoreScanner.java:214)
 at 
 org.apache.hadoop.hbase.regionserver.compactions.Compactor.createScanner(Compactor.java:299)
 at 
 org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:87)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:112)
 at 
 org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1113)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1519)
 at 
 org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:498)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: java.io.IOException: Failed to read compressed block at 10930320, 
 onDiskSizeWithoutHeader=22342, preReadHeaderSize=33, header.length=33, header 
 bytes: 
 

[jira] [Commented] (HBASE-13964) Region normalization for tables under namespace quota

2015-06-24 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600348#comment-14600348
 ] 

Mikhail Antonov commented on HBASE-13964:
-

I meant whether we need to check the param for null before we call #getState; 
looks like we don't. Thanks [~te...@apache.org], +1.

 Region normalization for tables under namespace quota
 -

 Key: HBASE-13964
 URL: https://issues.apache.org/jira/browse/HBASE-13964
 Project: HBase
  Issue Type: Brainstorming
  Components: Balancer, Usability
Reporter: Mikhail Antonov
Assignee: Ted Yu
 Attachments: 13964-branch-1-v2.txt, 13964-v1.txt


 As [~te...@apache.org] pointed out in HBASE-13103, we need to discuss how to 
 normalize regions of tables under namespace control. What was proposed is to 
 disable normalization of such tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13948) Expand hadoop2 versions built on the pre-commit

2015-06-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600472#comment-14600472
 ] 

Hudson commented on HBASE-13948:


FAILURE: Integrated in HBase-TRUNK #6600 (See 
[https://builds.apache.org/job/HBase-TRUNK/6600/])
HBASE-13948 Expand hadoop2 versions built on the pre-commit (ndimiduk: rev 
2df3236a4eee48bf723213a7c4ff3d29c832c8cf)
* dev-support/test-patch.properties


 Expand hadoop2 versions built on the pre-commit
 ---

 Key: HBASE-13948
 URL: https://issues.apache.org/jira/browse/HBASE-13948
 Project: HBase
  Issue Type: Task
  Components: build
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 2.0.0

 Attachments: 13948.patch, HBASE-13948-addendum.patch, 
 HBASE-13948.01.patch


 For the HBase 1.1 line I've been validating builds against the following 
 hadoop versions: 2.2.0 2.3.0 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0. Let's 
 do the same in pre-commit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13963) use jdk appropriate version of jdk.tools and avoid leaking it

2015-06-24 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HBASE-13963:
-
Attachment: HBASE-13963.1.patch

 use jdk appropriate version of jdk.tools and avoid leaking it
 -

 Key: HBASE-13963
 URL: https://issues.apache.org/jira/browse/HBASE-13963
 Project: HBase
  Issue Type: Sub-task
  Components: build, documentation
Reporter: Sean Busbey
Priority: Critical
 Fix For: 2.0.0, 1.2.0, 1.3.0

 Attachments: HBASE-13963.1.patch


 Right now hbase-annotations uses jdk7 jdk.tools and exposes that to 
 downstream via hbase-client. We need it for building and using our custom 
 doclet, but can improve a couple of things: 
 1) We should be using a jdk.tools version based on our java version (use jdk 
 activated profiles to set it)
 2) We should not be including any jdk.tools version in our hbase-client 
 transitive dependencies (or other downstream-facing artifacts). 
 Unfortunately, system dependencies are included in transitive resolution, so 
 we'll need to exclude it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13962) Invalid HFile block magic

2015-06-24 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600584#comment-14600584
 ] 

Andrew Purtell commented on HBASE-13962:


Please mail u...@hbase.apache.org for troubleshooting assistance. This is the 
project dev tracker. 

 Invalid HFile block magic
 -

 Key: HBASE-13962
 URL: https://issues.apache.org/jira/browse/HBASE-13962
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.12.1
 Environment: hadoop 1.2.1
 hbase 0.98.12.1
 jdk 1.7.0.79
 os : ubuntu 12.04.1 amd64
Reporter: reaz hedayati

 Hi everybody,
 my table has some cells that were loaded via bulk load and some cells written 
 by increments.
 We use two jobs to load data into the table: the first job uses Increment on 
 the reduce side and the second uses bulk load.
 First we ran the increment job, then the bulk job followed by the 
 completebulkload job; after that we got this exception:
 2015-06-24 17:40:01,557 INFO  
 [regionserver60020-smallCompactions-1434448531302] regionserver.HRegion: 
 Starting compaction on c2 in region table1,\x04C#P1\x07\x94 
 ,1435065082383.0fe38a6c782600e4d46f1f148144b489.
 2015-06-24 17:40:01,558 INFO  
 [regionserver60020-smallCompactions-1434448531302] regionserver.HStore: 
 Starting compaction of 3 file(s) in c2 of table1,\x04C#P1\x07\x94 
 ,1435065082383.0fe38a6c782600e4d46f1f148144b489. into 
 tmpdir=hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/.tmp,
  totalSize=43.1m
 2015-06-24 17:40:01,558 DEBUG 
 [regionserver60020-smallCompactions-1434448531302] 
 regionserver.StoreFileInfo: reference 
 'hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/6b1249a3b474474db5cf6c664f2d98dc.d21f8ee8b3c915fd9e1c143a0f1892e5'
  to region=d21f8ee8b3c915fd9e1c143a0f1892e5 
 hfile=6b1249a3b474474db5cf6c664f2d98dc
 2015-06-24 17:40:01,558 DEBUG 
 [regionserver60020-smallCompactions-1434448531302] compactions.Compactor: 
 Compacting 
 hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/6b1249a3b474474db5cf6c664f2d98dc.d21f8ee8b3c915fd9e1c143a0f1892e5-hdfs://m2/hbase2/data/default/table1/d21f8ee8b3c915fd9e1c143a0f1892e5/c2/6b1249a3b474474db5cf6c664f2d98dc-top,
  keycount=575485, bloomtype=ROW, size=20.8m, encoding=NONE, seqNum=9, 
 earliestPutTs=1434875448405
 2015-06-24 17:40:01,558 DEBUG 
 [regionserver60020-smallCompactions-1434448531302] compactions.Compactor: 
 Compacting 
 hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/41e13b20ee79435ebc260d11d3bf9920_SeqId_11_,
  keycount=562988, bloomtype=ROW, size=10.1m, encoding=NONE, seqNum=11, 
 earliestPutTs=1435076732205
 2015-06-24 17:40:01,558 DEBUG 
 [regionserver60020-smallCompactions-1434448531302] compactions.Compactor: 
 Compacting 
 hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/565c45ff05b14a419978834c86defa1a_SeqId_12_,
  keycount=554577, bloomtype=ROW, size=12.2m, encoding=NONE, seqNum=12, 
 earliestPutTs=1435136926850
 2015-06-24 17:40:01,560 ERROR 
 [regionserver60020-smallCompactions-1434448531302] 
 regionserver.CompactSplitThread: Compaction failed Request = 
 regionName=table1,\x04C#P1\x07\x94 
 ,1435065082383.0fe38a6c782600e4d46f1f148144b489., storeName=c2, fileCount=3, 
 fileSize=43.1m (20.8m, 10.1m, 12.2m), priority=1, time=6077271921381072
 java.io.IOException: Could not seek 
 StoreFileScanner[org.apache.hadoop.hbase.io.HalfStoreFileReader$1@1d1eb574, 
 cur=null] to key /c2:/LATEST_TIMESTAMP/DeleteFamily/vlen=0/mvcc=0
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:164)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.init(StoreScanner.java:252)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.init(StoreScanner.java:214)
 at 
 org.apache.hadoop.hbase.regionserver.compactions.Compactor.createScanner(Compactor.java:299)
 at 
 org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:87)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:112)
 at 
 org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1113)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1519)
 at 
 org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:498)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: java.io.IOException: Failed to read compressed block at 10930320, 
 

[jira] [Commented] (HBASE-13863) Multi-wal feature breaks reported number and size of HLogs

2015-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600520#comment-14600520
 ] 

Hadoop QA commented on HBASE-13863:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12741728/HBASE-13863-v1.patch
  against master branch at commit 2df3236a4eee48bf723213a7c4ff3d29c832c8cf.
  ATTACHMENT ID: 12741728

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14556//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14556//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14556//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14556//console

This message is automatically generated.

 Multi-wal feature breaks reported number and size of HLogs
 --

 Key: HBASE-13863
 URL: https://issues.apache.org/jira/browse/HBASE-13863
 Project: HBase
  Issue Type: Bug
  Components: regionserver, UI
Reporter: Elliott Clark
Assignee: Abhilash
 Attachments: HBASE-13863-v1.patch, HBASE-13863-v1.patch, 
 HBASE-13863-v1.patch, HBASE-13863-v1.patch, HBASE-13863.patch


 When multi-wal is enabled, the number and size of retained HLogs are always 
 reported as zero.
 We should fix this so that the reported numbers are the sums across all retained logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11085) Incremental Backup Restore support

2015-06-24 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-11085:
--
Status: Open  (was: Patch Available)

 Incremental Backup Restore support
 --

 Key: HBASE-11085
 URL: https://issues.apache.org/jira/browse/HBASE-11085
 Project: HBase
  Issue Type: New Feature
Reporter: Demai Ni
Assignee: Vladimir Rodionov
 Attachments: 
 HBASE-11085-trunk-contains-HBASE-10900-trunk-latest.patch, 
 HBASE-11085-trunk-v1-contains-HBASE-10900-trunk-v4.patch, 
 HBASE-11085-trunk-v1.patch, 
 HBASE-11085-trunk-v2-contain-HBASE-10900-trunk-v4.patch, 
 HBASE-11085-trunk-v2.patch, HLogPlayer.java


 h2. Feature Description
 This JIRA is part of [HBASE-7912|https://issues.apache.org/jira/browse/HBASE-7912] 
 and depends on full backup [HBASE-10900|https://issues.apache.org/jira/browse/HBASE-10900]; 
 see HBASE-10900 for the detailed layout and framework.
 When a client issues an incremental backup request, BackupManager checks the 
 request and then kicks off a global procedure via HBaseAdmin asking all active 
 region servers to roll their logs. Each region server records its log number in 
 ZooKeeper. We then determine which logs need to be included in this incremental 
 backup and use DistCp to copy them to the target location. At the same time, the 
 dependency of the backup image is recorded and later saved in the Backup 
 Manifest file.
 Restore replays the backed-up WAL logs on the target HBase instance; the replay 
 occurs after the full backup.
 Since an incremental backup image depends on the prior full backup image and on 
 any earlier incremental images, the Manifest file stores this dependency lineage 
 during backup and is used at restore time for point-in-time (PIT) restore.
 h2. Use case(i.e  example)
 {code:title=Incremental Backup Restore example|borderStyle=solid}
 /***/
 /* STEP1:  FULL backup from sourcecluster to targetcluster  
 /* if no table name specified, all tables from source cluster will be 
 backuped 
 /***/
 [sourcecluster]$ hbase backup create full 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir t1_dn,t2_dn,t3_dn
 ...
 14/05/09 13:35:46 INFO backup.BackupManager: Backup request 
 backup_1399667695966 has been executed.
 /***/
 /* STEP2:   In HBase Shell, put a few rows
 
 /***/
 hbase(main):002:0> put 't1_dn','row100','cf1:q1','value100_0509_increm1'
 hbase(main):003:0> put 't2_dn','row100','cf1:q1','value100_0509_increm1'
 hbase(main):004:0> put 't3_dn','row100','cf1:q1','value100_0509_increm1'
 /***/
 /* STEP3:   Take the 1st incremental backup   
  
 /***/
 [sourcecluster]$ hbase backup create incremental 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir
 ...
 14/05/09 13:37:45 INFO backup.BackupManager: Backup request 
 backup_1399667851020 has been executed.
 /***/
 /* STEP4:   In HBase Shell, put a few more rows.  
 
 /*   update 'row100', and create new 'row101' 
   
 /***/
 hbase(main):005:0> put 't3_dn','row100','cf1:q1','value101_0509_increm2'
 hbase(main):006:0> put 't2_dn','row100','cf1:q1','value101_0509_increm2'
 hbase(main):007:0> put 't1_dn','row100','cf1:q1','value101_0509_increm2'
 hbase(main):009:0> put 't1_dn','row101','cf1:q1','value101_0509_increm2'
 hbase(main):010:0> put 't2_dn','row101','cf1:q1','value101_0509_increm2'
 hbase(main):011:0> put 't3_dn','row101','cf1:q1','value101_0509_increm2'
 /***/
 /* STEP5:   Take the 2nd incremental backup   
 
 /***/
 [sourcecluster]$ hbase backup create incremental 
 hdfs://hostname.targetcluster.org:9000/userid/backupdir
 ...
 14/05/09 13:39:33 INFO backup.BackupManager: Backup request 
 backup_1399667959165 has been executed.
 

[jira] [Reopened] (HBASE-13948) Expand hadoop2 versions built on the pre-commit

2015-06-24 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk reopened HBASE-13948:
--

2.3 is breaking as well. I'll revert everything for now.

 Expand hadoop2 versions built on the pre-commit
 ---

 Key: HBASE-13948
 URL: https://issues.apache.org/jira/browse/HBASE-13948
 Project: HBase
  Issue Type: Task
  Components: build
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 2.0.0

 Attachments: 13948.patch, HBASE-13948-addendum.patch


 For the HBase 1.1 line I've been validating builds against the following 
 hadoop versions: 2.2.0 2.3.0 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0. Let's 
 do the same in pre-commit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13962) Invalid HFile block magic

2015-06-24 Thread reaz hedayati (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

reaz hedayati updated HBASE-13962:
--
Description: 
Hi everybody,
my table has some cells that were loaded via bulk load and some cells written 
by increments.
We use two jobs to load data into the table: the first job uses Increment on 
the reduce side and the second uses bulk load.
First we ran the increment job, then the bulk job followed by the 
completebulkload job; after that we got this exception:
2015-06-24 17:40:01,557 INFO  
[regionserver60020-smallCompactions-1434448531302] regionserver.HRegion: 
Starting compaction on c2 in region table1,\x04C#P1\x07\x94 
,1435065082383.0fe38a6c782600e4d46f1f148144b489.
2015-06-24 17:40:01,558 INFO  
[regionserver60020-smallCompactions-1434448531302] regionserver.HStore: 
Starting compaction of 3 file(s) in c2 of table1,\x04C#P1\x07\x94 
,1435065082383.0fe38a6c782600e4d46f1f148144b489. into 
tmpdir=hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/.tmp,
 totalSize=43.1m
2015-06-24 17:40:01,558 DEBUG 
[regionserver60020-smallCompactions-1434448531302] regionserver.StoreFileInfo: 
reference 
'hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/6b1249a3b474474db5cf6c664f2d98dc.d21f8ee8b3c915fd9e1c143a0f1892e5'
 to region=d21f8ee8b3c915fd9e1c143a0f1892e5 
hfile=6b1249a3b474474db5cf6c664f2d98dc
2015-06-24 17:40:01,558 DEBUG 
[regionserver60020-smallCompactions-1434448531302] compactions.Compactor: 
Compacting 
hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/6b1249a3b474474db5cf6c664f2d98dc.d21f8ee8b3c915fd9e1c143a0f1892e5-hdfs://m2/hbase2/data/default/table1/d21f8ee8b3c915fd9e1c143a0f1892e5/c2/6b1249a3b474474db5cf6c664f2d98dc-top,
 keycount=575485, bloomtype=ROW, size=20.8m, encoding=NONE, seqNum=9, 
earliestPutTs=1434875448405
2015-06-24 17:40:01,558 DEBUG 
[regionserver60020-smallCompactions-1434448531302] compactions.Compactor: 
Compacting 
hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/41e13b20ee79435ebc260d11d3bf9920_SeqId_11_,
 keycount=562988, bloomtype=ROW, size=10.1m, encoding=NONE, seqNum=11, 
earliestPutTs=1435076732205
2015-06-24 17:40:01,558 DEBUG 
[regionserver60020-smallCompactions-1434448531302] compactions.Compactor: 
Compacting 
hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/565c45ff05b14a419978834c86defa1a_SeqId_12_,
 keycount=554577, bloomtype=ROW, size=12.2m, encoding=NONE, seqNum=12, 
earliestPutTs=1435136926850
2015-06-24 17:40:01,560 ERROR 
[regionserver60020-smallCompactions-1434448531302] 
regionserver.CompactSplitThread: Compaction failed Request = 
regionName=table1,\x04C#P1\x07\x94 
,1435065082383.0fe38a6c782600e4d46f1f148144b489., storeName=c2, fileCount=3, 
fileSize=43.1m (20.8m, 10.1m, 12.2m), priority=1, time=6077271921381072
java.io.IOException: Could not seek 
StoreFileScanner[org.apache.hadoop.hbase.io.HalfStoreFileReader$1@1d1eb574, 
cur=null] to key /c2:/LATEST_TIMESTAMP/DeleteFamily/vlen=0/mvcc=0
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:164)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.init(StoreScanner.java:252)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.init(StoreScanner.java:214)
at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.createScanner(Compactor.java:299)
at 
org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:87)
at 
org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:112)
at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1113)
at 
org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1519)
at 
org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:498)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Failed to read compressed block at 10930320, 
onDiskSizeWithoutHeader=22342, preReadHeaderSize=33, header.length=33, header 
bytes: 
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1549)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1413)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:394)
at 

[jira] [Commented] (HBASE-13835) KeyValueHeap.current might be in heap when exception happens in pollRealKV

2015-06-24 Thread zhouyingchao (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599382#comment-14599382
 ] 

zhouyingchao commented on HBASE-13835:
--

Per John's suggestion, attaching a patch for branch-1.

 KeyValueHeap.current might be in heap when exception happens in pollRealKV
 --

 Key: HBASE-13835
 URL: https://issues.apache.org/jira/browse/HBASE-13835
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Reporter: zhouyingchao
Assignee: zhouyingchao
 Attachments: HBASE-13835-001.patch, HBASE-13835-002.patch, 
 HBASE-13835-002.patch, HBASE-13835-branch1-001.patch


 In a 0.94 hbase cluster, we found a NPE with following stack:
 {code}
 Exception in thread regionserver21600.leaseChecker 
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.KeyValue$KVComparator.compare(KeyValue.java:1530)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:225)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:201)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:191)
 at 
 java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:641)
 at java.util.PriorityQueue.siftDown(PriorityQueue.java:612)
 at java.util.PriorityQueue.poll(PriorityQueue.java:523)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:241)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.close(StoreScanner.java:355)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:237)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.close(HRegion.java:4302)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer$ScannerListener.leaseExpired(HRegionServer.java:3033)
 at org.apache.hadoop.hbase.regionserver.Leases.run(Leases.java:119)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 Before this NPE, another exception happens in pollRealKV, which 
 we think is the culprit of the NPE.
 {code}
 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer:
 java.io.IOException: Could not reseek StoreFileScanner[HFileScanner for 
 reader reader=
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:180)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:371)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:366)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:116)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:455)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:154)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:4124)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:4196)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4067)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4057)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.internalNext(HRegionServer.java:2898)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2833)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2815)
 at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:337)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1583)
 {code}
 Simply put, if an exception happens in pollRealKV(), KeyValueHeap.current might 
 still be in the heap. Later, when KeyValueHeap.close() is called, current is 
 closed first. However, since it might still be in the heap, it would either be 
 closed again or have its peek() (which returns null once it is closed) called 
 by the heap's poll(). Neither case is expected.
 Although the problem was caught on 0.94, from reading the code it is still present in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13670) [HBase MOB] ExpiredMobFileCleaner tool deletes mob files later for one more day after they are expired

2015-06-24 Thread Gururaj Shetty (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599425#comment-14599425
 ] 

Gururaj Shetty commented on HBASE-13670:


Hi [~anoop.hbase]
Attached the patch. Kindly review the same.

Thanks


 [HBase MOB] ExpiredMobFileCleaner tool deletes mob files later for one more 
 day after they are expired
 --

 Key: HBASE-13670
 URL: https://issues.apache.org/jira/browse/HBASE-13670
 Project: HBase
  Issue Type: Improvement
  Components: documentation, mob
Affects Versions: hbase-11339
Reporter: Y. SREENIVASULU REDDY
Assignee: Gururaj Shetty
 Fix For: hbase-11339

 Attachments: HBASE-13670.patch


 Currently the ExpiredMobFileCleaner cleans expired mob files according to the 
 date in the mob file name. The minimum unit of that date is a day, so the 
 ExpiredMobFileCleaner may only remove expired mob files up to one day after 
 they have actually expired. We need to document this.
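 A small self-contained sketch of the day-granularity effect described above 
 (the date format and the 3-day TTL are illustrative assumptions, not the 
 cleaner's actual code):
 {code}
 import java.text.SimpleDateFormat;
 import java.util.Date;
 import java.util.concurrent.TimeUnit;

 public class MobExpiryLagExample {
   public static void main(String[] args) throws Exception {
     SimpleDateFormat day = new SimpleDateFormat("yyyyMMdd");
     long ttlMillis = TimeUnit.DAYS.toMillis(3);      // assume a 3-day TTL on the family
     Date fileDate = day.parse("20150620");           // date embedded in the mob file name
     // The cut-off is computed at day granularity, so a file written late in the
     // day can outlive its TTL by up to one extra day before it becomes deletable.
     Date cutOff = day.parse(day.format(new Date(System.currentTimeMillis() - ttlMillis)));
     System.out.println("deletable=" + fileDate.before(cutOff));
   }
 }
 {code}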



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13835) KeyValueHeap.current might be in heap when exception happens in pollRealKV

2015-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599441#comment-14599441
 ] 

Hadoop QA commented on HBASE-13835:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12741605/HBASE-13835-branch1-001.patch
  against master branch at commit b7f241d73b79ec22db2c03cb6b384b76185f0f85.
  ATTACHMENT ID: 12741605

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14543//console

This message is automatically generated.

 KeyValueHeap.current might be in heap when exception happens in pollRealKV
 --

 Key: HBASE-13835
 URL: https://issues.apache.org/jira/browse/HBASE-13835
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Reporter: zhouyingchao
Assignee: zhouyingchao
 Attachments: HBASE-13835-001.patch, HBASE-13835-002.patch, 
 HBASE-13835-002.patch, HBASE-13835-branch1-001.patch


 In a 0.94 hbase cluster, we found a NPE with following stack:
 {code}
 Exception in thread regionserver21600.leaseChecker 
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.KeyValue$KVComparator.compare(KeyValue.java:1530)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:225)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:201)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap$KVScannerComparator.compare(KeyValueHeap.java:191)
 at 
 java.util.PriorityQueue.siftDownUsingComparator(PriorityQueue.java:641)
 at java.util.PriorityQueue.siftDown(PriorityQueue.java:612)
 at java.util.PriorityQueue.poll(PriorityQueue.java:523)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:241)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.close(StoreScanner.java:355)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:237)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.close(HRegion.java:4302)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer$ScannerListener.leaseExpired(HRegionServer.java:3033)
 at org.apache.hadoop.hbase.regionserver.Leases.run(Leases.java:119)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 Before this NPE, another exception happens in pollRealKV, which 
 we think is the culprit of the NPE.
 {code}
 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer:
 java.io.IOException: Could not reseek StoreFileScanner[HFileScanner for 
 reader reader=
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:180)
 at 
 org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:371)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:366)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:116)
 at 
 org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:455)
 at 
 org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:154)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:4124)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:4196)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4067)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4057)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.internalNext(HRegionServer.java:2898)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2833)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2815)
 at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:337)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1583)
 {code}
 Simply put, if there is an exception happens in pollRealKV( ), the 
 

[jira] [Updated] (HBASE-13893) Replace HTable with Table in client tests

2015-06-24 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-13893:
--
Attachment: HBASE-13893-v5 (1).patch

Retry

 Replace HTable with Table in client tests
 -

 Key: HBASE-13893
 URL: https://issues.apache.org/jira/browse/HBASE-13893
 Project: HBase
  Issue Type: Bug
  Components: Client, test
Reporter: Jurriaan Mous
Assignee: Jurriaan Mous
 Attachments: HBASE-13893-v1.patch, HBASE-13893-v2.patch, 
 HBASE-13893-v3.patch, HBASE-13893-v3.patch, HBASE-13893-v4.patch, 
 HBASE-13893-v5 (1).patch, HBASE-13893-v5 (1).patch, HBASE-13893-v5 (1).patch, 
 HBASE-13893-v5.patch, HBASE-13893-v5.patch, HBASE-13893-v5.patch, 
 HBASE-13893.patch


 Many client tests reference the concrete HTable implementation instead of the 
 generic Table interface, which makes it impossible to reuse those tests for 
 another Table implementation. This issue focuses on the HTable instances in the 
 relevant client tests, not on every HTable instance in the code base.
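 A minimal sketch of the direction of the change, using the standard 1.0+ client 
 API: a test holds the generic Table interface obtained from a Connection instead 
 of constructing HTable directly.
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Connection;
 import org.apache.hadoop.hbase.client.ConnectionFactory;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.util.Bytes;

 public class TableInterfaceExample {
   public static void main(String[] args) throws Exception {
     Configuration conf = HBaseConfiguration.create();
     // The test only needs the Table interface; the concrete implementation stays hidden.
     try (Connection connection = ConnectionFactory.createConnection(conf);
          Table table = connection.getTable(TableName.valueOf("testTable"))) {
       Put put = new Put(Bytes.toBytes("row1"));
       put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
       table.put(put);
     }
   }
 }
 {code}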



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13962) Invalid HFile block magic

2015-06-24 Thread reaz hedayati (JIRA)
reaz hedayati created HBASE-13962:
-

 Summary: Invalid HFile block magic
 Key: HBASE-13962
 URL: https://issues.apache.org/jira/browse/HBASE-13962
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.12.1
 Environment: hadoop 1.2.1
hbase 0.98.12.1
jdk 1.7.0.79
os : ubuntu 12.04.1 amd64
Reporter: reaz hedayati


Hi everybody,
my table has some cells that were loaded via bulk load and some cells written 
by increments.
We use two jobs to load data into the table: the first job uses Increment on 
the reduce side and the second uses bulk load.
First we ran the increment job, then the bulk job; after that we got this exception:
2015-06-24 17:40:01,557 INFO  
[regionserver60020-smallCompactions-1434448531302] regionserver.HRegion: 
Starting compaction on c2 in region table1,\x04C#P1\x07\x94 
,1435065082383.0fe38a6c782600e4d46f1f148144b489.
2015-06-24 17:40:01,558 INFO  
[regionserver60020-smallCompactions-1434448531302] regionserver.HStore: 
Starting compaction of 3 file(s) in c2 of table1,\x04C#P1\x07\x94 
,1435065082383.0fe38a6c782600e4d46f1f148144b489. into 
tmpdir=hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/.tmp,
 totalSize=43.1m
2015-06-24 17:40:01,558 DEBUG 
[regionserver60020-smallCompactions-1434448531302] regionserver.StoreFileInfo: 
reference 
'hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/6b1249a3b474474db5cf6c664f2d98dc.d21f8ee8b3c915fd9e1c143a0f1892e5'
 to region=d21f8ee8b3c915fd9e1c143a0f1892e5 
hfile=6b1249a3b474474db5cf6c664f2d98dc
2015-06-24 17:40:01,558 DEBUG 
[regionserver60020-smallCompactions-1434448531302] compactions.Compactor: 
Compacting 
hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/6b1249a3b474474db5cf6c664f2d98dc.d21f8ee8b3c915fd9e1c143a0f1892e5-hdfs://m2/hbase2/data/default/table1/d21f8ee8b3c915fd9e1c143a0f1892e5/c2/6b1249a3b474474db5cf6c664f2d98dc-top,
 keycount=575485, bloomtype=ROW, size=20.8m, encoding=NONE, seqNum=9, 
earliestPutTs=1434875448405
2015-06-24 17:40:01,558 DEBUG 
[regionserver60020-smallCompactions-1434448531302] compactions.Compactor: 
Compacting 
hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/41e13b20ee79435ebc260d11d3bf9920_SeqId_11_,
 keycount=562988, bloomtype=ROW, size=10.1m, encoding=NONE, seqNum=11, 
earliestPutTs=1435076732205
2015-06-24 17:40:01,558 DEBUG 
[regionserver60020-smallCompactions-1434448531302] compactions.Compactor: 
Compacting 
hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/565c45ff05b14a419978834c86defa1a_SeqId_12_,
 keycount=554577, bloomtype=ROW, size=12.2m, encoding=NONE, seqNum=12, 
earliestPutTs=1435136926850
2015-06-24 17:40:01,560 ERROR 
[regionserver60020-smallCompactions-1434448531302] 
regionserver.CompactSplitThread: Compaction failed Request = 
regionName=table1,\x04C#P1\x07\x94 
,1435065082383.0fe38a6c782600e4d46f1f148144b489., storeName=c2, fileCount=3, 
fileSize=43.1m (20.8m, 10.1m, 12.2m), priority=1, time=6077271921381072
java.io.IOException: Could not seek 
StoreFileScanner[org.apache.hadoop.hbase.io.HalfStoreFileReader$1@1d1eb574, 
cur=null] to key /c2:/LATEST_TIMESTAMP/DeleteFamily/vlen=0/mvcc=0
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:164)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.init(StoreScanner.java:252)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.init(StoreScanner.java:214)
at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.createScanner(Compactor.java:299)
at 
org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:87)
at 
org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:112)
at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1113)
at 
org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1519)
at 
org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:498)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Failed to read compressed block at 10930320, 
onDiskSizeWithoutHeader=22342, preReadHeaderSize=33, header.length=33, header 
bytes: 
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1549)
at 

[jira] [Commented] (HBASE-13961) SnapshotManager#initialize should set snapshotLayoutVersion if it allows to create snapshot with old layout format

2015-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599595#comment-14599595
 ] 

Hadoop QA commented on HBASE-13961:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12741574/HBASE-13961-0.98-v1.patch
  against 0.98 branch at commit b7f241d73b79ec22db2c03cb6b384b76185f0f85.
  ATTACHMENT ID: 12741574

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.3.0 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
24 warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14542//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14542//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14542//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14542//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14542//console

This message is automatically generated.

 SnapshotManager#initialize should set snapshotLayoutVersion if it allows to 
 create snapshot with old layout format
 --

 Key: HBASE-13961
 URL: https://issues.apache.org/jira/browse/HBASE-13961
 Project: HBase
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 0.98.13
Reporter: cuijianwei
Priority: Minor
 Attachments: HBASE-13961-0.98-v1.patch


 In 0.98, it seems the snapshot layout version can be configured via 
 hbase.snapshot.format.version. However, SnapshotManager does not set 
 snapshotLayoutVersion in its initialize(...) method, so it always creates 
 snapshots with the latest layout format even when hbase.snapshot.format.version 
 is set to SnapshotManifestV1.DESCRIPTOR_VERSION in the configuration.
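 A minimal sketch of the idea (not an actual patch; the method signature and the 
 V2 constant as default are assumptions): have initialize() honour 
 hbase.snapshot.format.version instead of always using the latest layout.
 {code}
 // Sketch only: read the configured layout version during initialization.
 void initialize(MasterServices master) throws IOException {
   Configuration conf = master.getConfiguration();
   this.snapshotLayoutVersion = conf.getInt("hbase.snapshot.format.version",
       SnapshotManifestV2.DESCRIPTOR_VERSION);   // fall back to the newest layout
   // ... the rest of the existing initialization continues unchanged ...
 }
 {code}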



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

