[jira] [Commented] (HBASE-18393) hbase shell non-interactive broken

2017-07-18 Thread Samir Ahmic (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091238#comment-16091238
 ] 

Samir Ahmic commented on HBASE-18393:
-

[~mdrob] the fix for hirb.rb looks fine. Regarding the test, it may also fail 
because of the "hbase" script itself, for example if JAVA_HOME is not found in 
the env where the test is running. I think that is OK, since this way we also 
test the "hbase" script for potential issues; we just need to ensure the env is 
set up correctly when running the test.  

> hbase shell non-interactive broken  
> 
>
> Key: HBASE-18393
> URL: https://issues.apache.org/jira/browse/HBASE-18393
> Project: HBase
>  Issue Type: Bug
>  Components: scripts, shell
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Samir Ahmic
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18393.patch, HBASE-18393.v2.patch
>
>
> Here is the error on the command line:
> {code}
> $ echo "list" | ./hbase shell -n
> 2017-07-17 08:01:09,442 WARN  [main] util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> ERROR NoMethodError: undefined method `encoding' for #
> Did you mean?  set_encoding
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18332) Upgrade asciidoctor-maven-plugin

2017-07-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091305#comment-16091305
 ] 

Hudson commented on HBASE-18332:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3392 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3392/])
HBASE-18332 Upgrade asciidoctor-maven-plugin (misty: rev 
c423dc7950c4746220498b0e0b8884c51c51e77e)
* (edit) src/main/asciidoc/book.adoc
* (edit) pom.xml


> Upgrade asciidoctor-maven-plugin
> 
>
> Key: HBASE-18332
> URL: https://issues.apache.org/jira/browse/HBASE-18332
> Project: HBase
>  Issue Type: Improvement
>  Components: site
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18332.master.001.patch, 
> HBASE-18332.master.002.patch
>
>
> HBASE-18264 upgraded asciidoctor-maven-plugin and asciidoctorj-pdf but it 
> caused build failure for {{site}} goal due to a change in pdfmark generation.
> These plugins were rolled back in HBASE-18320.





[jira] [Updated] (HBASE-18251) Remove unnecessary traversing to the first and last keys in the CellSet

2017-07-18 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HBASE-18251:
-
Attachment: HBASE-18251.patch

I've just attached the patch.

The patch uses first/lastEntry().getValue() instead of first/lastKey(), because 
ConcurrentSkipListMap.put() does not replace the key when the map previously 
contained a mapping for that key. As a result, first/lastKey() will return the 
old cell instance even after we overwrite the value via 
ConcurrentSkipListMap.put(). Therefore we should not use first/lastKey() in 
this case.
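The key-retention behavior described above can be demonstrated with a plain ConcurrentSkipListMap, independent of HBase's CellSet (a minimal sketch; the String keys stand in for Cells):

```java
import java.util.concurrent.ConcurrentSkipListMap;

public class SkipListKeyRetention {
    public static void main(String[] args) {
        ConcurrentSkipListMap<String, String> map = new ConcurrentSkipListMap<>();

        // Two distinct instances that compare equal as keys.
        String oldCell = new String("row1");
        String newCell = new String("row1");

        map.put(oldCell, oldCell);
        map.put(newCell, newCell); // replaces the value but keeps the old key

        // firstKey() still returns the ORIGINAL key instance...
        System.out.println(map.firstKey() == oldCell);              // true
        // ...while firstEntry().getValue() returns the NEW instance.
        System.out.println(map.firstEntry().getValue() == newCell); // true
    }
}
```

This is exactly why the patch reads the entry value rather than the key when the value has been overwritten in place.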


> Remove unnecessary traversing to the first and last keys in the CellSet
> ---
>
> Key: HBASE-18251
> URL: https://issues.apache.org/jira/browse/HBASE-18251
> Project: HBase
>  Issue Type: Bug
>Reporter: Anastasia Braginsky
> Attachments: HBASE-18251.patch
>
>
> The implementation of finding the first and last keys in the CellSet is as 
> follows:
> {code}
>  public Cell first() {
> return this.delegatee.get(this.delegatee.firstKey());
>   }
>   public Cell last() {
> return this.delegatee.get(this.delegatee.lastKey());
>   }
> {code}
> Recall we have a Cell-to-Cell mapping, therefore the methods that bring the 
> first/last key already return a Cell. Thus there is no need to waste time on 
> the get() method for the same Cell.
> Fix: return just first/lastKey(); this should be at least twice as efficient.





[jira] [Commented] (HBASE-18127) Allow regionobserver to optionally skip postPut/postDelete when postBatchMutate was called

2017-07-18 Thread Abhishek Singh Chouhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091334#comment-16091334
 ] 

Abhishek Singh Chouhan commented on HBASE-18127:


I can have a go at this if no one else is working on it.

> Allow regionobserver to optionally skip postPut/postDelete when 
> postBatchMutate was called
> --
>
> Key: HBASE-18127
> URL: https://issues.apache.org/jira/browse/HBASE-18127
> Project: HBase
>  Issue Type: New Feature
>Reporter: Lars Hofhansl
>
> Right now a RegionObserver can only statically implement one or the other. In 
> scenarios where we sometimes need to work on the single postPut and 
> postDelete hooks and sometimes on the batchMutate hooks, there is currently 
> no place to convey this information to the single hooks, i.e. "the work has 
> been done in the batch, so skip the single hooks."
> There are various solutions:
> 1. Allow some state to be passed _per operation_.
> 2. Remove the single hooks and always call only the batch hooks (with a default 
> wrapper for the single hooks).
> 3. More?
> [~apurtell], this is what we had discussed a few days back.
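Solution 1 could be sketched with a per-operation marker attribute (the class and attribute names below are hypothetical stand-ins, not the actual coprocessor API; real HBase Mutations do expose setAttribute/getAttribute with byte[] values):

```java
import java.util.HashMap;
import java.util.Map;

public class SkipSingleHooks {
    // Stand-in for an HBase Mutation carrying per-operation attributes.
    static class Mutation {
        private final Map<String, byte[]> attrs = new HashMap<>();
        void setAttribute(String k, byte[] v) { attrs.put(k, v); }
        byte[] getAttribute(String k) { return attrs.get(k); }
    }

    // Hypothetical marker meaning "work already done in the batch hook".
    static final String DONE_IN_BATCH = "_done_in_batch_";

    static void postBatchMutate(Mutation[] batch) {
        for (Mutation m : batch) {
            // ... do the per-batch work once, then mark each operation ...
            m.setAttribute(DONE_IN_BATCH, new byte[]{1});
        }
    }

    // Returns true if the single hook actually did work, false if it skipped.
    static boolean postPut(Mutation m) {
        if (m.getAttribute(DONE_IN_BATCH) != null) {
            return false; // skip: the batch hook already handled this operation
        }
        // ... per-operation work ...
        return true;
    }

    public static void main(String[] args) {
        Mutation m = new Mutation();
        postBatchMutate(new Mutation[]{m});
        System.out.println(postPut(m)); // false: the single hook was skipped
    }
}
```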





[jira] [Commented] (HBASE-18332) Upgrade asciidoctor-maven-plugin

2017-07-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091213#comment-16091213
 ] 

Hudson commented on HBASE-18332:


FAILURE: Integrated in Jenkins build HBase-2.0 #191 (See 
[https://builds.apache.org/job/HBase-2.0/191/])
HBASE-18332 Upgrade asciidoctor-maven-plugin (misty: rev 
80959b45286f3f8163e463358aeaf8342c002228)
* (edit) src/main/asciidoc/book.adoc
* (edit) pom.xml


> Upgrade asciidoctor-maven-plugin
> 
>
> Key: HBASE-18332
> URL: https://issues.apache.org/jira/browse/HBASE-18332
> Project: HBase
>  Issue Type: Improvement
>  Components: site
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18332.master.001.patch, 
> HBASE-18332.master.002.patch
>
>
> HBASE-18264 upgraded asciidoctor-maven-plugin and asciidoctorj-pdf but it 
> caused build failure for {{site}} goal due to a change in pdfmark generation.
> These plugins were rolled back in HBASE-18320.





[jira] [Updated] (HBASE-18251) Remove unnecessary traversing to the first and last keys in the CellSet

2017-07-18 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HBASE-18251:
-
Assignee: Toshihiro Suzuki
  Status: Patch Available  (was: Open)

> Remove unnecessary traversing to the first and last keys in the CellSet
> ---
>
> Key: HBASE-18251
> URL: https://issues.apache.org/jira/browse/HBASE-18251
> Project: HBase
>  Issue Type: Bug
>Reporter: Anastasia Braginsky
>Assignee: Toshihiro Suzuki
> Attachments: HBASE-18251.patch
>
>
> The implementation of finding the first and last keys in the CellSet is as 
> follows:
> {code}
>  public Cell first() {
> return this.delegatee.get(this.delegatee.firstKey());
>   }
>   public Cell last() {
> return this.delegatee.get(this.delegatee.lastKey());
>   }
> {code}
> Recall we have a Cell-to-Cell mapping, therefore the methods that bring the 
> first/last key already return a Cell. Thus there is no need to waste time on 
> the get() method for the same Cell.
> Fix: return just first/lastKey(); this should be at least twice as efficient.





[jira] [Commented] (HBASE-18393) hbase shell non-interactive broken

2017-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091227#comment-16091227
 ] 

Hadoop QA commented on HBASE-18393:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue}  0m  
4s{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue}  0m  
4s{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
50s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m  4s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 15s{color} 
| {color:red} hbase-shell in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}148m 52s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}208m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestShell |
|   | hadoop.hbase.client.TestShell |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-18393 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877729/HBASE-18393.v2.patch |
| Optional Tests |  asflicense  rubocop  ruby_lint  javac  javadoc  unit  
findbugs  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 527f74a6ce48 

[jira] [Commented] (HBASE-18377) Error handling for FileNotFoundException should consider RemoteException in openReader()

2017-07-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091306#comment-16091306
 ] 

Hudson commented on HBASE-18377:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3392 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3392/])
HBASE-18377 Error handling for FileNotFoundException should consider (tedyu: 
rev 0c2915b48e157724cefee9f0dbe069ce3f04d0d4)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/WALEntryStream.java


> Error handling for FileNotFoundException should consider RemoteException in 
> openReader()
> 
>
> Key: HBASE-18377
> URL: https://issues.apache.org/jira/browse/HBASE-18377
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 18377.branch-1.3.txt, 18377.v1.txt
>
>
> In region server log, I observed the following:
> {code}
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File 
> does not exist: 
> /apps/hbase/data/WALs/lx.p.com,16020,1497300923131/497300923131. 
> default.1497302873178
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1860)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1831)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1744)
> ...
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:326)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:162)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:782)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:293)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:267)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:255)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:414)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:69)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:605)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:364)
> {code}
> We have code in ReplicationSource#openReader() which is supposed to handle 
> FileNotFoundException, but the case of a RemoteException wrapping 
> FileNotFoundException was missed.
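The missed case can be sketched with a minimal stand-in for org.apache.hadoop.ipc.RemoteException (the real Hadoop class carries the server-side exception class name and offers unwrapRemoteException() to rebuild the original exception locally; the stand-in here only mimics that for FileNotFoundException):

```java
import java.io.FileNotFoundException;
import java.io.IOException;

public class UnwrapRemote {
    // Stand-in for org.apache.hadoop.ipc.RemoteException.
    static class RemoteException extends IOException {
        private final String className;
        RemoteException(String className, String msg) { super(msg); this.className = className; }
        IOException unwrapRemoteException() {
            if (FileNotFoundException.class.getName().equals(className)) {
                return new FileNotFoundException(getMessage());
            }
            return this;
        }
    }

    // The fix pattern: unwrap before the instanceof check, so a wrapped
    // FileNotFoundException is handled the same way as a direct one.
    static boolean isFileNotFound(IOException e) {
        if (e instanceof RemoteException) {
            e = ((RemoteException) e).unwrapRemoteException();
        }
        return e instanceof FileNotFoundException;
    }

    public static void main(String[] args) {
        IOException wrapped = new RemoteException(
                FileNotFoundException.class.getName(), "File does not exist: /apps/hbase/...");
        System.out.println(isFileNotFound(wrapped));                  // true
        System.out.println(isFileNotFound(new IOException("other"))); // false
    }
}
```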





[jira] [Updated] (HBASE-18389) Remove byte[] from formal parameter of sizeOf() of ClassSize, ClassSize.MemoryLayout and ClassSize.UnsafeLayout

2017-07-18 Thread Xiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-18389:
-
Status: Open  (was: Patch Available)

> Remove byte[] from formal parameter of sizeOf() of ClassSize, 
> ClassSize.MemoryLayout and ClassSize.UnsafeLayout
> ---
>
> Key: HBASE-18389
> URL: https://issues.apache.org/jira/browse/HBASE-18389
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-18389.master.000.patch
>
>
> In the ClassSize class and its internal static classes, the sizeOf() function 
> has 2 formal parameters: byte[] b and int len. But the function's internal 
> logic does not use or refer to byte[] b, so it could be removed.
> {code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java|borderStyle=solid}
> // Class of ClassSize
> public static long sizeOf(byte[] b, int len) {
>   return memoryLayout.sizeOf(b, len);
> }
> // Class of ClassSize.MemoryLayout
> long sizeOf(byte[] b, int len) {
>   return align(arrayHeaderSize() + len);
> }
> // Class of ClassSize.UnsafeLayout
> long sizeOf(byte[] b, int len) {
>   return align(arrayHeaderSize() + len * 
> UnsafeAccess.theUnsafe.ARRAY_BYTE_INDEX_SCALE);
> }
> {code}
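A sketch of the simplified signature with the unused byte[] dropped. The header and alignment constants below are assumptions for illustration (a 64-bit JVM layout); the real values come from ClassSize's memory layout detection:

```java
public class ArraySize {
    // Assumed constants: 16-byte array header, 8-byte object alignment.
    static final int ARRAY_HEADER = 16;
    static final int ALIGN = 8;

    // Round num up to the next multiple of ALIGN.
    static long align(long num) {
        return (num + ALIGN - 1) & ~(ALIGN - 1L);
    }

    // Proposed signature: only the length participates in the computation,
    // so the byte[] parameter is gone.
    static long sizeOf(int len) {
        return align(ARRAY_HEADER + len);
    }

    public static void main(String[] args) {
        System.out.println(sizeOf(5));   // 24: 16-byte header + 5 bytes, aligned to 8
        System.out.println(sizeOf(100)); // 120
    }
}
```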





[jira] [Commented] (HBASE-18401) Region Replica broken in branch-1

2017-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091322#comment-16091322
 ] 

Hadoop QA commented on HBASE-18401:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
15s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_131 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
13m  1s{color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
40s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 21s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.mapreduce.TestMultiTableSnapshotInputFormat 
|
| Timed out junit tests | 

[jira] [Commented] (HBASE-18375) The pool chunks from ChunkCreator are deallocated while in pool because there is no reference to them

2017-07-18 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091256#comment-16091256
 ] 

Anastasia Braginsky commented on HBASE-18375:
-

Hey, [~ram_krish]! No problem, here are more explanations:

bq. So you say that the ByteBuffer inside the 'Chunk' becomes null and so 
referencing the BB inside the chunk gives you NPE or is it the Chunk itself 
being NULL? Can you paste the stack trace here?
What I see is the following scenario (I am running with CellChunkMap):

1. Chunk C is allocated from the pool and is used as part of Segment S. S is 
currently part of the compaction pipeline. C is protected with a strong 
pointer, as it is a data chunk of the CellChunkMap. 
2. Due to the snapshot of the pipeline, Segment S is swapped out of the 
pipeline.
3. S is closed; C is removed from the strong map and is no longer referenced 
from anywhere.
4. C is returned to the pool, but in parallel the GC is already freeing C.
5. As a result, C's chunk ID is entered into the weak map, but it references null...

So I am getting null when I try to translate C's chunk ID. I will copy-paste 
the stack here, but it has heavy printouts all around. It took intensive 
debugging until I understood this scenario.
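The strong-map/weak-map interplay in steps 1-5 can be sketched with plain references (the Chunk class here is a hypothetical stand-in for the pooled chunk; note the GC step is nondeterministic, which is exactly what makes the race intermittent):

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

public class ChunkMapSketch {
    // Stand-in for a pooled chunk; real chunks wrap a ByteBuffer.
    static class Chunk {
        final int id;
        Chunk(int id) { this.id = id; }
    }

    public static void main(String[] args) {
        Map<Integer, Chunk> strongMap = new HashMap<>();
        Map<Integer, WeakReference<Chunk>> weakMap = new HashMap<>();

        Chunk c = new Chunk(199);
        strongMap.put(c.id, c);
        weakMap.put(c.id, new WeakReference<>(c));
        c = null;

        // While the strong map holds the chunk, the weak reference cannot be
        // cleared: translating the chunk ID is safe.
        System.out.println(weakMap.get(199).get() != null); // true

        // Step 3: the segment is closed and the chunk leaves the strong map.
        // Now only the weak reference remains, so the GC is free to reclaim
        // the chunk before the pool re-registers it.
        strongMap.remove(199);
        System.gc();
        // weakMap.get(199).get() may now be null: the NPE scenario above.
    }
}
```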
{code}
2017-07-16 16:03:31,109 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=24,queue=0,port=16020-inmemoryCompactions-1500216113902]
 regionserver.CompactingMemStore: IN-MEMORY FLUSH: Pushing active segment into 
compaction pipeline [Region: 
usertable,user4599,1500212802830.bf1a03cc3ca0f1788720512a8e9275d0., Store: 
values, values]
2017-07-16 16:03:31,109 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=24,queue=0,port=16020-inmemoryCompactions-1500216113902]
 regionserver.MemStoreCompactor: Starting the In-Memory Compaction for store 
values
2017-07-16 16:03:31,109 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=24,queue=0,port=16020-inmemoryCompactions-1500216113902]
 regionserver.MemStoreCompactor: The youngest segment in the in-Memory 
Compaction Pipeline for store values is going to be flattened to the CHUNK_MAP
2017-07-16 16:03:31,225 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=24,queue=0,port=16020-inmemoryCompactions-1500216113902]
 regionserver.CellChunkImmutableSegment: Number of new index chunks 3. The old 
data chunks saved while flattening [1, 3, 15, 20, 32, 50, 51, 67, 72, 73, 75, 
96, 121, 132, 155, 197, 199]
2017-07-16 16:03:31,236 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=16020] ipc.RpcServer: 
callId: 820212 service: ClientService methodName: Get size: 128 connection: 
10.73.119.93:44692 deadline: 9223372036854775807
java.io.IOException: java.lang.IllegalArgumentException:
 <<< In CellChunkMap, cell must be associated with chunk with chunk ID 199 read 
from offset 947764. We were looking for a cell at global index 47388 and index 
inside chunk 47388, NUM_OF_CELL_REPS_IN_CHUNK: 104857.
 <<< The index chunk in chunk array is Chunk@1992770427 allocs=0waste=2097148. 
Number of chunks in the creator maps: 200. The chunk ID counter: 201. Verifying 
once again week map: java.lang.ref.WeakReference@39f3b87f --> null.
 <<< WEAK MAP: {198=java.lang.ref.WeakReference@44f41b54, 
199=java.lang.ref.WeakReference@39f3b87f, 
200=java.lang.ref.WeakReference@4a68f3b9}.
 <<< STRONG MAP: {1=Chunk@116964628 allocs=13539waste=39, 2=Chunk@577555956 
allocs=13539waste=92, 3=Chunk@1185303638 allocs=13539waste=36, 
4=Chunk@1264486912 allocs=13547waste=151, 5=Chunk@143494754 
allocs=13541waste=52, 6=Chunk@1279980010 allocs=13548waste=48, 
7=Chunk@955957449 allocs=13539waste=50, 8=Chunk@122372956 allocs=13547waste=66, 
9=Chunk@1666265929 allocs=13541waste=121, 10=Chunk@2022660968 
allocs=13544waste=85, 11=Chunk@379396152 allocs=13539waste=57, 
12=Chunk@1259454484 allocs=13544waste=90, 13=Chunk@556824900 
allocs=13547waste=124, 14=Chunk@593895953 allocs=13547waste=107, 
15=Chunk@1552239269 allocs=13539waste=58, 16=Chunk@424747073 
allocs=13539waste=98, 17=Chunk@998867062 allocs=13540waste=126, 
18=Chunk@695032764 allocs=13541waste=79, 19=Chunk@1598815318 
allocs=13540waste=146, 20=Chunk@1334349014 allocs=13539waste=105, 
21=Chunk@947203937 allocs=13547waste=82, 22=Chunk@2064003944 
allocs=13544waste=102, 23=Chunk@2075121938 allocs=13541waste=20, 
24=Chunk@1892287629 allocs=13544waste=119, 25=Chunk@1641362386 
allocs=13539waste=146, 26=Chunk@721601011 allocs=13544waste=123, 
27=Chunk@778107592 allocs=12waste=2095291, 28=Chunk@237923813 
allocs=13541waste=26, 29=Chunk@1177388113 allocs=13547waste=108, 
30=Chunk@2073942078 allocs=13539waste=130, 31=Chunk@1384303423 
allocs=13544waste=68, 32=Chunk@801031631 allocs=13539waste=81, 
33=Chunk@94094181 allocs=13545waste=16, 34=Chunk@282245056 
allocs=13541waste=65, 35=Chunk@1414356438 allocs=13548waste=7, 
36=Chunk@218397229 allocs=13540waste=17, 37=Chunk@1449066499 
allocs=0waste=2097148, 38=Chunk@1404490175 allocs=13547waste=115, 
39=Chunk@1831568882 allocs=13541waste=93, 

[jira] [Commented] (HBASE-18374) RegionServer Metrics improvements

2017-07-18 Thread Abhishek Singh Chouhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091333#comment-16091333
 ] 

Abhishek Singh Chouhan commented on HBASE-18374:


[~apurtell] [~lhofhansl] Let me know whether this looks good. I was not sure if 
we want to add slowReq counters for these new metrics, so for now I have not 
added them; I can add them too if people want (or later, if there is demand).

> RegionServer Metrics improvements
> -
>
> Key: HBASE-18374
> URL: https://issues.apache.org/jira/browse/HBASE-18374
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 3.0.0
>
> Attachments: HBASE-18374.master.001.patch, 
> HBASE-18374.master.002.patch, HBASE-18374.master.003.patch
>
>
> At the RS level we have latency metrics for mutate/puts and deletes that are 
> updated per batch (i.e., at the end of the entire batch op, if it contains a 
> put/delete, update the respective metric), in contrast with 
> append/increment/get metrics that are updated per op. This is a bit 
> ambiguous, since the delete and put metrics are updated for multi-row 
> mutations that happen to contain a put/delete. We should rename the metric 
> (e.g. delete_batch) and/or add a better description. We should also add 
> metrics for single delete client operations that come through the 
> RSRpcServer.mutate path, as well as metrics for checkAndPut and 
> checkAndDelete.





[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF

2017-07-18 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091192#comment-16091192
 ] 

Anoop Sam John commented on HBASE-16993:


Yes.
The wastage concern was more that, say, the block size is 46KB but the default 
settings have a bucket size of 49KB, which means 3KB wasted per block. But the 
initial config can be tuned; we allow that. One can make this a 46KB bucket 
size, and that is absolutely fine. The issue is that the block (bucket) size 
can NOT be any value that is NOT a multiple of 256. I think the reporter does 
not specifically want that either. I feel the multiple-of-256 requirement is 
very much fine, but we don't clearly document it. We should add that to the 
config description. Also, the code should check this and fail early on an 
unacceptable size value.
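The early-fail check suggested above could look roughly like this (a sketch; the method name and placement are illustrative, not the actual BucketCache code):

```java
public class BucketSizeCheck {
    // Every configured bucket size must be a multiple of 256, so reject the
    // configuration up front instead of failing later with a corrupt-block
    // style IOException.
    static void validate(int[] bucketSizes) {
        for (int size : bucketSizes) {
            if (size % 256 != 0) {
                throw new IllegalArgumentException(
                        "Bucket size " + size + " is not a multiple of 256");
            }
        }
    }

    public static void main(String[] args) {
        validate(new int[]{16384, 32768, 49152}); // all multiples of 256: OK
        try {
            validate(new int[]{46000});           // 46000 % 256 == 176: rejected
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Note 46000, one of the sizes configured in the report below, is exactly the kind of value such a check would catch at startup.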

> BucketCache throw java.io.IOException: Invalid HFile block magic when 
> DATA_BLOCK_ENCODING set to DIFF
> -
>
> Key: HBASE-16993
> URL: https://issues.apache.org/jira/browse/HBASE-16993
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache, io
>Affects Versions: 1.1.3
> Environment: hbase version 1.1.3
>Reporter: liubangchen
>Assignee: liubangchen
> Fix For: 2.0.0
>
> Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, 
> HBASE-16993.master.001.patch, HBASE-16993.master.002.patch, 
> HBASE-16993.master.003.patch, HBASE-16993.master.004.patch, 
> HBASE-16993.master.005.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> hbase-site.xml setting
> <property>
>   <name>hbase.bucketcache.bucket.sizes</name>
>   <value>16384,32768,40960,46000,49152,51200,65536,131072,524288</value>
> </property>
> <property>
>   <name>hbase.bucketcache.size</name>
>   <value>16384</value>
> </property>
> <property>
>   <name>hbase.bucketcache.ioengine</name>
>   <value>offheap</value>
> </property>
> <property>
>   <name>hfile.block.cache.size</name>
>   <value>0.3</value>
> </property>
> <property>
>   <name>hfile.block.bloom.cacheonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hbase.rs.cacheblocksonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hfile.block.index.cacheonwrite</name>
>   <value>true</value>
> </property>
> n_splits = 200
> create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => 
> 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => 
> {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => 
> 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| 
> "user#{1000+i*(-1000)/n_splits}"}}
> load data
> bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> recordcount=2 -p insertorder=hashed -p insertstart=0 -p 
> clientbuffering=true -p durability=SKIP_WAL -threads 20 -s 
> run 
> bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> operationcount=2000 -p readallfields=true -p clientbuffering=true -p 
> requestdistribution=zipfian  -threads 10 -s
> log info
> 2016-11-02 20:20:20,261 ERROR 
> [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: 
> Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket 
> cache
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at 
> org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:273)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427)
> at 
> org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:217)
> at 
> 

[jira] [Commented] (HBASE-18390) Sleep too long when finding region location failed

2017-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091203#comment-16091203
 ] 

Hadoop QA commented on HBASE-18390:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m  
5s{color} | {color:red} hbase-server in master has 9 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
32m 31s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
46s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}130m  
2s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}187m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-18390 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877723/HBASE-18390.v03.patch 
|
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 82ab28711539 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 0c2915b4 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7691/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7691/artifact/patchprocess/whitespace-eol.txt
 |
|  

[jira] [Commented] (HBASE-16312) update jquery version

2017-07-18 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091184#comment-16091184
 ] 

Peter Somogyi commented on HBASE-16312:
---

[~busbey] could you take a look?

> update jquery version
> -
>
> Key: HBASE-16312
> URL: https://issues.apache.org/jira/browse/HBASE-16312
> Project: HBase
>  Issue Type: Improvement
>  Components: dependencies, UI
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Sean Busbey
>Assignee: Peter Somogyi
>Priority: Critical
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-16312.master.001.patch
>
>
> the jquery version we bundle for our web ui is EOM. update to latest jquery 
> 3.y.
> we can use the [jquery migrate 
> plugin|http://jquery.com/download/#jquery-migrate-plugin] to help update APIs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18375) The pool chunks from ChunkCreator are deallocated while in pool because there is no reference to them

2017-07-18 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091170#comment-16091170
 ] 

ramkrishna.s.vasudevan commented on HBASE-18375:


bq.3. When S is closed, C is returned to ChunkCreator, which in turn returns C 
to the pool, but in parrallel the GC is already freeing C's "unreachable" 
ByteBuffer.
and this
bq. However, while S is closed the chunks are released to ChunkCreator and the 
following code is invoked:
So you are saying that the ByteBuffer inside the 'Chunk' becomes null, so 
referencing the BB inside the chunk gives you an NPE — or is the Chunk itself 
null? Can you paste the stack trace here?
{code}
Chunk chunk = ChunkCreator.this.removeChunk(chunkId);
if (chunk != null) {
  if (chunk.isFromPool() && toAdd > 0) {
reclaimedChunks.add(chunk);
  }
  toAdd--;
}
{code}
So here, if the 'chunk' reference is available there is no problem, but you are 
seeing that sometimes the chunk reference itself is missing, so it never gets 
added to reclaimedChunks at all, and the next time nothing can be polled from 
reclaimedChunks? Also, if we always go through the strong chunk map whenever a 
pool exists, could the 'saveFromGC' step be avoided when using CellChunkMap with 
a pool already in place? Sorry for taking my time, as I just want to be clear on 
the problem. I may be missing something since you have already seen the issue, 
but I want to be sure we are on the same page.
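The fix under discussion — protecting pooled chunks from GC with a strong map — can be illustrated with the following minimal sketch. These are stand-in classes, not HBase's actual ChunkCreator; the point is only that a pool which remembers chunk IDs alone cannot keep the chunks alive, whereas a strong id-to-chunk map can.

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative chunk pool: the strong map holds a hard reference to each
// pooled chunk, so its backing buffer cannot be reclaimed by GC while the
// pool only tracks the chunk by id.
class ChunkPoolSketch {
    static final class Chunk {
        final int id;
        final byte[] data;  // stands in for the backing ByteBuffer
        Chunk(int id, int size) { this.id = id; this.data = new byte[size]; }
    }

    private final Map<Integer, Chunk> strongMap = new ConcurrentHashMap<>();
    private final Queue<Integer> reclaimed = new ConcurrentLinkedQueue<>();

    void returnToPool(Chunk c) {
        strongMap.put(c.id, c);  // strong reference protects the chunk from GC
        reclaimed.add(c.id);
    }

    Chunk take() {
        Integer id = reclaimed.poll();
        return id == null ? null : strongMap.remove(id);
    }
}
```

Without the strong map, the pool would hold only integers, and a chunk with no other reference would be collectible while "in" the pool — which is exactly the failure mode this issue describes.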


> The pool chunks from ChunkCreator are deallocated while in pool because there 
> is no reference to them
> -
>
> Key: HBASE-18375
> URL: https://issues.apache.org/jira/browse/HBASE-18375
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha-1
>Reporter: Anastasia Braginsky
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18375-V01.patch, HBASE-18375-V02.patch, 
> HBASE-18375-V03.patch
>
>
> Because MSLAB list of chunks was changed to list of chunk IDs, the chunks 
> returned back to pool can be deallocated by JVM because there is no reference 
> to them. The solution is to protect pool chunks from GC by the strong map of 
> ChunkCreator introduced by HBASE-18010. Will prepare the patch today.





[jira] [Commented] (HBASE-18332) Upgrade asciidoctor-maven-plugin

2017-07-18 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091181#comment-16091181
 ] 

Peter Somogyi commented on HBASE-18332:
---

Thank you for the reviews [~misty] and [~stack]!

> Upgrade asciidoctor-maven-plugin
> 
>
> Key: HBASE-18332
> URL: https://issues.apache.org/jira/browse/HBASE-18332
> Project: HBase
>  Issue Type: Improvement
>  Components: site
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18332.master.001.patch, 
> HBASE-18332.master.002.patch
>
>
> HBASE-18264 upgraded asciidoctor-maven-plugin and asciidoctorj-pdf but it 
> caused build failure for {{site}} goal due to a change in pdfmark generation.
> These plugins were rolled back in HBASE-18320.





[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF

2017-07-18 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091178#comment-16091178
 ] 

ramkrishna.s.vasudevan commented on HBASE-16993:


So since the initial idea of int + byte was to save per-entry overhead, we are 
going to keep it the same way? And for the actual problem here: because their 
blocks were not multiples of 256, they ended up with more wastage, as the 
calculation used the 4 + 1 byte way of computing the offset — so the suggestion 
is to adjust their block sizes (I mean the bucket cache block size config) 
accordingly?
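The "int + byte" packing mentioned above can be sketched as follows. This is an illustration of the space-saving idea, not the exact BucketCache entry layout: a 40-bit offset is split into its low 32 bits (an int) plus the next 8 bits (a byte), saving three bytes per cached entry compared to storing a full long.

```java
// Pack a 40-bit cache offset into 4 + 1 bytes instead of 8.
final class PackedOffset {
    final int base;   // low 32 bits of the offset
    final byte high;  // bits 32..39 of the offset

    PackedOffset(long offset) {
        this.base = (int) offset;
        this.high = (byte) (offset >>> 32);
    }

    long offset() {
        // Mask both parts to undo sign extension before recombining.
        return ((long) (high & 0xFF) << 32) | (base & 0xFFFFFFFFL);
    }
}
```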


> BucketCache throw java.io.IOException: Invalid HFile block magic when 
> DATA_BLOCK_ENCODING set to DIFF
> -
>
> Key: HBASE-16993
> URL: https://issues.apache.org/jira/browse/HBASE-16993
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache, io
>Affects Versions: 1.1.3
> Environment: hbase version 1.1.3
>Reporter: liubangchen
>Assignee: liubangchen
> Fix For: 2.0.0
>
> Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, 
> HBASE-16993.master.001.patch, HBASE-16993.master.002.patch, 
> HBASE-16993.master.003.patch, HBASE-16993.master.004.patch, 
> HBASE-16993.master.005.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> hbase-site.xml setting
> <property>
>   <name>hbase.bucketcache.bucket.sizes</name>
>   <value>16384,32768,40960,46000,49152,51200,65536,131072,524288</value>
> </property>
> <property>
>   <name>hbase.bucketcache.size</name>
>   <value>16384</value>
> </property>
> <property>
>   <name>hbase.bucketcache.ioengine</name>
>   <value>offheap</value>
> </property>
> <property>
>   <name>hfile.block.cache.size</name>
>   <value>0.3</value>
> </property>
> <property>
>   <name>hfile.block.bloom.cacheonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hbase.rs.cacheblocksonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hfile.block.index.cacheonwrite</name>
>   <value>true</value>
> </property>
>  n_splits = 200
> create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => 
> 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => 
> {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => 
> 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| 
> "user#{1000+i*(-1000)/n_splits}"}}
> load data
> bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> recordcount=2 -p insertorder=hashed -p insertstart=0 -p 
> clientbuffering=true -p durability=SKIP_WAL -threads 20 -s 
> run 
> bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p 
> columnfamily=family -p fieldcount=10 -p fieldlength=100 -p 
> operationcount=2000 -p readallfields=true -p clientbuffering=true -p 
> requestdistribution=zipfian  -threads 10 -s
> log info
> 2016-11-02 20:20:20,261 ERROR 
> [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: 
> Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket 
> cache
> java.io.IOException: Invalid HFile block magic: 
> \x00\x00\x00\x00\x00\x00\x00\x00
> at 
> org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:273)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121)
> at 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427)
> at 
> org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:217)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5369)
> at 
> 

[jira] [Commented] (HBASE-18401) Region Replica broken in branch-1

2017-07-18 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091199#comment-16091199
 ] 

huaxiang sun commented on HBASE-18401:
--

Will do, thanks [~te...@apache.org]

> Region Replica broken in branch-1
> -
>
> Key: HBASE-18401
> URL: https://issues.apache.org/jira/browse/HBASE-18401
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Critical
> Attachments: HBASE-18401-branch-1.2-v001.patch
>
>
> Read replicas are broken in branch-1: after a region split, we see the replica 
> region's content in hbase:meta, while the previous behavior was that replica 
> regions should not show up in info:regioninfo.
> {code}
> t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 column=info:regioninfo, 
> timestamp=1500340472406, value={ENCODED => d8faa669dde775c323f6e55fd5aa36e0, 
> NAME => 't1,r2111,1500340472229_0001.d8faa669dde7
>  e73ca. 75c323f6e55fd5aa36e0.', 
> STARTKEY => 'r2111', ENDKEY => 'r2', REPLICA_ID => 1} 
> 
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:seqnumDuringOpen, timestamp=1500340472379, 
> value=\x00\x00\x00\x00\x00\x00\x00\x02
>  
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:seqnumDuringOpen_0001, timestamp=1500340472406, 
> value=\x00\x00\x00\x00\x00\x00\x00\x02
> 
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 column=info:server, 
> timestamp=1500340472379, value=dhcp-172-16-1-203.pa.cloudera.com:59105
> 
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 column=info:server_0001, 
> timestamp=1500340472406, value=dhcp-172-16-1-203.pa.cloudera.com:59105
>
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:serverstartcode, timestamp=1500340472379, value=1500340443589 
> 
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:serverstartcode_0001, timestamp=1500340472406, 
> value=\x00\x00\x01]SBY\xC5
> {code}
> This was introduced by 
> https://github.com/apache/hbase/blame/branch-1-HBASE-18147/hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java#L1464
> It does not consider that case that regionInfo could come from a replica 
> region.





[jira] [Updated] (HBASE-18401) Region Replica broken in branch-1

2017-07-18 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-18401:
-
Attachment: HBASE-18401-branch-1.2-v002.patch

Attached the v2 patch, which removes some unused imports in TestHBaseFsck; this 
is to trigger the hbase-server unit tests.

> Region Replica broken in branch-1
> -
>
> Key: HBASE-18401
> URL: https://issues.apache.org/jira/browse/HBASE-18401
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Critical
> Attachments: HBASE-18401-branch-1.2-v001.patch, 
> HBASE-18401-branch-1.2-v002.patch
>
>
> Read replicas are broken in branch-1: after a region split, we see the replica 
> region's content in hbase:meta, while the previous behavior was that replica 
> regions should not show up in info:regioninfo.
> {code}
> t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 column=info:regioninfo, 
> timestamp=1500340472406, value={ENCODED => d8faa669dde775c323f6e55fd5aa36e0, 
> NAME => 't1,r2111,1500340472229_0001.d8faa669dde7
>  e73ca. 75c323f6e55fd5aa36e0.', 
> STARTKEY => 'r2111', ENDKEY => 'r2', REPLICA_ID => 1} 
> 
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:seqnumDuringOpen, timestamp=1500340472379, 
> value=\x00\x00\x00\x00\x00\x00\x00\x02
>  
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:seqnumDuringOpen_0001, timestamp=1500340472406, 
> value=\x00\x00\x00\x00\x00\x00\x00\x02
> 
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 column=info:server, 
> timestamp=1500340472379, value=dhcp-172-16-1-203.pa.cloudera.com:59105
> 
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 column=info:server_0001, 
> timestamp=1500340472406, value=dhcp-172-16-1-203.pa.cloudera.com:59105
>
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:serverstartcode, timestamp=1500340472379, value=1500340443589 
> 
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:serverstartcode_0001, timestamp=1500340472406, 
> value=\x00\x00\x01]SBY\xC5
> {code}
> This was introduced by 
> https://github.com/apache/hbase/blame/branch-1-HBASE-18147/hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java#L1464
> It does not consider that case that regionInfo could come from a replica 
> region.





[jira] [Commented] (HBASE-18390) Sleep too long when finding region location failed

2017-07-18 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091208#comment-16091208
 ] 

Phil Yang commented on HBASE-18390:
---

I think the findbugs warning is unrelated, and the whitespace can be fixed while 
committing.

bq. ConnectionUtils.addJitter is useless. Can we get rid of it?

Yes, we can remove it and its tests while committing.
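The retry pause the patch moves toward can be sketched as plain capped exponential backoff. This is an illustration in the spirit of dropping the fixed 10-second minimum wait (names and the doubling schedule are assumptions, not HBase's actual RETRY_BACKOFF table): the pause grows with the retry count and is capped, so a short operation timeout is never exceeded by a single mandatory sleep.

```java
// Capped exponential backoff: basePause doubles with each retry,
// bounded above by maxPause (e.g. derived from the operation timeout).
final class Backoff {
    static long pauseMillis(long basePause, int retries, long maxPause) {
        long pause = basePause * (1L << Math.min(retries, 30)); // avoid shift overflow
        return Math.min(pause, maxPause);
    }
}
```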

> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.2.7, 2.0.0-alpha-2, 1.1.12
>
> Attachments: HBASE-18390.v01.patch, HBASE-18390.v02.patch, 
> HBASE-18390.v03.patch
>
>
> If RegionServerCallable#prepare fails in getRegionLocation, the location in the 
> callable object is null, and we sleep before retrying. However, when the 
> location is null we sleep at least 10 seconds, so the request fails directly 
> if the operation timeout is less than 10 seconds. I think there is no need to 
> keep the MIN_WAIT_DEAD_SERVER logic; backoff sleeping logic is fine for most 
> cases.





[jira] [Updated] (HBASE-18367) Reduce ProcedureInfo usage

2017-07-18 Thread Balazs Meszaros (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balazs Meszaros updated HBASE-18367:

Attachment: HBASE-18367.003.patch

> Reduce ProcedureInfo usage
> --
>
> Key: HBASE-18367
> URL: https://issues.apache.org/jira/browse/HBASE-18367
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Reporter: Balazs Meszaros
>Assignee: Balazs Meszaros
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-18367.001.patch, HBASE-18367.002.patch, 
> HBASE-18367.003.patch
>
>
> If we want to replace ProcedureInfo objects with jsons (HBASE-18106) we have 
> to reduce ProcedureInfo usage. Currently it is used several places in the 
> code where it could be replaced with Procedure (e.g. ProcedureExecutor). We 
> should use ProcedureInfo only for the communication before removing it.





[jira] [Updated] (HBASE-18367) Reduce ProcedureInfo usage

2017-07-18 Thread Balazs Meszaros (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balazs Meszaros updated HBASE-18367:

Attachment: (was: HBASE-18367.003.patch)

> Reduce ProcedureInfo usage
> --
>
> Key: HBASE-18367
> URL: https://issues.apache.org/jira/browse/HBASE-18367
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Reporter: Balazs Meszaros
>Assignee: Balazs Meszaros
> Fix For: 2.0.0
>
> Attachments: HBASE-18367.001.patch, HBASE-18367.002.patch
>
>
> If we want to replace ProcedureInfo objects with jsons (HBASE-18106) we have 
> to reduce ProcedureInfo usage. Currently it is used several places in the 
> code where it could be replaced with Procedure (e.g. ProcedureExecutor). We 
> should use ProcedureInfo only for the communication before removing it.





[jira] [Updated] (HBASE-18367) Reduce ProcedureInfo usage

2017-07-18 Thread Balazs Meszaros (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balazs Meszaros updated HBASE-18367:

Fix Version/s: 3.0.0
   Status: Patch Available  (was: Open)

> Reduce ProcedureInfo usage
> --
>
> Key: HBASE-18367
> URL: https://issues.apache.org/jira/browse/HBASE-18367
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Reporter: Balazs Meszaros
>Assignee: Balazs Meszaros
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-18367.001.patch, HBASE-18367.002.patch, 
> HBASE-18367.003.patch
>
>
> If we want to replace ProcedureInfo objects with jsons (HBASE-18106) we have 
> to reduce ProcedureInfo usage. Currently it is used several places in the 
> code where it could be replaced with Procedure (e.g. ProcedureExecutor). We 
> should use ProcedureInfo only for the communication before removing it.





[jira] [Commented] (HBASE-18367) Reduce ProcedureInfo usage

2017-07-18 Thread Balazs Meszaros (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091461#comment-16091461
 ] 

Balazs Meszaros commented on HBASE-18367:
-

Thanks for the review [~stack]!

I have also uploaded my patch to RB: https://reviews.apache.org/r/60796/

I followed your advice. I created FailedProcedure because previously 
{{setFailureResultForNonce()}} created a ProcedureInfo object and set its fields 
according to the failure. It is not possible to create a Procedure object with 
those fields, because they are private in Procedure and there are no public 
setters for them. So we have to create a new class unless we already have a 
reference to a Procedure object.
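The idea can be sketched with minimal stand-in classes (assumed shapes, not HBase's real Procedure API): the base class keeps its state private with no public setters, so reporting a failure when no live Procedure instance exists requires a dedicated subclass that carries the failure state itself.

```java
// Stand-in base class: state is private and immutable from outside.
abstract class Procedure {
    private final long procId;
    protected Procedure(long procId) { this.procId = procId; }
    long getProcId() { return procId; }
    abstract boolean isFailed();
}

// A placeholder procedure representing a failed operation for which
// no real Procedure instance is available (e.g. a failed nonce lookup).
final class FailedProcedure extends Procedure {
    private final String message;
    FailedProcedure(long procId, String message) {
        super(procId);
        this.message = message;
    }
    @Override boolean isFailed() { return true; }
    String getMessage() { return message; }
}
```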

> Reduce ProcedureInfo usage
> --
>
> Key: HBASE-18367
> URL: https://issues.apache.org/jira/browse/HBASE-18367
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Reporter: Balazs Meszaros
>Assignee: Balazs Meszaros
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-18367.001.patch, HBASE-18367.002.patch, 
> HBASE-18367.003.patch
>
>
> If we want to replace ProcedureInfo objects with jsons (HBASE-18106) we have 
> to reduce ProcedureInfo usage. Currently it is used several places in the 
> code where it could be replaced with Procedure (e.g. ProcedureExecutor). We 
> should use ProcedureInfo only for the communication before removing it.





[jira] [Commented] (HBASE-18390) Sleep too long when finding region location failed

2017-07-18 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091554#comment-16091554
 ] 

Chia-Ping Tsai commented on HBASE-18390:


{noformat}
There is an interesting side effect: the client is informed immediately that 
the regionserver died, so immediately goes to .meta. As the recovery is not 
done, .meta. contains the same (dead) location, so the client fails again and 
comes back immediately to .meta. => We're hammering .meta. now. The easy fix is 
to add a ~10s sleep on the client. A possibly better fix from a mttr point of 
view would be to have the master sending messages to say that a server recovery 
is finished. I will go for the former first.
{noformat}
What do you think about this comment from HBASE-7590? Does the side effect come 
back after this patch is merged? If not, +1 from me.

> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.2.7, 2.0.0-alpha-2, 1.1.12
>
> Attachments: HBASE-18390.v01.patch, HBASE-18390.v02.patch, 
> HBASE-18390.v03.patch
>
>
> If RegionServerCallable#prepare fails in getRegionLocation, the location in the 
> callable object is null, and we sleep before retrying. However, when the 
> location is null we sleep at least 10 seconds, so the request fails directly 
> if the operation timeout is less than 10 seconds. I think there is no need to 
> keep the MIN_WAIT_DEAD_SERVER logic; backoff sleeping logic is fine for most 
> cases.





[jira] [Updated] (HBASE-18401) Region Replica shows up in meta table after split in branch-1

2017-07-18 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-18401:

Component/s: regionserver

> Region Replica shows up in meta table after split in branch-1
> -
>
> Key: HBASE-18401
> URL: https://issues.apache.org/jira/browse/HBASE-18401
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.3.1, 1.2.6
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.2.7
>
> Attachments: HBASE-18401-branch-1.2-v001.patch, 
> HBASE-18401-branch-1.2-v002.patch
>
>
> Read replicas are broken in branch-1: after a region split, we see the replica 
> region's content in hbase:meta, while the previous behavior was that replica 
> regions should not show up in info:regioninfo.
> {code}
> t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 column=info:regioninfo, 
> timestamp=1500340472406, value={ENCODED => d8faa669dde775c323f6e55fd5aa36e0, 
> NAME => 't1,r2111,1500340472229_0001.d8faa669dde7
>  e73ca. 75c323f6e55fd5aa36e0.', 
> STARTKEY => 'r2111', ENDKEY => 'r2', REPLICA_ID => 1} 
> 
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:seqnumDuringOpen, timestamp=1500340472379, 
> value=\x00\x00\x00\x00\x00\x00\x00\x02
>  
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:seqnumDuringOpen_0001, timestamp=1500340472406, 
> value=\x00\x00\x00\x00\x00\x00\x00\x02
> 
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 column=info:server, 
> timestamp=1500340472379, value=dhcp-172-16-1-203.pa.cloudera.com:59105
> 
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 column=info:server_0001, 
> timestamp=1500340472406, value=dhcp-172-16-1-203.pa.cloudera.com:59105
>
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:serverstartcode, timestamp=1500340472379, value=1500340443589 
> 
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:serverstartcode_0001, timestamp=1500340472406, 
> value=\x00\x00\x01]SBY\xC5
> {code}
> This was introduced by 
> https://github.com/apache/hbase/blame/branch-1-HBASE-18147/hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java#L1464
> It does not consider that case that regionInfo could come from a replica 
> region.





[jira] [Updated] (HBASE-18401) Region Replica shows up in meta table after split in branch-1

2017-07-18 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-18401:

Summary: Region Replica shows up in meta table after split in branch-1  
(was: Region Replica broken in branch-1)

> Region Replica shows up in meta table after split in branch-1
> -
>
> Key: HBASE-18401
> URL: https://issues.apache.org/jira/browse/HBASE-18401
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.3.1, 1.2.6
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.2.7
>
> Attachments: HBASE-18401-branch-1.2-v001.patch, 
> HBASE-18401-branch-1.2-v002.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18401) Region Replica shows up in meta table after split in branch-1

2017-07-18 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-18401:

Fix Version/s: 1.2.7
   1.3.2
   1.4.0

> Region Replica shows up in meta table after split in branch-1
> -
>
> Key: HBASE-18401
> URL: https://issues.apache.org/jira/browse/HBASE-18401
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.3.1, 1.2.6
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.2.7
>
> Attachments: HBASE-18401-branch-1.2-v001.patch, 
> HBASE-18401-branch-1.2-v002.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-18403) [Shell]Truncate permission required

2017-07-18 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob reassigned HBASE-18403:
-

Assignee: Yun Zhao

[~Yun Zhao] - I've added you to our contributor group and assigned the issue to 
you. In the future you will be able to self-assign issues you choose to work on.

It looks like you're already familiar with our patch submission conventions; 
automated QA should come by in a few hours to test your patch as well.

Glad to have you working with us!

> [Shell]Truncate permission required
> ---
>
> Key: HBASE-18403
> URL: https://issues.apache.org/jira/browse/HBASE-18403
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Yun Zhao
>Assignee: Yun Zhao
>Priority: Trivial
> Attachments: HBASE-18403.patch
>
>
> When a user has only the Create permission and executes truncate, the table 
> will be deleted but not re-created



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18367) Reduce ProcedureInfo usage

2017-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091594#comment-16091594
 ] 

Hadoop QA commented on HBASE-18367:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 25 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
42s{color} | {color:red} hbase-server in master has 9 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m 32s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
1s{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 57s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.hbase.client.TestReplicasClient |
|   | org.apache.hadoop.hbase.client.TestScanWithoutFetchingData |
|   | org.apache.hadoop.hbase.coprocessor.TestRegionObserverInterface |
|   | org.apache.hadoop.hbase.mapreduce.TestWALPlayer |
|   | org.apache.hadoop.hbase.client.TestSnapshotCloneIndependence |
|   | org.apache.hadoop.hbase.mapreduce.TestHRegionPartitioner |
|   | org.apache.hadoop.hbase.mapreduce.TestTableInputFormat |
|   | org.apache.hadoop.hbase.client.TestAsyncTableScanAll |
|   | org.apache.hadoop.hbase.client.TestAsyncReplicationAdminApi |
|   | org.apache.hadoop.hbase.client.TestFromClientSide |
|   | org.apache.hadoop.hbase.client.TestMultipleTimestamps |
|   | org.apache.hadoop.hbase.snapshot.TestSnapshotClientRetries |
|   | org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook |
|   | 

[jira] [Updated] (HBASE-12349) Add Maven build support module for a custom version of error-prone

2017-07-18 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-12349:
--
Status: Open  (was: Patch Available)

> Add Maven build support module for a custom version of error-prone
> --
>
> Key: HBASE-12349
> URL: https://issues.apache.org/jira/browse/HBASE-12349
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Mike Drob
> Fix For: 2.0.0
>
> Attachments: HBASE-12349.patch, HBASE-12349.v2.patch, 
> HBASE-12349.v3.patch, HBASE-12349.v4.patch
>
>
> Add a new Maven build support module that builds and publishes a custom 
> error-prone artifact for use by the rest of the build.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-12349) Add Maven build support module for a custom version of error-prone

2017-07-18 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-12349:
--
Status: Patch Available  (was: Open)

> Add Maven build support module for a custom version of error-prone
> --
>
> Key: HBASE-12349
> URL: https://issues.apache.org/jira/browse/HBASE-12349
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Mike Drob
> Fix For: 2.0.0
>
> Attachments: HBASE-12349.patch, HBASE-12349.v2.patch, 
> HBASE-12349.v3.patch, HBASE-12349.v4.patch
>
>
> Add a new Maven build support module that builds and publishes a custom 
> error-prone artifact for use by the rest of the build.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16488) Starting namespace and quota services in master startup asynchronously

2017-07-18 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091568#comment-16091568
 ] 

Stephen Yuan Jiang commented on HBASE-16488:


The V10 patch in branch-1 is approved by [~enis].

Most tests passed in pre-commit. For the failed UTs, I checked the source code 
and don't think they are related to this change. I re-ran those tests locally, 
and all except one passed.

The only test that fails consistently on my local machine is 
{{org.apache.hadoop.hbase.regionserver.TestRSKilledWhenInitializing.testRSTerminationAfterRegisteringToMasterBeforeCreatingEphemeralNode}}
 - I spent some time debugging it and don't think it is related to this 
change. The test kills one RS and asserts that the server manager thinks this 
RS is not online. Without any change, the test passes consistently on my local 
machine. I added some logging to the test (just some LOG.info statements 
inside the test, no other changes) to see what is going on, and it would then 
fail consistently: the server manager still thinks the RS is online. If I add 
some waiting before the assert, the test passes, with about a 600ms wait on my 
local machine. This is with only log info messages in the test and no real 
change. There seems to be a delay between "the mini cluster's live-server view 
thinks the RS is dead" and "the master's server manager removes the RS from 
the online server list". With the patch, the same holds: with about a 600ms 
delay (nothing to do with namespace), the test passes. I think this is a test 
issue; if it consistently reproduces in pre-commit, I will fix the test in a 
separate JIRA.
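The race described above (the assert fires before the master's server manager has removed the dead RS from its online list) is usually fixed by polling for the condition with a timeout instead of asserting immediately. Below is a hedged, self-contained sketch of such a wait-for-condition helper; it is not HBase test code, and the name is hypothetical.

```java
import java.util.function.BooleanSupplier;

// Sketch of a poll-with-timeout helper: re-check the condition periodically
// until it becomes true or the deadline passes, instead of asserting once.
public class WaitUtil {
    static boolean waitFor(long timeoutMs, long intervalMs, BooleanSupplier cond) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (cond.getAsBoolean()) {
                return true;          // condition became true before the deadline
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                break;                // stop waiting if interrupted
            }
        }
        return cond.getAsBoolean();   // one last check after the deadline
    }
}
```

A test would then call something like `waitFor(1000, 50, () -> !serverManager.isServerOnline(rs))` rather than asserting the offline state directly.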

> Starting namespace and quota services in master startup asynchronously
> --
>
> Key: HBASE-16488
> URL: https://issues.apache.org/jira/browse/HBASE-16488
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 2.0.0, 1.3.0, 1.0.3, 1.4.0, 1.1.5, 1.2.2
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
> Attachments: HBASE-16488.v10-branch-1.patch, 
> HBASE-16488.v1-branch-1.patch, HBASE-16488.v1-master.patch, 
> HBASE-16488.v2-branch-1.patch, HBASE-16488.v2-branch-1.patch, 
> HBASE-16488.v3-branch-1.patch, HBASE-16488.v3-branch-1.patch, 
> HBASE-16488.v4-branch-1.patch, HBASE-16488.v5-branch-1.patch, 
> HBASE-16488.v6-branch-1.patch, HBASE-16488.v7-branch-1.patch, 
> HBASE-16488.v8-branch-1.patch, HBASE-16488.v9-branch-1.patch
>
>
> From time to time, during internal IT tests and from customers, we often see 
> master initialization fail because the namespace table region takes a long 
> time to assign (e.g. sometimes split log takes a long time or hangs; sometimes 
> an RS is temporarily not available; sometimes due to some unknown assignment 
> issue).  In the past, there were some proposals to improve this situation, 
> e.g. HBASE-13556 / HBASE-14190 (Assign system tables ahead of user region 
> assignment) or HBASE-13557 (Special WAL handling for system tables) or 
> HBASE-14623 (Implement dedicated WAL for system tables).  
> This JIRA proposes another way to solve this master initialization failure: 
> the namespace service is only used by a handful of operations (e.g. create 
> table / namespace DDL / get namespace API / some RS group DDL).  Only the 
> quota manager depends on it, and quota management is off by default.  
> Therefore, the namespace service is not really needed for the master to be 
> functional, so we could start the namespace service asynchronously without 
> blocking master startup.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18377) Error handling for FileNotFoundException should consider RemoteException in openReader()

2017-07-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18377:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0-alpha-2
   1.5.0
   1.4.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the review, Ashish

> Error handling for FileNotFoundException should consider RemoteException in 
> openReader()
> 
>
> Key: HBASE-18377
> URL: https://issues.apache.org/jira/browse/HBASE-18377
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 3.0.0, 1.4.0, 1.5.0, 2.0.0-alpha-2
>
> Attachments: 18377.branch-1.3.txt, 18377.v1.txt
>
>
> In region server log, I observed the following:
> {code}
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File 
> does not exist: 
> /apps/hbase/data/WALs/lx.p.com,16020,1497300923131/497300923131. 
> default.1497302873178
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1860)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1831)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1744)
> ...
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:326)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:162)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:782)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:293)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:267)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:255)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:414)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:69)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:605)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:364)
> {code}
> We have code in ReplicationSource#openReader() which is supposed to handle 
> FileNotFoundException, but the case of a RemoteException wrapping a 
> FileNotFoundException was missed.
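The fix amounts to treating an exception whose cause chain contains a FileNotFoundException the same as a direct FileNotFoundException. The real patch works with Hadoop's org.apache.hadoop.ipc.RemoteException (which offers unwrapRemoteException); the sketch below is a self-contained, simplified stand-in using only the JDK cause chain.

```java
import java.io.FileNotFoundException;
import java.io.IOException;

// Simplified sketch of the fix: a FileNotFoundException may arrive wrapped
// in another IOException (in HBase's case, Hadoop's RemoteException), so the
// handler must inspect the cause chain, not just the top-level type.
public class FnfeCheck {
    static boolean isFileNotFound(IOException e) {
        Throwable t = e;
        while (t != null) {
            if (t instanceof FileNotFoundException) {
                return true;  // direct or wrapped FileNotFoundException
            }
            t = t.getCause();
        }
        return false;
    }
}
```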



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18390) Sleep too long when finding region location failed

2017-07-18 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091581#comment-16091581
 ] 

Duo Zhang commented on HBASE-18390:
---

If it is 10s after the first notification message I think it is OK, but it 
seems it is not. And I think an exponential back-off is enough for most cases?
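A minimal capped exponential back-off, as suggested above, can be sketched as follows. The constants and the method name are illustrative assumptions, not HBase's actual pause logic (which lives in ConnectionUtils); the point is simply that there is no fixed 10-second MIN_WAIT_DEAD_SERVER floor.

```java
// Capped exponential back-off sketch: the pause doubles with each retry
// up to a cap, starting from a small base instead of a 10s minimum.
public class Backoff {
    static long pauseMs(long basePauseMs, int tries, long capMs) {
        long pause = basePauseMs * (1L << Math.min(tries, 30));  // avoid overflow
        return Math.min(pause, capMs);
    }
}
```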

> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.2.7, 2.0.0-alpha-2, 1.1.12
>
> Attachments: HBASE-18390.v01.patch, HBASE-18390.v02.patch, 
> HBASE-18390.v03.patch
>
>
> If RegionServerCallable#prepare failed in getRegionLocation, the location 
> in this callable object is null, and before we retry we will sleep. However, 
> when the location is null we will sleep at least 10 seconds, and the request 
> will fail directly if the operation timeout is less than 10 seconds. I think 
> there is no need to keep the MIN_WAIT_DEAD_SERVER logic; using backoff 
> sleeping logic is ok for most cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18392) Add default value of --movetimeout to rolling-restart.sh

2017-07-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18392:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Add default value of --movetimeout to rolling-restart.sh
> --
>
> Key: HBASE-18392
> URL: https://issues.apache.org/jira/browse/HBASE-18392
> Project: HBase
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
> Fix For: 3.0.0
>
> Attachments: HBASE-18392-master-001.patch
>
>
> We are calling graceful_stop.sh in rolling-restart.sh with following line 
> {code}
> "$bin"/graceful_stop.sh --config ${HBASE_CONF_DIR} --restart --reload -nob 
> --maxthreads  \
> ${RR_MAXTHREADS} ${RR_NOACK} --movetimeout ${RR_MOVE_TIMEOUT} 
> $hostname
> {code} 
> and if we do not specify the --movetimeout option while calling 
> rolling-restart.sh with --graceful, the script will not work. My proposal is 
> to add a default value for this parameter the same way we do in graceful_stop.sh



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18403) [Shell]Truncate permission required

2017-07-18 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091585#comment-16091585
 ] 

Mike Drob commented on HBASE-18403:
---

Good catch! Can you include a test for this as well?

> [Shell]Truncate permission required
> ---
>
> Key: HBASE-18403
> URL: https://issues.apache.org/jira/browse/HBASE-18403
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Yun Zhao
>Priority: Trivial
> Attachments: HBASE-18403.patch
>
>
> When a user has only the Create permission and executes truncate, the table 
> will be deleted but not re-created



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18403) [Shell]Truncate permission required

2017-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091609#comment-16091609
 ] 

Hadoop QA commented on HBASE-18403:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue}  0m  
0s{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue}  0m  
0s{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
35m 55s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 27s{color} 
| {color:red} hbase-shell in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 7s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestShell |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-18403 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877802/HBASE-18403.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  rubocop  ruby_lint  |
| uname | Linux befbb715c991 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 0c2915b4 |
| Default Java | 1.8.0_131 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7698/artifact/patchprocess/patch-unit-hbase-shell.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7698/testReport/ |
| modules | C: hbase-shell U: hbase-shell |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7698/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> [Shell]Truncate permission required
> ---
>
> Key: HBASE-18403
> URL: https://issues.apache.org/jira/browse/HBASE-18403
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Yun Zhao
>Assignee: Yun Zhao
>Priority: Trivial
> Attachments: HBASE-18403.patch
>
>
> When a user has only the Create permission and executes truncate, the table 
> will be deleted but not re-created



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-12349) Add Maven build support module for a custom version of error-prone

2017-07-18 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-12349:
--
Attachment: HBASE-12349.v4.patch

v4: Fix findbugs declaration for new modules.

> Add Maven build support module for a custom version of error-prone
> --
>
> Key: HBASE-12349
> URL: https://issues.apache.org/jira/browse/HBASE-12349
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Mike Drob
> Fix For: 2.0.0
>
> Attachments: HBASE-12349.patch, HBASE-12349.v2.patch, 
> HBASE-12349.v3.patch, HBASE-12349.v4.patch
>
>
> Add a new Maven build support module that builds and publishes a custom 
> error-prone artifact for use by the rest of the build.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18375) The pool chunks from ChunkCreator are deallocated while in pool because there is no reference to them

2017-07-18 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091367#comment-16091367
 ] 

ramkrishna.s.vasudevan commented on HBASE-18375:


I think I get it now. Thanks for the explanation.
So here in removeChunk
{code}
  Chunk removeChunk(int chunkId) {
    WeakReference<Chunk> weak = this.weakChunkIdMap.remove(chunkId);
    Chunk strong = this.strongChunkIdMap.remove(chunkId);
    if (weak != null) {
      return weak.get();
    }
    return strong;
  }
{code}
the chunk that we return from weak.get() at this point of time has a 
reference. But by the time we are actually putting it into reclaimedChunks, 
i.e.
{code}
Chunk chunk = ChunkCreator.this.removeChunk(chunkId);
if (chunk != null) {
  if (chunk.isFromPool() && toAdd > 0) {
    reclaimedChunks.add(chunk);
  }
{code}
the chunk can become null. So when we poll from reclaimedChunks we get a null 
ref, we try to use that null ref and add it to the weakRefMap again, and so we 
are working with a null reference throughout, as per
bq. And yes, it was still possible to poll the chunk from reclaimedChunks, but 
it was deallocated while being in weakMap.
So in the case of CellChunkMap at least, since we are sure to have a strong 
ref, is it better to always return the strong ref in the removeChunk() code 
rather than the weak ref (though it is available)?
Also, the change in CompactionPipeline of removing and adding the last 
element, now being done at the beginning - is that intentional, to avoid the 
NPE bug?
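The failure mode under discussion is standard WeakReference behavior: an object reachable only through a WeakReference is eligible for GC, so weak.get() can later return null; a parallel strong reference (as in the strongChunkIdMap) pins it. The sketch below is illustrative only and is not the ChunkCreator code; the names are hypothetical.

```java
import java.lang.ref.WeakReference;

// Sketch of why pooled chunks vanished: a chunk tracked only via a
// WeakReference can be collected, after which weak.get() returns null.
// Holding a strong reference alongside (the strong map) prevents that.
public class WeakRefDemo {
    static Object strong;  // stands in for ChunkCreator's strong-map entry

    static WeakReference<byte[]> track(byte[] chunk, boolean keepStrong) {
        strong = keepStrong ? chunk : null;  // pin the chunk, or don't
        return new WeakReference<>(chunk);
    }
}
```

With keepStrong=false, a GC cycle may clear the reference at any point; with keepStrong=true, weak.get() is guaranteed non-null.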

> The pool chunks from ChunkCreator are deallocated while in pool because there 
> is no reference to them
> -
>
> Key: HBASE-18375
> URL: https://issues.apache.org/jira/browse/HBASE-18375
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha-1
>Reporter: Anastasia Braginsky
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18375-V01.patch, HBASE-18375-V02.patch, 
> HBASE-18375-V03.patch
>
>
> Because MSLAB list of chunks was changed to list of chunk IDs, the chunks 
> returned back to pool can be deallocated by JVM because there is no reference 
> to them. The solution is to protect pool chunks from GC by the strong map of 
> ChunkCreator introduced by HBASE-18010. Will prepare the patch today.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18402) Thrift2 should support DeleteFamily and DeleteFamilyVersion type

2017-07-18 Thread Zheng Hu (JIRA)
Zheng Hu created HBASE-18402:


 Summary: Thrift2 should support  DeleteFamily and 
DeleteFamilyVersion type
 Key: HBASE-18402
 URL: https://issues.apache.org/jira/browse/HBASE-18402
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Affects Versions: 2.0.0-alpha-1
Reporter: Zheng Hu
Assignee: Zheng Hu


Currently, our thrift2 only supports two delete types. Actually, there are 
four delete types, and we should support the other two: DeleteFamily and 
DeleteFamilyVersion.

{code}
/**
 * Specify type of delete:
 *  - DELETE_COLUMN means exactly one version will be removed,
 *  - DELETE_COLUMNS means previous versions will also be removed.
 */
enum TDeleteType {
  DELETE_COLUMN = 0,
  DELETE_COLUMNS = 1
}
{code} 
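The proposed extension would add the two missing types to the enum above. The real change lands in the Thrift IDL; the sketch below mirrors it as a Java enum, and the new names and ordinal values are assumptions following the existing DELETE_COLUMN/DELETE_COLUMNS convention, not the committed API.

```java
// Hedged sketch of the proposed TDeleteType extension (names/values assumed).
public enum TDeleteTypeSketch {
    DELETE_COLUMN(0),          // exactly one version will be removed
    DELETE_COLUMNS(1),         // previous versions will also be removed
    DELETE_FAMILY(2),          // all columns of a family removed
    DELETE_FAMILY_VERSION(3);  // all columns of a family at one timestamp

    public final int value;

    TDeleteTypeSketch(int value) {
        this.value = value;
    }
}
```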



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-17738) BucketCache startup is slow

2017-07-18 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-17738:
---
Attachment: HBASE-17738_8.patch

Updated patch implementing the suggestion, with a test case.

> BucketCache startup is slow
> ---
>
> Key: HBASE-17738
> URL: https://issues.apache.org/jira/browse/HBASE-17738
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-17738_2.patch, HBASE-17738_2.patch, 
> HBASE-17738_3.patch, HBASE-17738_4.patch, HBASE-17738_5_withoutUnsafe.patch, 
> HBASE-17738_6_withoutUnsafe.patch, HBASE-17738_8.patch, HBASE-17738.patch
>
>
> If you set the bucketcache size to 64G, say, and then start hbase, it takes a long 
> time. Can we do the allocations in parallel rather than inline with the server 
> startup?
> Related: prefetching on a bucketcache is slow. Speed it up.
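The parallel-allocation suggestion in the description could look roughly like the following JDK-only sketch (this is not the actual BucketCache code; `ParallelAlloc` and its parameters are hypothetical): slice the total cache size into fixed-size buffers and allocate them concurrently instead of serially at startup.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch: allocate a large cache as many buffers in parallel.
public class ParallelAlloc {
    static List<ByteBuffer> allocate(long totalBytes, int bufferSize, int threads)
            throws InterruptedException, ExecutionException {
        int count = (int) (totalBytes / bufferSize);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            // Submit one allocation task per buffer; tasks run concurrently.
            List<Future<ByteBuffer>> futures = new ArrayList<>(count);
            for (int i = 0; i < count; i++) {
                futures.add(pool.submit(() -> ByteBuffer.allocateDirect(bufferSize)));
            }
            // Collect the results; get() blocks until each allocation finishes.
            List<ByteBuffer> buffers = new ArrayList<>(count);
            for (Future<ByteBuffer> f : futures) {
                buffers.add(f.get());
            }
            return buffers;
        } finally {
            pool.shutdown();
        }
    }
}
```

Whether this actually speeds things up depends on how much of the cost is zeroing and touching pages versus JVM bookkeeping, which is why the patch discussion above measures it with a test case.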





[jira] [Updated] (HBASE-17738) BucketCache startup is slow

2017-07-18 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-17738:
---
Status: Patch Available  (was: Open)

> BucketCache startup is slow
> ---
>
> Key: HBASE-17738
> URL: https://issues.apache.org/jira/browse/HBASE-17738
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-17738_2.patch, HBASE-17738_2.patch, 
> HBASE-17738_3.patch, HBASE-17738_4.patch, HBASE-17738_5_withoutUnsafe.patch, 
> HBASE-17738_6_withoutUnsafe.patch, HBASE-17738_8.patch, HBASE-17738.patch
>
>
> If you set the bucketcache size to 64G, say, and then start hbase, it takes a long 
> time. Can we do the allocations in parallel rather than inline with the server 
> startup?
> Related: prefetching on a bucketcache is slow. Speed it up.





[jira] [Updated] (HBASE-17738) BucketCache startup is slow

2017-07-18 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-17738:
---
Status: Open  (was: Patch Available)

> BucketCache startup is slow
> ---
>
> Key: HBASE-17738
> URL: https://issues.apache.org/jira/browse/HBASE-17738
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-17738_2.patch, HBASE-17738_2.patch, 
> HBASE-17738_3.patch, HBASE-17738_4.patch, HBASE-17738_5_withoutUnsafe.patch, 
> HBASE-17738_6_withoutUnsafe.patch, HBASE-17738_8.patch, HBASE-17738.patch
>
>
> If you set the bucketcache size to 64G, say, and then start hbase, it takes a long 
> time. Can we do the allocations in parallel rather than inline with the server 
> startup?
> Related: prefetching on a bucketcache is slow. Speed it up.





[jira] [Updated] (HBASE-16788) Race in compacted file deletion between HStore close() and closeAndArchiveCompactedFiles()

2017-07-18 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-16788:
---
Issue Type: Sub-task  (was: Bug)
Parent: HBASE-18397

> Race in compacted file deletion between HStore close() and 
> closeAndArchiveCompactedFiles()
> --
>
> Key: HBASE-16788
> URL: https://issues.apache.org/jira/browse/HBASE-16788
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 1.3.0
>Reporter: Gary Helmling
>Assignee: Gary Helmling
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: 16788-suggest.v2, HBASE-16788.001.patch, 
> HBASE-16788.002.patch, HBASE-16788_1.patch, HBASE-16788-addendum.patch
>
>
> HBASE-13082 changed the way that compacted files are archived from being done 
> inline on compaction completion to an async cleanup by the 
> CompactedHFilesDischarger chore.  It looks like the changes to HStore to 
> support this introduced a race condition in the compacted HFile archiving.
> In the following sequence, we can wind up with two separate threads trying to 
> archive the same HFiles, causing a regionserver abort:
> # compaction completes normally and the compacted files are added to 
> {{compactedfiles}} in HStore's DefaultStoreFileManager
> # *threadA*: CompactedHFilesDischargeHandler runs in a RS executor service, 
> calling closeAndArchiveCompactedFiles()
> ## obtains HStore readlock
> ## gets a copy of compactedfiles
> ## releases readlock
> # *threadB*: calls HStore.close() as part of region close
> ## obtains HStore writelock
> ## calls DefaultStoreFileManager.clearCompactedfiles(), getting a copy of 
> same compactedfiles
> # *threadA*: calls HStore.removeCompactedfiles(compactedfiles)
> ## archives files in {{compactedfiles}} in HRegionFileSystem.removeStoreFiles()
> ## call HStore.clearCompactedFiles()
> ## waits on write lock
> # *threadB*: continues with close()
> ## calls removeCompactedfiles(compactedfiles)
> ## calls HRegionFIleSystem.removeStoreFiles() -> 
> HFileArchiver.archiveStoreFiles()
> ## receives FileNotFoundException because the files have already been 
> archived by threadA
> ## throws IOException
> # RS aborts
> I think the combination of fetching the compactedfiles list and removing the 
> files needs to be covered by locking.  Options I see are:
> * Modify HStore.closeAndArchiveCompactedFiles(): use writelock instead of 
> readlock and move the call to removeCompactedfiles() inside the lock.  This 
> means the read operations will be blocked while the files are being archived, 
> which is bad.
> * Synchronize closeAndArchiveCompactedFiles() and modify close() to call it 
> instead of calling removeCompactedfiles() directly
> * Add a separate lock for compacted files removal and use in 
> closeAndArchiveCompactedFiles() and close()
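The third option listed above (a dedicated lock serializing compacted-file removal between close() and the discharger) can be sketched with plain JDK locking. The names below are hypothetical, not the HBase code: the point is that fetching the compacted-files list and clearing it become one atomic step, so two threads can never both try to archive the same files.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: a dedicated archiveLock so only one thread can snapshot-and-clear
// the compacted-files list, eliminating the double-archive race.
public class StoreSketch {
    private final ReentrantLock archiveLock = new ReentrantLock();
    private List<String> compactedFiles = new ArrayList<>();

    void addCompacted(String file) {
        archiveLock.lock();
        try { compactedFiles.add(file); } finally { archiveLock.unlock(); }
    }

    // Called by both close() and closeAndArchiveCompactedFiles().
    // Because snapshot-and-clear is atomic under archiveLock, each file
    // is handed out for archiving exactly once.
    List<String> drainCompactedFiles() {
        archiveLock.lock();
        try {
            List<String> toArchive = compactedFiles;
            compactedFiles = new ArrayList<>();
            return toArchive;
        } finally {
            archiveLock.unlock();
        }
    }
}
```

With this shape, the second thread to call drainCompactedFiles() simply gets an empty list instead of a FileNotFoundException from re-archiving, and read operations need not block on the store's main read/write lock while files are archived.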





[jira] [Commented] (HBASE-12349) Add Maven build support module for a custom version of error-prone

2017-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091392#comment-16091392
 ] 

Hadoop QA commented on HBASE-12349:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  6m 
50s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hbase-resource-bundle hbase-testing-util hbase-spark-it hbase-assembly 
hbase-shaded hbase-archetypes . {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
58s{color} | {color:red} hbase-protocol-shaded in master has 27 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
39s{color} | {color:red} hbase-server in master has 9 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} hbase-rest in master has 3 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
58s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  7m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
43s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
33m 41s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hbase-build-support hbase-build-configuration hbase-resource-bundle 
hbase-testing-util hbase-spark-it hbase-assembly hbase-shaded hbase-archetypes 
. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m  
8s{color} | {color:red} hbase-error-prone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
10s{color} | {color:green} hbase-build-support in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
9s{color} | {color:green} hbase-error-prone in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
7s{color} | {color:green} 

[jira] [Commented] (HBASE-18251) Remove unnecessary traversing to the first and last keys in the CellSet

2017-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091396#comment-16091396
 ] 

Hadoop QA commented on HBASE-18251:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
18s{color} | {color:red} hbase-server in master has 9 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
35m  5s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m  4s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestCellFlatSet |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-18251 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/1280/HBASE-18251.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux ce6a96e29dbe 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 0c2915b4 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7694/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7694/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7694/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7694/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was 

[jira] [Commented] (HBASE-17738) BucketCache startup is slow

2017-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091404#comment-16091404
 ] 

Hadoop QA commented on HBASE-17738:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 52s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
41s{color} | {color:red} hbase-common generated 1 new + 0 unchanged - 0 fixed = 
1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
11s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-common |
|  |  Should org.apache.hadoop.hbase.util.ByteBufferArray$BufferCreatorCallable 
be a _static_ inner class?  At ByteBufferArray.java:inner class?  At 
ByteBufferArray.java:[lines 116-136] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-17738 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/1286/HBASE-17738_8.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 8e221fd5601f 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 0c2915b4 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7695/artifact/patchprocess/new-findbugs-hbase-common.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7695/testReport/ |
| modules | C: hbase-common U: hbase-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7695/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> BucketCache startup is slow
> ---

[jira] [Commented] (HBASE-18392) Add default value of ----movetimeout to rolling-restart.sh

2017-07-18 Thread Samir Ahmic (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091402#comment-16091402
 ] 

Samir Ahmic commented on HBASE-18392:
-

[~tedyu] mind committing this? I need it in place for 
https://issues.apache.org/jira/browse/HBASE-7386.

> Add default value of movetimeout to rolling-restart.sh
> --
>
> Key: HBASE-18392
> URL: https://issues.apache.org/jira/browse/HBASE-18392
> Project: HBase
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
> Fix For: 3.0.0
>
> Attachments: HBASE-18392-master-001.patch
>
>
> We are calling graceful_stop.sh in rolling-restart.sh with following line 
> {code}
> "$bin"/graceful_stop.sh --config ${HBASE_CONF_DIR} --restart --reload -nob 
> --maxthreads  \
> ${RR_MAXTHREADS} ${RR_NOACK} --movetimeout ${RR_MOVE_TIMEOUT} 
> $hostname
> {code} 
> and if we do not specify the --movetimeout option when calling rolling-restart.sh 
> --graceful, the script will not work. My proposal is to add a default value for 
> this parameter, the same way we do in graceful_stop.sh.





[jira] [Commented] (HBASE-18186) Frequent FileNotFoundExceptions in region server logs

2017-07-18 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091401#comment-16091401
 ] 

ramkrishna.s.vasudevan commented on HBASE-18186:


[~mantonov]
I was referring to exactly the same set of steps in my previous comment here, but 
later found that when we actually create a scanner for a user scan/get
{code}
@Override
  public KeyValueScanner getScanner(Scan scan,
      final NavigableSet<byte[]> targetCols, long readPt) throws IOException {
lock.readLock().lock();
try {
  KeyValueScanner scanner = null;
  if (this.getCoprocessorHost() != null) {
scanner = this.getCoprocessorHost().preStoreScannerOpen(this, scan, 
targetCols);
  }
  scanner = createScanner(scan, targetCols, readPt, scanner);
  return scanner;
} finally {
  lock.readLock().unlock();
}
  }
{code}
We hold this entire read lock for a longer duration, and hence the above problem 
does not happen. In fact, when I posted the last comment some time back, I tried 
debugging the exact case and it just did not happen. Maybe there is a way where 
we are directly calling
{code}
public List<KeyValueScanner> getScanners(boolean cacheBlocks, boolean isGet,
  boolean usePread, boolean isCompaction, ScanQueryMatcher matcher, byte[] 
startRow,
  byte[] stopRow, long readPt) throws IOException {
{code}
And then we may have that problem, like I recently saw in the case of 
HBASE-18221 (however, that feature is not in branch-1.3).

> Frequent FileNotFoundExceptions in region server logs
> -
>
> Key: HBASE-18186
> URL: https://issues.apache.org/jira/browse/HBASE-18186
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction, Scanners
>Affects Versions: 1.3.1
>Reporter: Ashu Pachauri
>
> We see frequent FileNotFoundException in regionserver logs on multiple code 
> paths trying to reference non existing store files. I know that there have 
> been multiple bugs in store file accounting of compacted store files. 
> Examples include: HBASE-16964 , HBASE-16754 and HBASE-16788.
> Observations:  
> 1. The issue mentioned here also seems to bear a similar flavor, because we 
> are not seeing rampant dataloss given the frequency of these exceptions in 
> the logs. So, it's more likely an accounting issue, but I could be wrong. 
> 2. The frequency with which this happens on a scan-heavy workload is at least 
> one order of magnitude higher than on a mixed workload.
> Stack traces:
> {code}
> WARN backup.HFileArchiver: Failed to archive class 
> org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, 
> file:hdfs: because it does not exist! 
> Skipping and continuing on.
> java.io.FileNotFoundException: File/Directory // 
> does not exist.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setTimes(FSDirAttrOp.java:121)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:1910)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:1223)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:915)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)
>   at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
>   at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:3115)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$30.doCall(DistributedFileSystem.java:1520)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$30.doCall(DistributedFileSystem.java:1516)
>   at 
> 

[jira] [Updated] (HBASE-18367) Reduce ProcedureInfo usage

2017-07-18 Thread Balazs Meszaros (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balazs Meszaros updated HBASE-18367:

Attachment: (was: HBASE-18367.003.patch)

> Reduce ProcedureInfo usage
> --
>
> Key: HBASE-18367
> URL: https://issues.apache.org/jira/browse/HBASE-18367
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Reporter: Balazs Meszaros
>Assignee: Balazs Meszaros
> Fix For: 2.0.0
>
> Attachments: HBASE-18367.001.patch, HBASE-18367.002.patch
>
>
> If we want to replace ProcedureInfo objects with jsons (HBASE-18106) we have 
> to reduce ProcedureInfo usage. Currently it is used several places in the 
> code where it could be replaced with Procedure (e.g. ProcedureExecutor). We 
> should use ProcedureInfo only for the communication before removing it.





[jira] [Updated] (HBASE-18367) Reduce ProcedureInfo usage

2017-07-18 Thread Balazs Meszaros (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balazs Meszaros updated HBASE-18367:

Status: Open  (was: Patch Available)

> Reduce ProcedureInfo usage
> --
>
> Key: HBASE-18367
> URL: https://issues.apache.org/jira/browse/HBASE-18367
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Reporter: Balazs Meszaros
>Assignee: Balazs Meszaros
> Fix For: 2.0.0
>
> Attachments: HBASE-18367.001.patch, HBASE-18367.002.patch
>
>
> If we want to replace ProcedureInfo objects with jsons (HBASE-18106) we have 
> to reduce ProcedureInfo usage. Currently it is used several places in the 
> code where it could be replaced with Procedure (e.g. ProcedureExecutor). We 
> should use ProcedureInfo only for the communication before removing it.





[jira] [Resolved] (HBASE-18402) Thrift2 should support DeleteFamily and DeleteFamilyVersion type

2017-07-18 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu resolved HBASE-18402.
--
Resolution: Not A Problem

In the method ThriftUtilities#deleteFromThrift, if we do not pass a qualifier for 
the TDelete, then the handler will regard the Delete as a DeleteFamily (or 
DeleteFamilyVersion), so we do not need separate DeleteFamily or DeleteFamilyVersion 
types.

> Thrift2 should support  DeleteFamily and DeleteFamilyVersion type
> -
>
> Key: HBASE-18402
> URL: https://issues.apache.org/jira/browse/HBASE-18402
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 2.0.0-alpha-1
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>
> Currently, our thrift2 only supports two delete types. Actually, there are 
> four delete types, and we should support the other two: DeleteFamily 
> and DeleteFamilyVersion. 
> {code}
> /**
>  * Specify type of delete:
>  *  - DELETE_COLUMN means exactly one version will be removed,
>  *  - DELETE_COLUMNS means previous versions will also be removed.
>  */
> enum TDeleteType {
>   DELETE_COLUMN = 0,
>   DELETE_COLUMNS = 1
> }
> {code} 





[jira] [Updated] (HBASE-18402) Thrift2 should support DeleteFamily and DeleteFamilyVersion type

2017-07-18 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-18402:
-
Description: 
Currently,  our thrift2 only support two delete types, Actually, there are four 
delete types.and  we should support the other delete type:  DeleteFamily and 
DeleteFamilyVersion. 

{code}
/**
 * Specify type of delete:
 *  - DELETE_COLUMN means exactly one version will be removed,
 *  - DELETE_COLUMNS means previous versions will also be removed.
 */
enum TDeleteType {
  DELETE_COLUMN = 0,
  DELETE_COLUMNS = 1
}
{code} 

  was:
Currently,  our thrift2 only support two delete type, Actually, there are four 
delete types.and  we should support the other delete type:  DeleteFamily and 
DeleteFamilyVersion. 

{code}
/**
 * Specify type of delete:
 *  - DELETE_COLUMN means exactly one version will be removed,
 *  - DELETE_COLUMNS means previous versions will also be removed.
 */
enum TDeleteType {
  DELETE_COLUMN = 0,
  DELETE_COLUMNS = 1
}
{code} 


> Thrift2 should support  DeleteFamily and DeleteFamilyVersion type
> -
>
> Key: HBASE-18402
> URL: https://issues.apache.org/jira/browse/HBASE-18402
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 2.0.0-alpha-1
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>
> Currently, our thrift2 only supports two delete types. Actually, there are 
> four delete types, and we should support the other two: DeleteFamily and 
> DeleteFamilyVersion. 
> {code}
> /**
>  * Specify type of delete:
>  *  - DELETE_COLUMN means exactly one version will be removed,
>  *  - DELETE_COLUMNS means previous versions will also be removed.
>  */
> enum TDeleteType {
>   DELETE_COLUMN = 0,
>   DELETE_COLUMNS = 1
> }
> {code} 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18399) Files in a snapshot can go missing even after the snapshot is taken successfully

2017-07-18 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091390#comment-16091390
 ] 

ramkrishna.s.vasudevan commented on HBASE-18399:


bq.store_file_A is marked as compacted away and HFileArchiver moves the file to 
archive.
Yes, so the file is in the archive. The remaining steps are independent of the 
store file accounting feature, right? Just asking.

> Files in a snapshot can go missing even after the snapshot is taken 
> successfully
> 
>
> Key: HBASE-18399
> URL: https://issues.apache.org/jira/browse/HBASE-18399
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: Ashu Pachauri
> Fix For: 1.3.2
>
>
> Files missing after the snapshot is taken (only applicable when the TTL for 
> the TimeToLiveHFileCleaner is small, like the default 5 mins)
> * SnapshotManifest#addRegion visits store_file_A, but is yet to write it 
> to the manifest.
> * store_file_A is marked as compacted away and HFileArchiver moves the 
> file to archive.
> * HFileCleaner comes in and sees the store_file_A in archive. It adds the 
> file to the list of files that might need to be cleaned up.
> * HFileCleaner's SnapshotHFileCleaner plugin is kicked in.
> * SnapshotFileCache#getUnreferencedFiles also says that store_file_A is 
> unreferenced and should be cleaned up (It has not yet been written to the 
> manifest).
> * SnapshotHFileCleaner is still going through rest of the files in 
> archive.
> * store_file_A reference is created and written to snapshot manifest.
> * Snapshot verification runs and sees the store_file_A is present in 
> archive, and thus the verification passes.
> * Now, the SnapshotHFileCleaner finishes and TimeToLiveHFileCleaner is 
> triggered. If TTL has passed since the store_file_A was moved to archive 
> (SnapshotHFileCleaner could take easily several minutes to go through rest of 
> the files), the TimeToLiveHFileCleaner also marks the file as deletable.
> * Since all cleaner plugins marked file as deletable, the store_file_A is 
> deleted.
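The race above hinges on the cleaner chain's unanimity rule: a file is deleted only when every cleaner plugin marks it deletable, yet the plugins are consulted at different times. A minimal sketch of that rule, with hypothetical names rather than HBase's actual cleaner API:

```java
import java.util.List;
import java.util.function.Predicate;

// Hypothetical model of the cleaner-chain decision: every plugin must
// agree before a file is deleted. The race occurs because the snapshot
// plugin and the TTL plugin evaluate the file at different moments.
public class CleanerChainSketch {
    public static boolean isDeletable(String file, List<Predicate<String>> plugins) {
        return plugins.stream().allMatch(p -> p.test(file));
    }
}
```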



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18390) Sleep too long when finding region location failed

2017-07-18 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091415#comment-16091415
 ] 

Phil Yang commented on HBASE-18390:
---

Any other concerns? Thanks 

> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.2.7, 2.0.0-alpha-2, 1.1.12
>
> Attachments: HBASE-18390.v01.patch, HBASE-18390.v02.patch, 
> HBASE-18390.v03.patch
>
>
> If RegionServerCallable#prepare fails in getRegionLocation, the location in 
> this callable object is null, and we sleep before retrying. However, when the 
> location is null we sleep at least 10 seconds, and the request fails directly 
> if the operation timeout is less than 10 seconds. I think there is no need to 
> keep the MIN_WAIT_DEAD_SERVER logic; backoff sleeping logic is OK for most 
> cases.
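The backoff-only behaviour argued for above can be sketched as follows; the constants and method name are illustrative assumptions, not HBase's actual ConnectionUtils logic.

```java
// Illustrative capped exponential backoff with no fixed minimum floor
// (the point of the patch: drop the 10s MIN_WAIT_DEAD_SERVER minimum so a
// single sleep cannot exceed a short operation timeout).
public class BackoffSketch {
    public static long backoffMillis(int tries, long basePause, long maxPause) {
        long sleep = basePause * (1L << Math.min(tries, 30)); // exponential growth
        return Math.min(sleep, maxPause);                     // cap, no fixed floor
    }
}
```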



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18367) Reduce ProcedureInfo usage

2017-07-18 Thread Balazs Meszaros (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balazs Meszaros updated HBASE-18367:

Attachment: HBASE-18367.003.patch

> Reduce ProcedureInfo usage
> --
>
> Key: HBASE-18367
> URL: https://issues.apache.org/jira/browse/HBASE-18367
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Reporter: Balazs Meszaros
>Assignee: Balazs Meszaros
> Fix For: 2.0.0
>
> Attachments: HBASE-18367.001.patch, HBASE-18367.002.patch, 
> HBASE-18367.003.patch
>
>
> If we want to replace ProcedureInfo objects with jsons (HBASE-18106) we have 
> to reduce ProcedureInfo usage. Currently it is used several places in the 
> code where it could be replaced with Procedure (e.g. ProcedureExecutor). We 
> should use ProcedureInfo only for the communication before removing it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17819) Reduce the heap overhead for BucketCache

2017-07-18 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091398#comment-16091398
 ] 

ramkrishna.s.vasudevan commented on HBASE-17819:


bq.Now CompactedHFilesDischarger seems not considering this config!
Even previously, when a compacted file was closed, those blocks were forcefully 
evicted. 
The configuration ''hbase.rs.evictblocksonclose'' was mainly for when an HStore 
is closed, not when a compacted file is closed. If we need this behaviour we can 
look at how it can be added to CompactedHFilesDischarger, but it was not 
something that was missed.

> Reduce the heap overhead for BucketCache
> 
>
> Key: HBASE-17819
> URL: https://issues.apache.org/jira/browse/HBASE-17819
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
>
> We keep Bucket entry map in BucketCache.  Below is the math for heapSize for 
> the key , value into this map.
> BlockCacheKey
> ---
> String hfileName  -  Ref  - 4
> long offset  - 8
> BlockType blockType  - Ref  - 4
> boolean isPrimaryReplicaBlock  - 1
> Total  =  12 (Object) + 17 = 29
> BucketEntry
> 
> int offsetBase  -  4
> int length  - 4
> byte offset1  -  1
> byte deserialiserIndex  -  1
> long accessCounter  -  8
> BlockPriority priority  - Ref  - 4
> volatile boolean markedForEvict  -  1
> AtomicInteger refCount  -  16 + 4
> long cachedTime  -  8
> Total = 12 (Object) + 51 = 63
> ConcurrentHashMap Map.Entry  -  40
> blocksByHFile ConcurrentSkipListSet Entry  -  40
> Total = 29 + 63 + 80 = 172
> For 10 million blocks we will end up having 1.6GB of heap size.  
> This jira aims to reduce this as much as possible
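The arithmetic in the breakdown above checks out; as a quick sanity computation (sizes copied directly from the description):

```java
// Recomputes the per-block heap overhead from the breakdown above.
public class BucketCacheOverheadSketch {
    public static int perBlockOverhead() {
        int blockCacheKey = 12 + 4 + 8 + 4 + 1; // object header + ref + long + ref + boolean = 29
        int bucketEntry = 12 + 4 + 4 + 1 + 1 + 8 + 4 + 1 + (16 + 4) + 8; // = 63
        int mapEntries = 40 + 40; // ConcurrentHashMap entry + ConcurrentSkipListSet entry
        return blockCacheKey + bucketEntry + mapEntries; // = 172 bytes per cached block
    }

    public static void main(String[] args) {
        // 10 million blocks * 172 bytes = 1.72e9 bytes, i.e. roughly 1.6 GiB of heap.
        System.out.println(10_000_000L * perBlockOverhead());
    }
}
```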



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18403) [Shell]Truncate permission required

2017-07-18 Thread Yun Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Zhao updated HBASE-18403:
-
Attachment: HBASE-18403.patch

> [Shell]Truncate permission required
> ---
>
> Key: HBASE-18403
> URL: https://issues.apache.org/jira/browse/HBASE-18403
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Yun Zhao
>Priority: Trivial
> Attachments: HBASE-18403.patch
>
>
> When a user has only (Create) permission and executes truncate, the table 
> will be deleted but not re-created



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18367) Reduce ProcedureInfo usage

2017-07-18 Thread Balazs Meszaros (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Balazs Meszaros updated HBASE-18367:

Attachment: HBASE-18367.003.patch

> Reduce ProcedureInfo usage
> --
>
> Key: HBASE-18367
> URL: https://issues.apache.org/jira/browse/HBASE-18367
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Reporter: Balazs Meszaros
>Assignee: Balazs Meszaros
> Fix For: 2.0.0
>
> Attachments: HBASE-18367.001.patch, HBASE-18367.002.patch, 
> HBASE-18367.003.patch
>
>
> If we want to replace ProcedureInfo objects with jsons (HBASE-18106) we have 
> to reduce ProcedureInfo usage. Currently it is used several places in the 
> code where it could be replaced with Procedure (e.g. ProcedureExecutor). We 
> should use ProcedureInfo only for the communication before removing it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18390) Sleep too long when finding region location failed

2017-07-18 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091488#comment-16091488
 ] 

Duo Zhang commented on HBASE-18390:
---

+1.

> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.2.7, 2.0.0-alpha-2, 1.1.12
>
> Attachments: HBASE-18390.v01.patch, HBASE-18390.v02.patch, 
> HBASE-18390.v03.patch
>
>
> If RegionServerCallable#prepare fails in getRegionLocation, the location in 
> this callable object is null, and we sleep before retrying. However, when the 
> location is null we sleep at least 10 seconds, and the request fails directly 
> if the operation timeout is less than 10 seconds. I think there is no need to 
> keep the MIN_WAIT_DEAD_SERVER logic; backoff sleeping logic is OK for most 
> cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18403) [Shell]Truncate permission required

2017-07-18 Thread Yun Zhao (JIRA)
Yun Zhao created HBASE-18403:


 Summary: [Shell]Truncate permission required
 Key: HBASE-18403
 URL: https://issues.apache.org/jira/browse/HBASE-18403
 Project: HBase
  Issue Type: Improvement
  Components: shell
Reporter: Yun Zhao
Priority: Trivial


When a user has only (Create) permission and executes truncate, the table will 
be deleted but not re-created



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18403) [Shell]Truncate permission required

2017-07-18 Thread Yun Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Zhao updated HBASE-18403:
-
Status: Patch Available  (was: Open)

> [Shell]Truncate permission required
> ---
>
> Key: HBASE-18403
> URL: https://issues.apache.org/jira/browse/HBASE-18403
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Yun Zhao
>Priority: Trivial
> Attachments: HBASE-18403.patch
>
>
> When a user has only (Create) permission and executes truncate, the table 
> will be deleted but not re-created



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18389) Remove byte[] from formal parameter of sizeOf() of ClassSize, ClassSize.MemoryLayout and ClassSize.UnsafeLayout

2017-07-18 Thread Xiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091711#comment-16091711
 ] 

Xiang Li commented on HBASE-18389:
--

Hi [~chia7712], thanks very much for your review and comments!

1. Regarding the links in the javadoc: corrected in patch 001 (will upload it 
shortly). Sorry, my bad: I did not include the package name of KeyValue, so the 
link could not be resolved. Corrected.

2. Regarding 
bq. Is it better to use the name "sizeOfByteArray" to replace 
"sizeOfPartOfByteArray" ?
Thanks for the suggestion! Yes, the function calculates the size of a byte 
array. But I added "PartOf" to the function name to highlight that it does not 
calculate the size of the whole byte array, only a part of it, while 
sizeOf(byte[] b) calculates the whole byte array specified as byte[] b. Does 
that make sense to you?

> Remove byte[] from formal parameter of sizeOf() of ClassSize, 
> ClassSize.MemoryLayout and ClassSize.UnsafeLayout
> ---
>
> Key: HBASE-18389
> URL: https://issues.apache.org/jira/browse/HBASE-18389
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-18389.master.000.patch
>
>
> In the ClassSize class and its internal static classes, the sizeOf() function 
> has two formal parameters, byte[] b and int len, but its internal logic does 
> not use or refer to byte[] b, so the parameter could be removed. 
> {code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java|borderStyle=solid}
> // Class of ClassSize
> public static long sizeOf(byte[] b, int len) {
>   return memoryLayout.sizeOf(b, len);
> }
> // Class of ClassSize.MemoryLayout
> long sizeOf(byte[] b, int len) {
>   return align(arrayHeaderSize() + len);
> }
> // Class of ClassSize.UnsafeLayout
> long sizeOf(byte[] b, int len) {
>   return align(arrayHeaderSize() + len * 
> UnsafeAccess.theUnsafe.ARRAY_BYTE_INDEX_SCALE);
> }
> {code}
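A minimal sketch of the arithmetic behind sizeOf(), showing why only the length matters and the byte[] parameter is dead. The 16-byte array header and 8-byte alignment are assumptions for a typical 64-bit JVM, not values read from ClassSize.

```java
// Illustrative array-size arithmetic; header size and alignment are
// assumed values, not ClassSize's detected ones.
public class ArraySizeSketch {
    static final long ARRAY_HEADER = 16; // assumed array header on a 64-bit JVM

    static long align(long num) {
        return (num + 7) & ~7L; // round up to a multiple of 8
    }

    // The byte[] itself is never needed -- only its length.
    static long sizeOfByteArray(int len) {
        return align(ARRAY_HEADER + len);
    }
}
```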



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18086) Create native client which creates load on selected cluster

2017-07-18 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091726#comment-16091726
 ] 

Ted Yu commented on HBASE-18086:


Running load-client against 1.1 cluster, the client hung at the end of write 
phase:
{code}
I0718 15:20:27.652695  9636 load-client.cc:260] joining thread 7
I0718 15:20:27.652781  9636 load-client.cc:262] joined thread 7
I0718 15:20:27.652876  9636 load-client.cc:260] joining thread 8
2017-07-18 15:20:30,545:9636(0x7f9ee50ad700):ZOO_WARN@zookeeper_interest@1570: 
Exceeded deadline by 13ms
2017-07-18 15:20:43,893:9636(0x7f9ee50ad700):ZOO_WARN@zookeeper_interest@1570: 
Exceeded deadline by 13ms
{code}
Attempt to attach gdb to the hanging process encountered:
{code}
Attaching to process 9636
Could not attach to process.  If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user.  For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
{code}
Even after modifying /etc/sysctl.d/10-ptrace.conf, I still got the same error.

> Create native client which creates load on selected cluster
> ---
>
> Key: HBASE-18086
> URL: https://issues.apache.org/jira/browse/HBASE-18086
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 18086.v11.txt, 18086.v12.txt, 18086.v14.txt, 
> 18086.v1.txt, 18086.v3.txt, 18086.v4.txt, 18086.v5.txt, 18086.v6.txt, 
> 18086.v7.txt, 18086.v8.txt
>
>
> This task is to create a client which uses multiple threads to conduct Puts 
> followed by Gets against selected cluster.
> Default is to run the tool against local cluster.
> This would give us some idea on the characteristics of native client in terms 
> of handling high load.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18367) Reduce ProcedureInfo usage

2017-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091630#comment-16091630
 ] 

Hadoop QA commented on HBASE-18367:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 25 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
10s{color} | {color:red} hbase-server in master has 9 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m 54s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
0s{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}118m 
12s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-18367 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877783/HBASE-18367.003.patch 
|
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 8f9dab9fc8fb 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 0c2915b4 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7696/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
 |
|  Test Results | 

[jira] [Updated] (HBASE-18389) Remove byte[] from formal parameter of sizeOf() of ClassSize, ClassSize.MemoryLayout and ClassSize.UnsafeLayout

2017-07-18 Thread Xiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-18389:
-
Status: Patch Available  (was: Open)

> Remove byte[] from formal parameter of sizeOf() of ClassSize, 
> ClassSize.MemoryLayout and ClassSize.UnsafeLayout
> ---
>
> Key: HBASE-18389
> URL: https://issues.apache.org/jira/browse/HBASE-18389
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-18389.master.000.patch, 
> HBASE-18389.master.001.patch
>
>
> In the ClassSize class and its internal static classes, the sizeOf() function 
> has two formal parameters, byte[] b and int len, but its internal logic does 
> not use or refer to byte[] b, so the parameter could be removed. 
> {code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java|borderStyle=solid}
> // Class of ClassSize
> public static long sizeOf(byte[] b, int len) {
>   return memoryLayout.sizeOf(b, len);
> }
> // Class of ClassSize.MemoryLayout
> long sizeOf(byte[] b, int len) {
>   return align(arrayHeaderSize() + len);
> }
> // Class of ClassSize.UnsafeLayout
> long sizeOf(byte[] b, int len) {
>   return align(arrayHeaderSize() + len * 
> UnsafeAccess.theUnsafe.ARRAY_BYTE_INDEX_SCALE);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18389) Remove byte[] from formal parameter of sizeOf() of ClassSize, ClassSize.MemoryLayout and ClassSize.UnsafeLayout

2017-07-18 Thread Xiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-18389:
-
Attachment: HBASE-18389.master.001.patch

Uploaded patch 001 to address the link errors in javadoc according to 
[~chia7712]'s comments.

> Remove byte[] from formal parameter of sizeOf() of ClassSize, 
> ClassSize.MemoryLayout and ClassSize.UnsafeLayout
> ---
>
> Key: HBASE-18389
> URL: https://issues.apache.org/jira/browse/HBASE-18389
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-18389.master.000.patch, 
> HBASE-18389.master.001.patch
>
>
> In the ClassSize class and its internal static classes, the sizeOf() function 
> has two formal parameters, byte[] b and int len, but its internal logic does 
> not use or refer to byte[] b, so the parameter could be removed. 
> {code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java|borderStyle=solid}
> // Class of ClassSize
> public static long sizeOf(byte[] b, int len) {
>   return memoryLayout.sizeOf(b, len);
> }
> // Class of ClassSize.MemoryLayout
> long sizeOf(byte[] b, int len) {
>   return align(arrayHeaderSize() + len);
> }
> // Class of ClassSize.UnsafeLayout
> long sizeOf(byte[] b, int len) {
>   return align(arrayHeaderSize() + len * 
> UnsafeAccess.theUnsafe.ARRAY_BYTE_INDEX_SCALE);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18389) Remove byte[] from formal parameter of sizeOf() of ClassSize, ClassSize.MemoryLayout and ClassSize.UnsafeLayout

2017-07-18 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091817#comment-16091817
 ] 

Chia-Ping Tsai commented on HBASE-18389:


bq.  Does it make sense to you?
It makes sense to me.
{noformat}
+  public static long sizeOfPartOfByteArray(int len) {
+return memoryLayout.sizeOfByteArray(len);
   }
{noformat}
Should we make these methods have the same name?

{noformat}
[WARNING] 
/testptch/hbase/hbase-common/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java:476:
 warning - Missing closing '}' character for inline tag: "{@link 
#sizeOfPartOfByteArray(int) instead."
{noformat}
Please fix the doc.


> Remove byte[] from formal parameter of sizeOf() of ClassSize, 
> ClassSize.MemoryLayout and ClassSize.UnsafeLayout
> ---
>
> Key: HBASE-18389
> URL: https://issues.apache.org/jira/browse/HBASE-18389
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-18389.master.000.patch, 
> HBASE-18389.master.001.patch
>
>
> In the ClassSize class and its internal static classes, the sizeOf() function 
> has two formal parameters, byte[] b and int len, but its internal logic does 
> not use or refer to byte[] b, so the parameter could be removed. 
> {code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java|borderStyle=solid}
> // Class of ClassSize
> public static long sizeOf(byte[] b, int len) {
>   return memoryLayout.sizeOf(b, len);
> }
> // Class of ClassSize.MemoryLayout
> long sizeOf(byte[] b, int len) {
>   return align(arrayHeaderSize() + len);
> }
> // Class of ClassSize.UnsafeLayout
> long sizeOf(byte[] b, int len) {
>   return align(arrayHeaderSize() + len * 
> UnsafeAccess.theUnsafe.ARRAY_BYTE_INDEX_SCALE);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18400) [C++] ConnectionId Equals/Hash should consider service_name

2017-07-18 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091820#comment-16091820
 ] 

Xiaobing Zhou commented on HBASE-18400:
---

Hi [~tedyu], thanks for your review.

The last 8 lines just use scoping to avoid duplicate variable declarations 
(e.g. remote_id), while calling GetConnection four times so that 
ConnectionFactory::Connect and ConnectionFactory::MakeBootstrap are invoked 
only twice, which is the semantics of a connection pool.
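A Java analogue of the C++ change under discussion: including the service name in both equals and hashCode so connections to different services on the same host:port get distinct pool entries. Class and field names here are hypothetical, not the C++ client's.

```java
import java.util.Objects;

// Hypothetical connection-pool key that includes the service name,
// mirroring the ConnectionIdEquals / ConnectionIdHash change.
final class ConnectionIdSketch {
    final String user, host, serviceName;
    final int port;

    ConnectionIdSketch(String user, String host, int port, String serviceName) {
        this.user = user;
        this.host = host;
        this.port = port;
        this.serviceName = serviceName;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ConnectionIdSketch)) {
            return false;
        }
        ConnectionIdSketch c = (ConnectionIdSketch) o;
        return port == c.port && user.equals(c.user)
            && host.equals(c.host) && serviceName.equals(c.serviceName);
    }

    @Override
    public int hashCode() {
        // service name participates, so Master and Admin connections
        // to the same host:port land in different pool slots
        return Objects.hash(user, host, port, serviceName);
    }
}
```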

> [C++] ConnectionId Equals/Hash should consider service_name
> ---
>
> Key: HBASE-18400
> URL: https://issues.apache.org/jira/browse/HBASE-18400
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HBASE-18400.000.patch
>
>
> Currently only security::User, host and port are taken into account in the 
> implementation of ConnectionIdEquals and ConnectionIdHash. It makes sense to 
> allocate a dedicated RPC connection for a specific service, so service_name 
> should be added to the implementation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18389) Remove byte[] from formal parameter of sizeOf() of ClassSize, ClassSize.MemoryLayout and ClassSize.UnsafeLayout

2017-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091777#comment-16091777
 ] 

Hadoop QA commented on HBASE-18389:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 51s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
15s{color} | {color:red} hbase-common generated 1 new + 0 unchanged - 0 fixed = 
1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
13s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 6s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-18389 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877824/HBASE-18389.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 08f8e6ba57cd 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 56d00f5 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7700/artifact/patchprocess/diff-javadoc-javadoc-hbase-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7700/testReport/ |
| modules | C: hbase-common U: hbase-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7700/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> Remove byte[] from formal parameter of sizeOf() of ClassSize, 
> ClassSize.MemoryLayout and ClassSize.UnsafeLayout
> 

[jira] [Commented] (HBASE-18400) [C++] ConnectionId Equals/Hash should consider service_name

2017-07-18 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091827#comment-16091827
 ] 

Xiaobing Zhou commented on HBASE-18400:
---

Forgot to mention EXPECT_CALL is the assertion.

> [C++] ConnectionId Equals/Hash should consider service_name
> ---
>
> Key: HBASE-18400
> URL: https://issues.apache.org/jira/browse/HBASE-18400
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HBASE-18400.000.patch
>
>
> Currently only security::User, host and port are taken into account in the 
> implementation of ConnectionIdEquals and ConnectionIdHash. It makes sense to 
> allocate a dedicated RPC connection for a specific service, so service_name 
> should be added to the implementation.
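As a rough illustration of the change being discussed — hypothetical struct and field names, not the actual HBase C++ client code — folding service_name into the equality and hash functors might look like:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>

// Hypothetical sketch only; names are illustrative, not HBase source.
struct ConnectionId {
  std::string user;          // stand-in for security::User
  std::string host;
  uint16_t port;
  std::string service_name;  // newly folded into equality and hashing
};

struct ConnectionIdEquals {
  bool operator()(const ConnectionId &a, const ConnectionId &b) const {
    return a.user == b.user && a.host == b.host && a.port == b.port &&
           a.service_name == b.service_name;
  }
};

struct ConnectionIdHash {
  std::size_t operator()(const ConnectionId &id) const {
    // Simple polynomial combining; ids equal under ConnectionIdEquals
    // always produce the same hash.
    std::size_t h = std::hash<std::string>{}(id.user);
    h = h * 31 + std::hash<std::string>{}(id.host);
    h = h * 31 + id.port;
    h = h * 31 + std::hash<std::string>{}(id.service_name);
    return h;
  }
};
```

With this shape, connections to different services on the same host/port hash into distinct buckets of an `std::unordered_map` keyed by ConnectionId.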





[jira] [Commented] (HBASE-18400) [C++] ConnectionId Equals/Hash should consider service_name

2017-07-18 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091831#comment-16091831
 ] 

Ted Yu commented on HBASE-18400:


lgtm

> [C++] ConnectionId Equals/Hash should consider service_name
> ---
>
> Key: HBASE-18400
> URL: https://issues.apache.org/jira/browse/HBASE-18400
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HBASE-18400.000.patch
>
>
> Currently only security::User, host and port are taken into account in the 
> implementation of ConnectionIdEquals and ConnectionIdHash. It makes sense to 
> allocate a dedicated RPC connection for a specific service, so service_name 
> should be added to the implementation.





[jira] [Commented] (HBASE-18392) Add default value of --movetimeout to rolling-restart.sh

2017-07-18 Thread Samir Ahmic (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091873#comment-16091873
 ] 

Samir Ahmic commented on HBASE-18392:
-

Thank you [~tedyu].

> Add default value of --movetimeout to rolling-restart.sh
> --
>
> Key: HBASE-18392
> URL: https://issues.apache.org/jira/browse/HBASE-18392
> Project: HBase
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
> Fix For: 3.0.0
>
> Attachments: HBASE-18392-master-001.patch
>
>
> We are calling graceful_stop.sh in rolling-restart.sh with following line 
> {code}
> "$bin"/graceful_stop.sh --config ${HBASE_CONF_DIR} --restart --reload -nob 
> --maxthreads  \
> ${RR_MAXTHREADS} ${RR_NOACK} --movetimeout ${RR_MOVE_TIMEOUT} 
> $hostname
> {code} 
> and if we do not specify the --movetimeout option while calling rolling-restart.sh 
> --graceful, the script will not work. My proposal is to add a default value for this 
> parameter the same way we do in graceful_stop.sh
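A minimal sketch of that proposal — the 180000 ms default value is assumed here for illustration, mirroring the parameter-default pattern graceful_stop.sh already uses:

```shell
# Hypothetical default in rolling-restart.sh: RR_MOVE_TIMEOUT falls back
# to a default when --movetimeout is not passed on the command line.
RR_MOVE_TIMEOUT=${RR_MOVE_TIMEOUT:-180000}
echo "movetimeout=${RR_MOVE_TIMEOUT}"
```

With the fallback in place, the later call into graceful_stop.sh always receives a non-empty `--movetimeout` argument.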





[jira] [Comment Edited] (HBASE-14135) HBase Backup/Restore Phase 3: Merge backup images

2017-07-18 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091989#comment-16091989
 ] 

Vladimir Rodionov edited comment on HBASE-14135 at 7/18/17 6:52 PM:


Patch v7 adds some UTs with merge failures

cc: [~te...@apache.org]


was (Author: vrodionov):
Patch v7 adds some UTs with merge failures

> HBase Backup/Restore Phase 3: Merge backup images
> -
>
> Key: HBASE-14135
> URL: https://issues.apache.org/jira/browse/HBASE-14135
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Blocker
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: HBASE-14135-v3.patch, HBASE-14135-v5.patch, 
> HBASE-14135-v6.patch, HBASE-14135-v7.patch
>
>
> Users can merge incremental backup images into a single incremental backup image.
> # Merge supports only incremental images
> # Merge supports only images for the same backup destinations
> Command:
> {code}
> hbase backup merge image1,image2,..imageK
> {code}
> Example:
> {code}
> hbase backup merge backup_143126764557,backup_143126764456 
> {code}
> When the operation is complete, only the most recent backup image will be kept 
> (in the above example, backup_143126764557) as the merged backup image. All other 
> images will be deleted from both the file system and the backup system tables, 
> and the backup manifest for the merged backup image will be updated to 
> remove dependencies on the deleted images. The merged backup image will contain 
> all the data from the original and deleted images.





[jira] [Updated] (HBASE-14135) HBase Backup/Restore Phase 3: Merge backup images

2017-07-18 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14135:
--
Attachment: HBASE-14135-v7.patch

Patch v7 adds some UTs with merge failures

> HBase Backup/Restore Phase 3: Merge backup images
> -
>
> Key: HBASE-14135
> URL: https://issues.apache.org/jira/browse/HBASE-14135
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Blocker
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: HBASE-14135-v3.patch, HBASE-14135-v5.patch, 
> HBASE-14135-v6.patch, HBASE-14135-v7.patch
>
>
> Users can merge incremental backup images into a single incremental backup image.
> # Merge supports only incremental images
> # Merge supports only images for the same backup destinations
> Command:
> {code}
> hbase backup merge image1,image2,..imageK
> {code}
> Example:
> {code}
> hbase backup merge backup_143126764557,backup_143126764456 
> {code}
> When the operation is complete, only the most recent backup image will be kept 
> (in the above example, backup_143126764557) as the merged backup image. All other 
> images will be deleted from both the file system and the backup system tables, 
> and the backup manifest for the merged backup image will be updated to 
> remove dependencies on the deleted images. The merged backup image will contain 
> all the data from the original and deleted images.





[jira] [Commented] (HBASE-17544) Expose metrics for the CatalogJanitor

2017-07-18 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092083#comment-16092083
 ] 

Esteban Gutierrez commented on HBASE-17544:
---

My bad [~apurtell]. I was on vacation. I will try to get something here in the 
next few days. Thanks!

> Expose metrics for the CatalogJanitor
> -
>
> Key: HBASE-17544
> URL: https://issues.apache.org/jira/browse/HBASE-17544
> Project: HBase
>  Issue Type: Improvement
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
>
> Currently there is no way to know what the CatalogJanitor is doing except 
> in the logs. We should have better visibility into when the CatalogJanitor 
> last ran, how long it took to scan meta, the number of merged 
> and parent regions cleaned on the last run, and whether it is in maintenance 
> mode (see HBASE-16008). 





[jira] [Commented] (HBASE-18405) Track scope for HBase-Spark module

2017-07-18 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092207#comment-16092207
 ] 

Mike Drob commented on HBASE-18405:
---

If we have multiple modules for a cartesian product of spark and scala 
versions, do we need corresponding IT modules for each of them? Can we run the 
same ITs several times in a matrix within maven? Do we want to reserve that for 
a profile? What do we expect daily developers to do, what do we expect RMs to 
do?

bq. I thought that's what I read. I also thought works-correctly-enough made it 
into 2.0. lemme go jira hunting.
SPARK-14743 is the one I'm thinking of.

bq. avoid a "we'll do it later" that turns into not getting done. Worth the 
trouble?
If you mark it a blocker for 2.0, it should get done, no?

bq. the integration test already exists, so I didn't keep it in a milestone 
(barring something showing up in milestone 0's results).
_One_ integration test exists. Do you think this provides sufficient coverage?
Would love to see the IT get run with monkeys also, not sure if it currently 
supports that.

bq. Maybe add a milestone after 4 that's "ensure tests on asf jenkins" that 
includes the precommit updates?
Didn't you already do this?


Scaladoc update is missing from milestones. Likely M4.


> Track scope for HBase-Spark module
> --
>
> Key: HBASE-18405
> URL: https://issues.apache.org/jira/browse/HBASE-18405
> Project: HBase
>  Issue Type: Task
>  Components: spark
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 1.4.0, 2.0.0-beta-1
>
> Attachments: Apache HBase - Apache Spark Integration Scope.pdf
>
>
> Start with [\[DISCUSS\]  status of and plans for our hbase-spark integration 
> |https://lists.apache.org/thread.html/fd74ef9b9da77abf794664f06ea19c839fb3d543647fb29115081683@%3Cdev.hbase.apache.org%3E]
>  and formalize into a scope document for bringing this feature into a release.





[jira] [Commented] (HBASE-18147) nightly job to check health of active branches

2017-07-18 Thread Alex Leblang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092137#comment-16092137
 ] 

Alex Leblang commented on HBASE-18147:
--

lgtm, I didn't see any issues

> nightly job to check health of active branches
> --
>
> Key: HBASE-18147
> URL: https://issues.apache.org/jira/browse/HBASE-18147
> Project: HBase
>  Issue Type: Test
>  Components: community, test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HBASE-18147.0.patch, HBASE-18147-branch-1.v1.patch, 
> HBASE-18147.v1.patch
>
>
> We should set up a job that runs Apache Yetus Test Patch's nightly mode. 
> Essentially, it produces a report that considers how the branch measures up 
> against the things we check in our precommit checks.





[jira] [Commented] (HBASE-18405) Track scope for HBase-Spark module

2017-07-18 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092186#comment-16092186
 ] 

Sean Busbey commented on HBASE-18405:
-

thanks for the quick turn around time on review!

bq. Module Layout should address hbase-spark-it module, I think.

I tried to get this across without specifying the actual modules used in 
implementation. It meets all the requirements: is a new module, depends on old 
modules, only shows up as a reactor module and in the assembly. Or do you mean 
I need to call out that the test stuff is subject to the same constraints? 
Should I just go ahead and prescribe a layout?

bq. Did we actually determine on the DISCUSS thread that we must support Spark 
2.0+? We might have to take a 2.1+ position, since I think there were some 
updates to work with Delegation Tokens that didn't make it in right away and 
another goal is to have use case parity for secure deployments.

I thought that's what I read. I also thought works-correctly-enough made it 
into 2.0. lemme go jira hunting.

bq. Milestone 5 backport could happen before Milestone 4 documentation is done. 
You said that docs get updated closer to release time anyway in another thread 
on list.

Good point. I think the main reason I would like to list it first is to avoid a 
"we'll do it later" that turns into not getting done. Worth the trouble?

bq. Integration Tests and Yetus tests are not included in any milestones. 
Combine with Unit Test M2?

the integration test already exists, so I didn't keep it in a milestone 
(barring something showing up in milestone 0's results).

milestone 1 covers the yetus changes, though it occurs to me now that it 
references docs that won't exist until milestone 4.

Milestone 2 I meant to cover "write tests" basically. Maybe add a milestone 
after 4 that's "ensure tests on asf jenkins" that includes the precommit 
updates?

> Track scope for HBase-Spark module
> --
>
> Key: HBASE-18405
> URL: https://issues.apache.org/jira/browse/HBASE-18405
> Project: HBase
>  Issue Type: Task
>  Components: spark
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 1.4.0, 2.0.0-beta-1
>
> Attachments: Apache HBase - Apache Spark Integration Scope.pdf
>
>
> Start with [\[DISCUSS\]  status of and plans for our hbase-spark integration 
> |https://lists.apache.org/thread.html/fd74ef9b9da77abf794664f06ea19c839fb3d543647fb29115081683@%3Cdev.hbase.apache.org%3E]
>  and formalize into a scope document for bringing this feature into a release.





[jira] [Commented] (HBASE-14135) HBase Backup/Restore Phase 3: Merge backup images

2017-07-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092191#comment-16092191
 ] 

Hadoop QA commented on HBASE-14135:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
45s{color} | {color:red} hbase-server in master has 9 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
30m 39s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 40s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.regionserver.wal.TestSecureWALReplay |
|   | org.apache.hadoop.hbase.master.procedure.TestDisableTableProcedure |
|   | org.apache.hadoop.hbase.coprocessor.TestRegionObserverInterface |
|   | org.apache.hadoop.hbase.regionserver.TestSplitLogWorker |
|   | org.apache.hadoop.hbase.regionserver.wal.TestAsyncWALReplay |
|   | org.apache.hadoop.hbase.regionserver.TestRowTooBig |
|   | org.apache.hadoop.hbase.master.procedure.TestServerCrashProcedure |
|   | org.apache.hadoop.hbase.master.procedure.TestEnableTableProcedure |
|   | org.apache.hadoop.hbase.master.procedure.TestCreateTableProcedure |
|   | org.apache.hadoop.hbase.regionserver.compactions.TestFIFOCompactionPolicy 
|
|   | org.apache.hadoop.hbase.mapreduce.TestTableInputFormat |
|   | org.apache.hadoop.hbase.mapreduce.TestHRegionPartitioner |
|   | org.apache.hadoop.hbase.master.TestGetLastFlushedSequenceId |
|   | org.apache.hadoop.hbase.master.procedure.TestSafemodeBringsDownMaster |
|   | org.apache.hadoop.hbase.backup.TestRemoteBackup |
|   | org.apache.hadoop.hbase.snapshot.TestSnapshotClientRetries |
|   | org.apache.hadoop.hbase.regionserver.TestRemoveRegionMetrics |
|   | org.apache.hadoop.hbase.trace.TestHTraceHooks |
|   | org.apache.hadoop.hbase.TestHBaseTestingUtility |
|   | org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook |
|   | org.apache.hadoop.hbase.regionserver.TestTags |
|   | 

[jira] [Commented] (HBASE-18400) [C++] ConnectionId Equals/Hash should consider service_name

2017-07-18 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092200#comment-16092200
 ] 

Enis Soztutar commented on HBASE-18400:
---

Belated +1. 

> [C++] ConnectionId Equals/Hash should consider service_name
> ---
>
> Key: HBASE-18400
> URL: https://issues.apache.org/jira/browse/HBASE-18400
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: HBASE-14850
>
> Attachments: HBASE-18400.000.patch
>
>
> Currently only security::User, host and port are taken into account in the 
> implementation of ConnectionIdEquals and ConnectionIdHash. It makes sense to 
> allocate a dedicated RPC connection for a specific service, so service_name 
> should be added to the implementation.





[jira] [Created] (HBASE-18405) Track scope for HBase-Spark module

2017-07-18 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-18405:
---

 Summary: Track scope for HBase-Spark module
 Key: HBASE-18405
 URL: https://issues.apache.org/jira/browse/HBASE-18405
 Project: HBase
  Issue Type: Task
  Components: spark
Reporter: Sean Busbey
Assignee: Sean Busbey
 Fix For: 1.4.0, 2.0.0-beta-1


Start with [\[DISCUSS\]  status of and plans for our hbase-spark integration 
|https://lists.apache.org/thread.html/fd74ef9b9da77abf794664f06ea19c839fb3d543647fb29115081683@%3Cdev.hbase.apache.org%3E]
 and formalize into a scope document for bringing this feature into a release.





[jira] [Commented] (HBASE-18404) Small typo on ACID documentation page

2017-07-18 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091945#comment-16091945
 ] 

Dima Spivak commented on HBASE-18404:
-

Wanna take it on, [~mcrutcher]? The file to change is 
{{src/main/site/asciidoc/acid-semantics.adoc}} and it would be wholly 
appreciated. :-p

> Small typo on ACID documentation page
> -
>
> Key: HBASE-18404
> URL: https://issues.apache.org/jira/browse/HBASE-18404
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.3.1
>Reporter: Michael Crutcher
>Priority: Trivial
>
> I noticed a couple of occurrences of the "word" wholely on the ACID semantics 
> doc page (https://hbase.apache.org/acid-semantics.html)
> This should be "wholly".





[jira] [Updated] (HBASE-18401) Region Replica shows up in meta table after split in branch-1

2017-07-18 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-18401:
-
Affects Version/s: 2.0.0-alpha-1

> Region Replica shows up in meta table after split in branch-1
> -
>
> Key: HBASE-18401
> URL: https://issues.apache.org/jira/browse/HBASE-18401
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.3.1, 1.2.6, 2.0.0-alpha-1
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.2.7
>
> Attachments: HBASE-18401-branch-1.2-v001.patch, 
> HBASE-18401-branch-1.2-v002.patch
>
>
> Read replicas are broken in branch-1: after a region split, we see the replica 
> region in hbase:meta, while the previous behavior was that replica 
> regions should not show up in info:regioninfo.
> {code}
> t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 column=info:regioninfo, 
> timestamp=1500340472406, value={ENCODED => d8faa669dde775c323f6e55fd5aa36e0, 
> NAME => 't1,r2111,1500340472229_0001.d8faa669dde7
>  e73ca. 75c323f6e55fd5aa36e0.', 
> STARTKEY => 'r2111', ENDKEY => 'r2', REPLICA_ID => 1} 
> 
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:seqnumDuringOpen, timestamp=1500340472379, 
> value=\x00\x00\x00\x00\x00\x00\x00\x02
>  
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:seqnumDuringOpen_0001, timestamp=1500340472406, 
> value=\x00\x00\x00\x00\x00\x00\x00\x02
> 
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 column=info:server, 
> timestamp=1500340472379, value=dhcp-172-16-1-203.pa.cloudera.com:59105
> 
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 column=info:server_0001, 
> timestamp=1500340472406, value=dhcp-172-16-1-203.pa.cloudera.com:59105
>
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:serverstartcode, timestamp=1500340472379, value=1500340443589 
> 
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:serverstartcode_0001, timestamp=1500340472406, 
> value=\x00\x00\x01]SBY\xC5
> {code}
> This was introduced by 
> https://github.com/apache/hbase/blame/branch-1-HBASE-18147/hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java#L1464
> It does not consider the case where regionInfo could come from a replica 
> region.





[jira] [Created] (HBASE-18404) Small typo on ACID documentation page

2017-07-18 Thread Michael Crutcher (JIRA)
Michael Crutcher created HBASE-18404:


 Summary: Small typo on ACID documentation page
 Key: HBASE-18404
 URL: https://issues.apache.org/jira/browse/HBASE-18404
 Project: HBase
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.3.1
Reporter: Michael Crutcher
Priority: Trivial


I noticed a couple of occurrences of the "word" wholely on the ACID semantics 
doc page (https://hbase.apache.org/acid-semantics.html)

This should be "wholly".





[jira] [Commented] (HBASE-18405) Track scope for HBase-Spark module

2017-07-18 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092135#comment-16092135
 ] 

Mike Drob commented on HBASE-18405:
---

Module Layout should address hbase-spark-it module, I think.

Did we actually determine on the DISCUSS thread that we must support Spark 
2.0+? We might have to take a 2.1+ position, since I think there were some 
updates to work with Delegation Tokens that didn't make it in right away and 
another goal is to have use case parity for secure deployments.

Milestone 1 summary is missing a word at the end?

Milestone 5 backport could happen before Milestone 4 documentation is done. You 
said that docs get updated closer to release time anyway in another thread on 
list.

Integration Tests and Yetus tests are not included in any milestones. Combine 
with Unit Test M2?

> Track scope for HBase-Spark module
> --
>
> Key: HBASE-18405
> URL: https://issues.apache.org/jira/browse/HBASE-18405
> Project: HBase
>  Issue Type: Task
>  Components: spark
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 1.4.0, 2.0.0-beta-1
>
> Attachments: Apache HBase - Apache Spark Integration Scope.pdf
>
>
> Start with [\[DISCUSS\]  status of and plans for our hbase-spark integration 
> |https://lists.apache.org/thread.html/fd74ef9b9da77abf794664f06ea19c839fb3d543647fb29115081683@%3Cdev.hbase.apache.org%3E]
>  and formalize into a scope document for bringing this feature into a release.





[jira] [Updated] (HBASE-18400) [C++] ConnectionId Equals/Hash should consider service_name

2017-07-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18400:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HBASE-14850
   Status: Resolved  (was: Patch Available)

> [C++] ConnectionId Equals/Hash should consider service_name
> ---
>
> Key: HBASE-18400
> URL: https://issues.apache.org/jira/browse/HBASE-18400
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: HBASE-14850
>
> Attachments: HBASE-18400.000.patch
>
>
> Currently only security::User, host and port are taken into account in the 
> implementation of ConnectionIdEquals and ConnectionIdHash. It makes sense to 
> allocate a dedicated RPC connection for a specific service, so service_name 
> should be added to the implementation.





[jira] [Created] (HBASE-18406) In ServerCrashProcedure.java start(MasterProcedureEnv) is a no-op

2017-07-18 Thread Alex Leblang (JIRA)
Alex Leblang created HBASE-18406:


 Summary: In ServerCrashProcedure.java start(MasterProcedureEnv) is 
a no-op
 Key: HBASE-18406
 URL: https://issues.apache.org/jira/browse/HBASE-18406
 Project: HBase
  Issue Type: Bug
Reporter: Alex Leblang


The comments above this method explain that it exists to set configs and 
return; however, no configs are set in the method.

As you can see here:
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ServerCrashProcedure.java#L210-L214
 

It is only ever called here:
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ServerCrashProcedure.java#L142





[jira] [Commented] (HBASE-18405) Track scope for HBase-Spark module

2017-07-18 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092214#comment-16092214
 ] 

Sean Busbey commented on HBASE-18405:
-

{quote}
If we have multiple modules for a cartesian product of spark and scala 
versions, do we need corresponding IT modules for each of them? Can we run the 
same ITs several times in a matrix within maven? Do we want to reserve that for 
a profile? What do we expect daily developers to do, what do we expect RMs to 
do?
{quote}

oh! okay yeah that's a big gap. Thanks!

> Track scope for HBase-Spark module
> --
>
> Key: HBASE-18405
> URL: https://issues.apache.org/jira/browse/HBASE-18405
> Project: HBase
>  Issue Type: Task
>  Components: spark
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 1.4.0, 2.0.0-beta-1
>
> Attachments: Apache HBase - Apache Spark Integration Scope.pdf
>
>
> Start with [\[DISCUSS\]  status of and plans for our hbase-spark integration 
> |https://lists.apache.org/thread.html/fd74ef9b9da77abf794664f06ea19c839fb3d543647fb29115081683@%3Cdev.hbase.apache.org%3E]
>  and formalize into a scope document for bringing this feature into a release.





[jira] [Assigned] (HBASE-18406) In ServerCrashProcedure.java start(MasterProcedureEnv) is a no-op

2017-07-18 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe reassigned HBASE-18406:


Assignee: Alex Leblang

> In ServerCrashProcedure.java start(MasterProcedureEnv) is a no-op
> -
>
> Key: HBASE-18406
> URL: https://issues.apache.org/jira/browse/HBASE-18406
> Project: HBase
>  Issue Type: Bug
>Reporter: Alex Leblang
>Assignee: Alex Leblang
>
> The comments above this method explain that it exists to set configs and 
> return; however, no configs are set in the method.  
> As you can see here:
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ServerCrashProcedure.java#L210-L214
>  
> It is only ever called here:
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ServerCrashProcedure.java#L142





[jira] [Updated] (HBASE-17908) Upgrade guava

2017-07-18 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17908:
--
Attachment: HBASE-17908.master.026.patch

> Upgrade guava
> -
>
> Key: HBASE-17908
> URL: https://issues.apache.org/jira/browse/HBASE-17908
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies
>Reporter: Balazs Meszaros
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 0001-HBASE-17908-Upgrade-guava.022.patch, 
> HBASE-17908.master.001.patch, HBASE-17908.master.002.patch, 
> HBASE-17908.master.003.patch, HBASE-17908.master.004.patch, 
> HBASE-17908.master.005.patch, HBASE-17908.master.006.patch, 
> HBASE-17908.master.007.patch, HBASE-17908.master.008.patch, 
> HBASE-17908.master.009.patch, HBASE-17908.master.010.patch, 
> HBASE-17908.master.011.patch, HBASE-17908.master.012.patch, 
> HBASE-17908.master.013.patch, HBASE-17908.master.013.patch, 
> HBASE-17908.master.014.patch, HBASE-17908.master.015.patch, 
> HBASE-17908.master.015.patch, HBASE-17908.master.016.patch, 
> HBASE-17908.master.017.patch, HBASE-17908.master.018.patch, 
> HBASE-17908.master.019.patch, HBASE-17908.master.020.patch, 
> HBASE-17908.master.021.patch, HBASE-17908.master.021.patch, 
> HBASE-17908.master.022.patch, HBASE-17908.master.023.patch, 
> HBASE-17908.master.024.patch, HBASE-17908.master.025.patch, 
> HBASE-17908.master.026.patch
>
>
> Currently we are using guava 12.0.1, but the latest version is 21.0. 
> Upgrading guava is always a hassle because it is not always backward 
> compatible with itself.
> Currently I think there are two approaches:
> 1. Upgrade guava to the newest version (21.0) and shade it.
> 2. Upgrade guava to a version which does not break our builds (15.0).
> If we can update it, some dependencies should be removed: 
> commons-collections, commons-codec, ...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18186) Frequent FileNotFoundExceptions in region server logs

2017-07-18 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092292#comment-16092292
 ] 

Mikhail Antonov commented on HBASE-18186:
-

[~ram_krish] oh I see, I had missed that part and thought this was a slightly 
different scenario. That means we're back to square one on that.

> Frequent FileNotFoundExceptions in region server logs
> -
>
> Key: HBASE-18186
> URL: https://issues.apache.org/jira/browse/HBASE-18186
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction, Scanners
>Affects Versions: 1.3.1
>Reporter: Ashu Pachauri
>
> We see frequent FileNotFoundException in regionserver logs on multiple code 
> paths trying to reference non existing store files. I know that there have 
> been multiple bugs in store file accounting of compacted store files. 
> Examples include: HBASE-16964 , HBASE-16754 and HBASE-16788.
> Observations:  
> 1. The issue mentioned here also seems to bear a similar flavor, because we 
> are not seeing rampant data loss given the frequency of these exceptions in 
> the logs. So, it's more likely an accounting issue, but I could be wrong. 
> 2. The frequency with which this happens on scan heavy workload is at least 
> one order of magnitude higher than a mixed workload.
> Stack traces:
> {code}
> WARN backup.HFileArchiver: Failed to archive class 
> org.apache.hadoop.hbase.backup.HFileArchiver$FileableStoreFile, 
> file:hdfs: because it does not exist! 
> Skipping and continuing on.
> java.io.FileNotFoundException: File/Directory // 
> does not exist.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setTimes(FSDirAttrOp.java:121)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:1910)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:1223)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:915)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)
>   at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
>   at org.apache.hadoop.hdfs.DFSClient.setTimes(DFSClient.java:3115)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$30.doCall(DistributedFileSystem.java:1520)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$30.doCall(DistributedFileSystem.java:1516)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.setTimes(DistributedFileSystem.java:1530)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.setTimes(FilterFileSystem.java:496)
>   at 
> org.apache.hadoop.hbase.util.FSUtils.renameAndSetModifyTime(FSUtils.java:1805)
>   at 
> org.apache.hadoop.hbase.backup.HFileArchiver$File.moveAndClose(HFileArchiver.java:575)
>   at 
> org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchiveFile(HFileArchiver.java:410)
>   at 
> org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:320)
>   at 
> org.apache.hadoop.hbase.backup.HFileArchiver.archiveStoreFiles(HFileArchiver.java:242)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.removeStoreFiles(HRegionFileSystem.java:433)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.removeCompactedfiles(HStore.java:2723)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.closeAndArchiveCompactedFiles(HStore.java:2672)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactedHFilesDischargeHandler.process(CompactedHFilesDischargeHandler.java:43)
>   at 

[jira] [Updated] (HBASE-18409) Migrate Client Metrics from codahale to hbase-metrics

2017-07-18 Thread Ronald Macmaster (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ronald Macmaster updated HBASE-18409:
-
Description: 
Currently, the metrics for hbase-client are tailored for reporting via a 
client-side JMX server.
The MetricsConnection handles the metrics management and reporting via the 
metrics platform from codahale. 
This approach worked well for hbase-1.3.1 when the metrics platform was still 
relatively young, but it could be improved by using the new hbase-metrics-api. 

Now that we have an actual hbase-metrics-api that master, regionserver, 
zookeeper, and other daemons use, it would be good to also allow the client to 
leverage the metrics-api. 
Then, the client could also report its metrics via Hadoop's metrics2 if desired 
or through another platform that utilizes the hbase-metrics-api. 
If left alone, client metrics will continue to be only barely visible through a 
client-side JMX server.

The migration to the new metrics-api could be done by simply changing the 
Metrics data types from codahale types to hbase-metrics types without changing 
the metrics signatures of MetricsConnection unless completely necessary. 
The codahale MetricsRegistry would also have to be exchanged for a 
hbase-metrics MetricsRegistry. 

I found this to be a necessary change after attempting to implement my own 
Reporter to use within the MetricsConnection class.
I was attempting to create a HadoopMetrics2Reporter that extends the codahale 
ScheduledReporter and reports the MetricsConnection metrics to Hadoop's 
metrics2 system. 
The already existing infrastructure in the hbase-metrics and hbase-metrics-api 
projects could be easily leveraged for a cleaner solution.
If completed successfully, users could instead access their client-side metrics 
through the hbase-metrics-api. 


  was:
Currently, the metrics for hbase-client are tailored for reporting via a 
client-side JMX server.
The MetricsConnection handles the metrics management and reporting via the 
metrics platform from codahale. This approach worked well for hbase-1.3.1 when 
the metrics platform was still relatively young, but it could be improved by 
using the new hbase-metrics-api. 

However, now that we have an actual hbase-metrics-api that master, 
regionserver, zookeeper, and others use, it would be good to also allow the 
client to leverage the metrics-api. Then, the client could also report its 
metrics via Hadoop's metrics2 if desired or through another platform that 
utilizes the hbase-metrics-api. If left alone, client metrics will continue to 
be only barely visible through a client-side JMX server.

The migration to the new metrics-api could be done by simply changing the 
Metrics data types from codahale types to hbase-metrics types without changing 
the metrics signatures of MetricsConnection unless completely necessary. The 
codahale MetricsRegistry would also have to be exchanged for a hbase-metrics 
MetricsRegistry. 



> Migrate Client Metrics from codahale to hbase-metrics
> -
>
> Key: HBASE-18409
> URL: https://issues.apache.org/jira/browse/HBASE-18409
> Project: HBase
>  Issue Type: Bug
>  Components: Client, java, metrics
>Affects Versions: 2.0.0-alpha-1
>Reporter: Ronald Macmaster
>  Labels: newbie
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Currently, the metrics for hbase-client are tailored for reporting via a 
> client-side JMX server.
> The MetricsConnection handles the metrics management and reporting via the 
> metrics platform from codahale. 
> This approach worked well for hbase-1.3.1 when the metrics platform was still 
> relatively young, but it could be improved by using the new 
> hbase-metrics-api. 
> Now that we have an actual hbase-metrics-api that master, regionserver, 
> zookeeper, and other daemons use, it would be good to also allow the client 
> to leverage the metrics-api. 
> Then, the client could also report its metrics via Hadoop's metrics2 if 
> desired or through another platform that utilizes the hbase-metrics-api. 
> If left alone, client metrics will continue to be only barely visible through 
> a client-side JMX server.
> The migration to the new metrics-api could be done by simply changing the 
> Metrics data types from codahale types to hbase-metrics types without 
> changing the metrics signatures of MetricsConnection unless completely 
> necessary. 
> The codahale MetricsRegistry would also have to be exchanged for a 
> hbase-metrics MetricsRegistry. 
> I found this to be a necessary change after attempting to implement my own 
> Reporter to use within the MetricsConnection class.
> I was attempting to create a HadoopMetrics2Reporter that extends the codahale 
> ScheduledReporter and reports the MetricsConnection metrics to Hadoop's 
> 

[jira] [Commented] (HBASE-18403) [Shell]Truncate permission required

2017-07-18 Thread Yun Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092444#comment-16092444
 ] 

Yun Zhao commented on HBASE-18403:
--

[~mdrob] Thanks, I'll try to add a test.

> [Shell]Truncate permission required
> ---
>
> Key: HBASE-18403
> URL: https://issues.apache.org/jira/browse/HBASE-18403
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Yun Zhao
>Assignee: Yun Zhao
>Priority: Trivial
> Attachments: HBASE-18403.patch
>
>
> When a user with only Create permission executes truncate, the table is 
> deleted but not re-created



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18060) Backport to branch-1 HBASE-9774 HBase native metrics and metric collection for coprocessors

2017-07-18 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-18060:
---
Fix Version/s: 1.5.0

> Backport to branch-1 HBASE-9774 HBase native metrics and metric collection 
> for coprocessors
> ---
>
> Key: HBASE-18060
> URL: https://issues.apache.org/jira/browse/HBASE-18060
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Fix For: 1.4.0, 1.5.0
>
> Attachments: HBASE-18060.branch-1.3.v1.patch, 
> HBASE-18060.branch-1.3.v2.patch, HBASE-18060.branch-1.3.v3.patch, 
> HBASE-18060.branch-1.3.v4.patch, HBASE-18060.branch-1.3.v5.patch, 
> HBASE-18060.branch-1.v1.patch, HBASE-18060.branch-1.v2.patch, 
> HBASE-18060.branch-1.v3.patch, HBASE-18060.branch-1.v4.patch, 
> HBASE-18060.branch-1.v5.patch, HBASE-18060.branch-1.v6.patch
>
>
> I'd like to explore backporting HBASE-9774 to branch-1, as the ability for 
> coprocessors to report custom metrics through HBase is useful for us, and if 
> we have coprocessors use the native API, a re-write won't be necessary after 
> an upgrade to 2.0.
> The main issues I see so far are:
> - the usage of Java 8 language features.  Seems we can work around this as 
> most of it is syntactic sugar.  Will need to find a backport for LongAdder
> - dropwizard 3.1.2 in Master.  branch-1 is still on yammer metrics 2.2.  Not 
> sure if these can coexist just for this feature



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18405) Track scope for HBase-Spark module

2017-07-18 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092241#comment-16092241
 ] 

Sean Busbey commented on HBASE-18405:
-

{quote}
bq. I thought that's what I read. I also thought works-correctly-enough made it 
into 2.0. lemme go jira hunting.
SPARK-14743 is the one I'm thinking of.
{quote}

SPARK-12523 works well enough for us to still do 2.0, IMO. If we want feature 
parity with SHC then we'd need to add a similar credential manager, which will 
require SPARK-14743, but that all gets configured outside of the spark 
application AFAICT.

> Track scope for HBase-Spark module
> --
>
> Key: HBASE-18405
> URL: https://issues.apache.org/jira/browse/HBASE-18405
> Project: HBase
>  Issue Type: Task
>  Components: spark
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 1.4.0, 2.0.0-beta-1
>
> Attachments: Apache HBase - Apache Spark Integration Scope.pdf
>
>
> Start with [\[DISCUSS\]  status of and plans for our hbase-spark integration 
> |https://lists.apache.org/thread.html/fd74ef9b9da77abf794664f06ea19c839fb3d543647fb29115081683@%3Cdev.hbase.apache.org%3E]
>  and formalize into a scope document for bringing this feature into a release.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18086) Create native client which creates load on selected cluster

2017-07-18 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092265#comment-16092265
 ] 

Enis Soztutar commented on HBASE-18086:
---

bq. Updated patch v12 where random number generation is lifted outside the loop 
(it was observed that write performance suffered with random number generation 
inside the loop).
It does not make sense to me that random number generation is costly. I've 
looked at the folly code; there is nothing explaining it. Can you please verify 
the total number of columns written in each case? You can also test by just 
generating 1M or so random numbers in a loop and measuring the total time it 
takes end to end. We want each row to come with a different number of columns. 

- No use of {{new}} or {{delete}}. Always use smart pointers. 
{code}
+std::thread *writer_threads = new std::thread[FLAGS_threads];
{code}

- These flags should have the same names as the ones in simple-client.cc: 
{code}
+DEFINE_int32(multi_get_size, 1, "number of gets in one multi-get");
+DEFINE_bool(skip_get, false, "skip get / scan");
+DEFINE_bool(skip_put, false, "skip put's");
{code} 
there are also report_num_rows, scans, multigets, and conf flags that you 
should implement.

- These should be return values instead of passing pointers to the methods: 
{code}
bool *succeeded
{code}

- Instead of executing every Cell as a different Put via Table::Put(), you 
should construct one Put object, add all the Cells, then call Table::Put() 
{code}
for (uint64_t j = 0; j < rows; j++) {
+std::string row = PrefixZero(width, iteration * rows + j);
+for (auto family : families) {
+  table->Put(Put{row}.AddColumn(family, kNumColumn, 
std::to_string(n_cols)));
+  for (unsigned int k = 1; k <= n_cols; k++) {
+table->Put(Put{row}.AddColumn(family, std::to_string(k), row));
+  }
+}
{code}

- Instead of this method: 
{code}
+std::string PrefixZero(int total_width, int num) {
{code}
you can probably do something like this (from scanner-test.cc): 
{code}
std::string Row(uint32_t i, int width) {
  std::ostringstream s;
  s.fill('0');
  s.width(width);
  s << i;
  return "row" + s.str();
}
{code}

- Scans and gets should validate the obtained Result using the same logic, no? 
I think you should extract that into a function and use it from both. 
- The way we do multi-gets will result in all of the multi-get requests going 
to the same region. Instead, I think it is better to have the multi-gets scattered 
around most of the regions, so that we have a high likelihood of testing server 
failure handling, etc when chaos monkey is run with this. I had argued the same 
in my above comments. I think we can do something like a hash-like striping 
across the row key space among threads, rather than range-based striping. That 
should give us the ability to do multi-gets across all the regions in one 
{{Table::Get(std::vector)}} call. 
 - We don't have multi-put functionality right now, but when that is added, we 
should do a follow up patch for this to add multi-put functionality. 
- These should default to {{load_test_table}} and {{f}} respectively. 
{code}
+DEFINE_string(table, "t", "What table to do the reads and writes with");
+DEFINE_string(families, "d", "comma separated list of column family names");
{code}

> Create native client which creates load on selected cluster
> ---
>
> Key: HBASE-18086
> URL: https://issues.apache.org/jira/browse/HBASE-18086
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 18086.v11.txt, 18086.v12.txt, 18086.v14.txt, 
> 18086.v1.txt, 18086.v3.txt, 18086.v4.txt, 18086.v5.txt, 18086.v6.txt, 
> 18086.v7.txt, 18086.v8.txt
>
>
> This task is to create a client which uses multiple threads to conduct Puts 
> followed by Gets against selected cluster.
> Default is to run the tool against local cluster.
> This would give us some idea on the characteristics of native client in terms 
> of handling high load.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18401) Region Replica shows up in meta table after split

2017-07-18 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-18401:
-
Attachment: HBASE-18401.master.001.patch

> Region Replica shows up in meta table after split
> -
>
> Key: HBASE-18401
> URL: https://issues.apache.org/jira/browse/HBASE-18401
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 1.3.1, 1.2.6, 2.0.0-alpha-1
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Critical
> Fix For: 1.4.0, 1.3.2, 1.2.7
>
> Attachments: HBASE-18401-branch-1.2-v001.patch, 
> HBASE-18401-branch-1.2-v002.patch, HBASE-18401-branch-1.2-v003.patch, 
> HBASE-18401.master.001.patch
>
>
> Read replicas are broken in branch-1: after a region split, the replica 
> region shows up as content in hbase:meta, while the previous behavior was that 
> replica regions did not show up in info:regioninfo.
> {code}
> t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 column=info:regioninfo, 
> timestamp=1500340472406, value={ENCODED => d8faa669dde775c323f6e55fd5aa36e0, 
> NAME => 't1,r2111,1500340472229_0001.d8faa669dde7
>  e73ca. 75c323f6e55fd5aa36e0.', 
> STARTKEY => 'r2111', ENDKEY => 'r2', REPLICA_ID => 1} 
> 
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:seqnumDuringOpen, timestamp=1500340472379, 
> value=\x00\x00\x00\x00\x00\x00\x00\x02
>  
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:seqnumDuringOpen_0001, timestamp=1500340472406, 
> value=\x00\x00\x00\x00\x00\x00\x00\x02
> 
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 column=info:server, 
> timestamp=1500340472379, value=dhcp-172-16-1-203.pa.cloudera.com:59105
> 
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 column=info:server_0001, 
> timestamp=1500340472406, value=dhcp-172-16-1-203.pa.cloudera.com:59105
>
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:serverstartcode, timestamp=1500340472379, value=1500340443589 
> 
>  e73ca.   
>   
>   
>  t1,r2111,1500340472229.814f68c2d6e92dd8acb82a55706 
> column=info:serverstartcode_0001, timestamp=1500340472406, 
> value=\x00\x00\x01]SBY\xC5
> {code}
> This was introduced by 
> https://github.com/apache/hbase/blame/branch-1-HBASE-18147/hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java#L1464
> It does not consider the case where the regionInfo could come from a replica 
> region.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

