[jira] [Commented] (HBASE-18377) Error handling for FileNotFoundException should consider RemoteException in ReplicationSource#openReader()

2017-07-16 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089325#comment-16089325
 ] 

Ashish Singhi commented on HBASE-18377:
---

LGTM

> Error handling for FileNotFoundException should consider RemoteException in 
> ReplicationSource#openReader()
> --
>
> Key: HBASE-18377
> URL: https://issues.apache.org/jira/browse/HBASE-18377
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
> Attachments: 18377.branch-1.3.txt, 18377.v1.txt
>
>
> In the region server log, I observed the following:
> {code}
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File 
> does not exist: 
> /apps/hbase/data/WALs/lx.p.com,16020,1497300923131/497300923131. 
> default.1497302873178
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1860)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1831)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1744)
> ...
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:326)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:162)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:782)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:293)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:267)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:255)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:414)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:69)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:605)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:364)
> {code}
> We have code in ReplicationSource#openReader() which is supposed to handle 
> FileNotFoundException, but the case where a RemoteException wraps a 
> FileNotFoundException was missed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18390) Sleep too long when finding region location failed

2017-07-16 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-18390:
--
Attachment: HBASE-18390.v02.patch

Triggering the tests in hbase-server.
I have no idea how to simulate a failure while scanning meta. If we could 
simulate it, we could add a test here.

> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.2.7, 2.0.0-alpha-2, 1.1.12
>
> Attachments: HBASE-18390.v01.patch, HBASE-18390.v02.patch
>
>
> If RegionServerCallable#prepare fails in getRegionLocation, the location in 
> this callable object is null, and we sleep before retrying. However, when the 
> location is null we sleep at least 10 seconds, so the request fails directly 
> if the operation timeout is less than 10 seconds. I think there is no need to 
> keep the MIN_WAIT_DEAD_SERVER logic; the backoff sleeping logic is fine for 
> most cases.
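A minimal, self-contained sketch of the backoff-only pause the description argues for; the 
class, constants, and multiplier table below are illustrative and are not taken from the patch.

{code}
// Illustrative sketch: exponential backoff driven only by the retry count,
// with no separate 10-second MIN_WAIT_DEAD_SERVER floor, so callers with a
// short operation timeout can still get a retry in.
final class RetryPauseSketch {
  private static final int[] BACKOFF = {1, 2, 3, 5, 10, 20, 40, 100};

  static long pauseMillis(long basePauseMs, int tries) {
    int idx = Math.min(tries, BACKOFF.length - 1);
    return basePauseMs * BACKOFF[idx];
  }

  public static void main(String[] args) {
    // With a 100 ms base pause, the first retries sleep well under 10 seconds.
    for (int t = 0; t < 5; t++) {
      System.out.println("try " + t + " -> sleep " + pauseMillis(100, t) + " ms");
    }
  }
}
{code}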



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18390) Sleep too long when finding region location failed

2017-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089307#comment-16089307
 ] 

Hadoop QA commented on HBASE-18390:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m  3s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
37s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-18390 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877522/HBASE-18390.v01.patch 
|
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 9fb12a3e498a 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 2d5a0fb |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7670/testReport/ |
| modules | C: hbase-client U: hbase-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7670/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Phil 

[jira] [Comment Edited] (HBASE-18381) HBase regionserver crashes when reading MOB file with column qualifier >64MB

2017-07-16 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089289#comment-16089289
 ] 

Jingcheng Du edited comment on HBASE-18381 at 7/17/17 4:25 AM:
---

Thanks [~te...@apache.org]. This is not related to MOB; it is caused by protobuf's 
default message size limit of 64MB, and non-MOB cells hit the same limit.
Your changes in HFile are not necessary in the patch: the test can pass with 
only the configuration changes, which means it should pass in branch-2 as well 
if the configuration is set properly there. So I don't think this is an issue 
in branch-2 and later branches.
64MB is too large anyway, and we have added a chapter explaining why we don't 
recommend it in https://hbase.apache.org/book.html#faq. Besides, we already 
have a test covering large cell sizes in TestMobStoreScanner#testGetMassive.
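For context only (this is not what the patch does): protobuf's CodedInputStream enforces a 
64 MB default message size limit, which is what a >64 MB qualifier runs into. setSizeLimit is 
the standard protobuf API for raising that limit; the helper class below is a made-up sketch.

{code}
import java.io.ByteArrayInputStream;
import com.google.protobuf.CodedInputStream;

final class PbSizeLimitSketch {
  // Builds a protobuf reader whose size limit is raised above the 64 MB default.
  static CodedInputStream readerFor(byte[] serialized, int limitBytes) {
    CodedInputStream in = CodedInputStream.newInstance(new ByteArrayInputStream(serialized));
    in.setSizeLimit(limitBytes); // default is 64 MB; larger messages need an explicit limit
    return in;
  }
}
{code}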

Hi [~huaxiang],
[~djelinski] ran into this issue in CDH release (on HBase 1.2.0-cdh5.10.0), 
would you mind trying in your env? Thanks a lot.


was (Author: jingcheng.du):
Thanks [~te...@apache.org], this is not related with MOB, this is because of 
the default protobuf size limit of 64MB, non-MOB cells have the same limit too.
Your changes in HFile are not necessary in the patch, the test can pass with 
only the changes in the configurations which means the test should pass if the 
configuration is set properly in branch-2. And I don't think this would be an 
issue in branch-2 and latter branches.
64MB is too large and we have filed a chapter to explain why we don't recommend 
it in https://hbase.apache.org/book.html#faq. Besides, we have a test to 
address the large cell size in TestMobStoreScanner#testGetMassive.

Hi [~huaxiang], [~djelinski] ran into this issue in CDH release (on HBase 
1.2.0-cdh5.10.0), would you mind trying in your env? Thanks a lot.

> HBase regionserver crashes when reading MOB file with column qualifier >64MB
> 
>
> Key: HBASE-18381
> URL: https://issues.apache.org/jira/browse/HBASE-18381
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0-alpha-1
> Environment:  HBase 1.2.0-cdh5.10.0
>Reporter: Daniel Jelinski
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 18381.v1.txt, 18381.v2.txt
>
>
> After putting a cell with a 64MB column qualifier into a MOB-enabled table, 
> the region server crashes when flushing data. Subsequent WAL replay attempts 
> also result in region server crashes.
> Gist of code used to create the table:
> private String table = "poisonPill";
> private byte[] familyBytes = Bytes.toBytes("cf");
> private void createTable(Connection conn) throws IOException {
>Admin hbase_admin = conn.getAdmin();
>HTableDescriptor htable = new HTableDescriptor(TableName.valueOf(table));
>HColumnDescriptor hfamily = new HColumnDescriptor(familyBytes);
>hfamily.setMobEnabled(true);
>htable.setConfiguration("hfile.format.version","3");
>htable.addFamily(hfamily);
>hbase_admin.createTable(htable);
> }
> private void killTable(Connection conn) throws IOException {
>Table tbl = conn.getTable(TableName.valueOf(table));
>byte[] data = new byte[1<<26];
>byte[] smalldata = new byte[0];
>Put put = new Put(Bytes.toBytes("1"));
>put.addColumn(familyBytes, data, smalldata);
>tbl.put(put);
> }
> Region server logs (redacted):
> 2017-07-11 09:34:53,747 INFO org.apache.hadoop.hbase.regionserver.HRegion: 
> Flushing 1/1 column families, memstore=64.00 MB; WAL is null, using passed 
> sequenceid=7
> 2017-07-11 09:34:53,757 WARN org.apache.hadoop.hbase.io.hfile.HFileWriterV2: 
> A minimum HFile version of 3 is required to support cell attributes/tags. 
> Consider setting hfile.format.version accordingly.
> 2017-07-11 09:34:54,504 INFO 
> org.apache.hadoop.hbase.mob.DefaultMobStoreFlusher: Flushed, sequenceid=7, 
> memsize=67109096, hasBloomFilter=true, into tmp file 
> hdfs://sandbox/hbase/data/default/poisonPill/f82e20f32302dfdd95c89ecc3be5a211/.tmp/7858d223eddd4199ad220fc77bb612eb
> 2017-07-11 09:34:54,694 ERROR org.apache.hadoop.hbase.regionserver.HStore: 
> Failed to open store file : 
> hdfs://sandbox/hbase/data/default/poisonPill/f82e20f32302dfdd95c89ecc3be5a211/.tmp/7858d223eddd4199ad220fc77bb612eb,
>  keeping it in tmp location
> org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile 
> Trailer from file 
> hdfs://sandbox/hbase/data/default/poisonPill/f82e20f32302dfdd95c89ecc3be5a211/.tmp/7858d223eddd4199ad220fc77bb612eb
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>   at 
> 

[jira] [Commented] (HBASE-18381) HBase regionserver crashes when reading MOB file with column qualifier >64MB

2017-07-16 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089289#comment-16089289
 ] 

Jingcheng Du commented on HBASE-18381:
--

Thanks [~te...@apache.org]. This is not related to MOB; it is caused by protobuf's 
default message size limit of 64MB, and non-MOB cells hit the same limit.
Your changes in HFile are not necessary in the patch: the test can pass with 
only the configuration changes, which means it should pass in branch-2 as well 
if the configuration is set properly there. So I don't think this is an issue 
in branch-2 and later branches.
64MB is too large anyway, and we have added a chapter explaining why we don't 
recommend it in https://hbase.apache.org/book.html#faq. Besides, we already 
have a test covering large cell sizes in TestMobStoreScanner#testGetMassive.

Hi [~huaxiang], [~djelinski] ran into this issue in CDH release (on HBase 
1.2.0-cdh5.10.0), would you mind trying in your env? Thanks a lot.

> HBase regionserver crashes when reading MOB file with column qualifier >64MB
> 
>
> Key: HBASE-18381
> URL: https://issues.apache.org/jira/browse/HBASE-18381
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0-alpha-1
> Environment:  HBase 1.2.0-cdh5.10.0
>Reporter: Daniel Jelinski
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 18381.v1.txt, 18381.v2.txt
>
>
> After putting a cell with a 64MB column qualifier into a MOB-enabled table, 
> the region server crashes when flushing data. Subsequent WAL replay attempts 
> also result in region server crashes.
> Gist of code used to create the table:
> private String table = "poisonPill";
> private byte[] familyBytes = Bytes.toBytes("cf");
> private void createTable(Connection conn) throws IOException {
>Admin hbase_admin = conn.getAdmin();
>HTableDescriptor htable = new HTableDescriptor(TableName.valueOf(table));
>HColumnDescriptor hfamily = new HColumnDescriptor(familyBytes);
>hfamily.setMobEnabled(true);
>htable.setConfiguration("hfile.format.version","3");
>htable.addFamily(hfamily);
>hbase_admin.createTable(htable);
> }
> private void killTable(Connection conn) throws IOException {
>Table tbl = conn.getTable(TableName.valueOf(table));
>byte[] data = new byte[1<<26];
>byte[] smalldata = new byte[0];
>Put put = new Put(Bytes.toBytes("1"));
>put.addColumn(familyBytes, data, smalldata);
>tbl.put(put);
> }
> Region server logs (redacted):
> 2017-07-11 09:34:53,747 INFO org.apache.hadoop.hbase.regionserver.HRegion: 
> Flushing 1/1 column families, memstore=64.00 MB; WAL is null, using passed 
> sequenceid=7
> 2017-07-11 09:34:53,757 WARN org.apache.hadoop.hbase.io.hfile.HFileWriterV2: 
> A minimum HFile version of 3 is required to support cell attributes/tags. 
> Consider setting hfile.format.version accordingly.
> 2017-07-11 09:34:54,504 INFO 
> org.apache.hadoop.hbase.mob.DefaultMobStoreFlusher: Flushed, sequenceid=7, 
> memsize=67109096, hasBloomFilter=true, into tmp file 
> hdfs://sandbox/hbase/data/default/poisonPill/f82e20f32302dfdd95c89ecc3be5a211/.tmp/7858d223eddd4199ad220fc77bb612eb
> 2017-07-11 09:34:54,694 ERROR org.apache.hadoop.hbase.regionserver.HStore: 
> Failed to open store file : 
> hdfs://sandbox/hbase/data/default/poisonPill/f82e20f32302dfdd95c89ecc3be5a211/.tmp/7858d223eddd4199ad220fc77bb612eb,
>  keeping it in tmp location
> org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile 
> Trailer from file 
> hdfs://sandbox/hbase/data/default/poisonPill/f82e20f32302dfdd95c89ecc3be5a211/.tmp/7858d223eddd4199ad220fc77bb612eb
>   at 
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>   at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1105)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:265)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:404)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:509)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:499)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:675)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:667)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.validateStoreFile(HStore.java:1746)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:942)
>   at 
> 

[jira] [Updated] (HBASE-18390) Sleep too long when finding region location failed

2017-07-16 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-18390:
--
Fix Version/s: 2.0.0-alpha-2
   3.0.0

> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.2.7, 2.0.0-alpha-2, 1.1.12
>
> Attachments: HBASE-18390.v01.patch
>
>
> If RegionServerCallable#prepare fails in getRegionLocation, the location in 
> this callable object is null, and we sleep before retrying. However, when the 
> location is null we sleep at least 10 seconds, so the request fails directly 
> if the operation timeout is less than 10 seconds. I think there is no need to 
> keep the MIN_WAIT_DEAD_SERVER logic; the backoff sleeping logic is fine for 
> most cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18390) Sleep too long when finding region location failed

2017-07-16 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-18390:
--
Affects Version/s: (was: 1.4.0)

> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.2.7, 2.0.0-alpha-2, 1.1.12
>
> Attachments: HBASE-18390.v01.patch
>
>
> If RegionServerCallable#prepare fails in getRegionLocation, the location in 
> this callable object is null, and we sleep before retrying. However, when the 
> location is null we sleep at least 10 seconds, so the request fails directly 
> if the operation timeout is less than 10 seconds. I think there is no need to 
> keep the MIN_WAIT_DEAD_SERVER logic; the backoff sleeping logic is fine for 
> most cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18390) Sleep too long when finding region location failed

2017-07-16 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-18390:
--
Fix Version/s: 1.1.12
   1.2.7
   1.3.2
   1.4.0

> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.1, 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.2.7, 2.0.0-alpha-2, 1.1.12
>
> Attachments: HBASE-18390.v01.patch
>
>
> If RegionServerCallable#prepare fails in getRegionLocation, the location in 
> this callable object is null, and we sleep before retrying. However, when the 
> location is null we sleep at least 10 seconds, so the request fails directly 
> if the operation timeout is less than 10 seconds. I think there is no need to 
> keep the MIN_WAIT_DEAD_SERVER logic; the backoff sleeping logic is fine for 
> most cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18390) Sleep too long when finding region location failed

2017-07-16 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-18390:
--
Affects Version/s: 1.4.0
   1.3.1
   1.2.6
   1.1.11
   2.0.0-alpha-1

> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0, 1.3.1, 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-18390.v01.patch
>
>
> If RegionServerCallable#prepare fails in getRegionLocation, the location in 
> this callable object is null, and we sleep before retrying. However, when the 
> location is null we sleep at least 10 seconds, so the request fails directly 
> if the operation timeout is less than 10 seconds. I think there is no need to 
> keep the MIN_WAIT_DEAD_SERVER logic; the backoff sleeping logic is fine for 
> most cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18390) Sleep too long when finding region location failed

2017-07-16 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-18390:
--
Description: If RegionServerCallable#prepare fails in getRegionLocation, the 
location in this callable object is null, and we sleep before retrying. 
However, when the location is null we sleep at least 10 seconds, so the 
request fails directly if the operation timeout is less than 10 seconds. I 
think there is no need to keep the MIN_WAIT_DEAD_SERVER logic; the backoff 
sleeping logic is fine for most cases.  (was: If RegionServerCallable#prepare 
failed when getRegionLocation, the location in this callable object is null. 
And before we retry we will sleep. However, when location is null we will sleep 
at least 10 seconds. I think it is no need to keep MIN_WAIT_DEAD_SERVER logic. 
Use backoff sleeping logic is ok for most cases.)

> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.4.0, 1.3.1, 1.2.6, 1.1.11, 2.0.0-alpha-1
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-18390.v01.patch
>
>
> If RegionServerCallable#prepare fails in getRegionLocation, the location in 
> this callable object is null, and we sleep before retrying. However, when the 
> location is null we sleep at least 10 seconds, so the request fails directly 
> if the operation timeout is less than 10 seconds. I think there is no need to 
> keep the MIN_WAIT_DEAD_SERVER logic; the backoff sleeping logic is fine for 
> most cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18390) Sleep too long when finding region location failed

2017-07-16 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-18390:
--
Status: Patch Available  (was: Open)

> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-18390.v01.patch
>
>
> If RegionServerCallable#prepare fails in getRegionLocation, the location in 
> this callable object is null, and we sleep before retrying. However, when the 
> location is null we sleep at least 10 seconds. I think there is no need to 
> keep the MIN_WAIT_DEAD_SERVER logic; the backoff sleeping logic is fine for 
> most cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18390) Sleep too long when finding region location failed

2017-07-16 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-18390:
--
Attachment: HBASE-18390.v01.patch

> Sleep too long when finding region location failed
> --
>
> Key: HBASE-18390
> URL: https://issues.apache.org/jira/browse/HBASE-18390
> Project: HBase
>  Issue Type: Bug
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-18390.v01.patch
>
>
> If RegionServerCallable#prepare fails in getRegionLocation, the location in 
> this callable object is null, and we sleep before retrying. However, when the 
> location is null we sleep at least 10 seconds. I think there is no need to 
> keep the MIN_WAIT_DEAD_SERVER logic; the backoff sleeping logic is fine for 
> most cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18389) Remove byte[] from sizeOf() of ClassSize, ClassSize.MemoryLayout and ClassSize.UnsafeLayout

2017-07-16 Thread Xiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-18389:
-
Description: 
In the ClassSize class and its internal static classes, the sizeOf() function 
has 2 formal parameters: byte[] b and int len. The function's internal logic 
does not use or refer to byte[] b, so it could be removed.

{code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java|borderStyle=solid}
// Class of ClassSize
public static long sizeOf(byte[] b, int len) {
  return memoryLayout.sizeOf(b, len);
}

// Class of ClassSize.MemoryLayout
long sizeOf(byte[] b, int len) {
  return align(arrayHeaderSize() + len);
}

// Class of ClassSize.UnsafeLayout
long sizeOf(byte[] b, int len) {
  return align(arrayHeaderSize() + len * 
UnsafeAccess.theUnsafe.ARRAY_BYTE_INDEX_SCALE);
}
{code}
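A hypothetical sketch of what the trimmed API could look like after the change; the class 
name, header constant, and align() helper below are illustrative only, while the real 
ClassSize keeps its own arrayHeaderSize() and alignment internals.

{code}
// Standalone illustration: only the length is needed to compute the aligned
// heap size of a byte[], so the byte[] parameter itself can be dropped.
final class ClassSizeSketch {
  private static final long ARRAY_HEADER_SIZE = 16L; // illustrative array header size

  private static long align(long num) {
    return (num + 7) & ~7L; // round up to an 8-byte boundary
  }

  static long sizeOfByteArray(int len) {
    return align(ARRAY_HEADER_SIZE + len);
  }
}
{code}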

  was:
In ClassSize class and its internal static class, sizeOf() function has 2 
formal parameters: byte[] b and int len. But the function's internal logic does 
not use or refer to byte[] b. Could be removed.




> Remove byte[] from sizeOf() of ClassSize, ClassSize.MemoryLayout and 
> ClassSize.UnsafeLayout
> ---
>
> Key: HBASE-18389
> URL: https://issues.apache.org/jira/browse/HBASE-18389
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
>
> In the ClassSize class and its internal static classes, the sizeOf() function 
> has 2 formal parameters: byte[] b and int len. The function's internal logic 
> does not use or refer to byte[] b, so it could be removed.
> {code:title=hbase-common/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java|borderStyle=solid}
> // Class of ClassSize
> public static long sizeOf(byte[] b, int len) {
>   return memoryLayout.sizeOf(b, len);
> }
> // Class of ClassSize.MemoryLayout
> long sizeOf(byte[] b, int len) {
>   return align(arrayHeaderSize() + len);
> }
> // Class of ClassSize.UnsafeLayout
> long sizeOf(byte[] b, int len) {
>   return align(arrayHeaderSize() + len * 
> UnsafeAccess.theUnsafe.ARRAY_BYTE_INDEX_SCALE);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18368) Filters with OR do not work

2017-07-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089273#comment-16089273
 ] 

Ted Yu commented on HBASE-18368:


+1

TestFiltersWithOr can be dropped at commit.

> Filters with OR do not work
> ---
>
> Key: HBASE-18368
> URL: https://issues.apache.org/jira/browse/HBASE-18368
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Allan Yang
>Priority: Critical
> Attachments: HBASE-18368.branch-1.patch, 
> HBASE-18368.branch-1.v2.patch, HBASE-18368.branch-1.v3.patch, 
> HBASE-18368.patch
>
>
> Scan gives back an incomplete list if multiple filters are combined with OR / 
> MUST_PASS_ONE.
> Using 2 FamilyFilters in a FilterList with the MUST_PASS_ONE operator will give 
> back results for only the first Filter.
> {code:java|title=Test code}
>   @Test
>   public void testFiltersWithOr() throws Exception {
> TableName tn = TableName.valueOf("MyTest");
> Table table = utility.createTable(tn, new String[] {"cf1", "cf2"});
> byte[] CF1 = Bytes.toBytes("cf1");
> byte[] CF2 = Bytes.toBytes("cf2");
> Put put1 = new Put(Bytes.toBytes("0"));
> put1.addColumn(CF1, Bytes.toBytes("col_a"), Bytes.toBytes(0));
> table.put(put1);
> Put put2 = new Put(Bytes.toBytes("0"));
> put2.addColumn(CF2, Bytes.toBytes("col_b"), Bytes.toBytes(0));
> table.put(put2);
> FamilyFilter filterCF1 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF1));
> FamilyFilter filterCF2 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF2));
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
> filterList.addFilter(filterCF1);
> filterList.addFilter(filterCF2);
> Scan scan = new Scan();
> scan.setFilter(filterList);
> ResultScanner scanner = table.getScanner(scan);
> System.out.println(filterList);
> for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>   System.out.println(rr);
> }
>   }
> {code}
> {noformat:title=Output}
> FilterList OR (2/2): [FamilyFilter (EQUAL, cf1), FamilyFilter (EQUAL, cf2)]
> keyvalues={0/cf1:col_a/1499852754957/Put/vlen=4/seqid=0}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18389) Remove byte[] from sizeOf() of ClassSize, ClassSize.MemoryLayout and ClassSize.UnsafeLayout

2017-07-16 Thread Xiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-18389:
-
Description: 
In the ClassSize class and its internal static classes, the sizeOf() function 
has 2 formal parameters: byte[] b and int len. The function's internal logic 
does not use or refer to byte[] b, so it could be removed.



> Remove byte[] from sizeOf() of ClassSize, ClassSize.MemoryLayout and 
> ClassSize.UnsafeLayout
> ---
>
> Key: HBASE-18389
> URL: https://issues.apache.org/jira/browse/HBASE-18389
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
>
> In the ClassSize class and its internal static classes, the sizeOf() function 
> has 2 formal parameters: byte[] b and int len. The function's internal logic 
> does not use or refer to byte[] b, so it could be removed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18390) Sleep too long when finding region location failed

2017-07-16 Thread Phil Yang (JIRA)
Phil Yang created HBASE-18390:
-

 Summary: Sleep too long when finding region location failed
 Key: HBASE-18390
 URL: https://issues.apache.org/jira/browse/HBASE-18390
 Project: HBase
  Issue Type: Bug
Reporter: Phil Yang
Assignee: Phil Yang


If RegionServerCallable#prepare fails in getRegionLocation, the location in 
this callable object is null, and we sleep before retrying. However, when the 
location is null we sleep at least 10 seconds. I think there is no need to 
keep the MIN_WAIT_DEAD_SERVER logic; the backoff sleeping logic is fine for 
most cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18389) Remove byte[] from sizeOf() of ClassSize, ClassSize.MemoryLayout and ClassSize.UnsafeLayout

2017-07-16 Thread Xiang Li (JIRA)
Xiang Li created HBASE-18389:


 Summary: Remove byte[] from sizeOf() of ClassSize, 
ClassSize.MemoryLayout and ClassSize.UnsafeLayout
 Key: HBASE-18389
 URL: https://issues.apache.org/jira/browse/HBASE-18389
 Project: HBase
  Issue Type: Bug
  Components: util
Reporter: Xiang Li
Assignee: Xiang Li
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18388) Fix description on region page, explaining what a region name is made of

2017-07-16 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089247#comment-16089247
 ] 

Yu Li commented on HBASE-18388:
---

Nice catch and good suggestions. Mind uploading a patch? [~larsgeorge]

[~misty] FYI sir.

> Fix description on region page, explaining what a region name is made of
> 
>
> Key: HBASE-18388
> URL: https://issues.apache.org/jira/browse/HBASE-18388
> Project: HBase
>  Issue Type: Improvement
>  Components: master, regionserver, UI
>Affects Versions: 1.3.1, 2.0.0-alpha-1
>Reporter: Lars George
>Priority: Minor
>  Labels: beginner
>
> In the {{RegionListTmpl.jamon}} we have this:
> {code}
> Region names are made of the containing table's name, a comma,
> the start key, a comma, and a randomly generated region id.  To 
> illustrate,
> the region named
> domains,apache.org,5464829424211263407 is party to the table
> domains, has an id of 5464829424211263407 and the first 
> key
> in the region is apache.org. The hbase:meta 'table' 
> is an internal
> system table (or a 'catalog' table in db-speak).
> The hbase:meta table keeps a list of all regions in the system. The empty 
> key is used to denote
> table start and table end.  A region with an empty start key is the first 
> region in a table.
> If a region has both an empty start key and an empty end key, it's the 
> only region in the
> table. See <a href="http://hbase.org">HBase Home</a> for further 
> explication.
> {code}
> This is wrong and worded oddly. What needs to be fixed, facts-wise, is:
> - Region names contain (separated by commas) the full table name (including 
> the namespace), the start key, the time the region was created, and finally a 
> dot with an MD5 hash of everything before the dot. For example: 
> {{test,,1499410125885.1544f69aeaf787755caa11d3567a9621.}}
> - The trailing dot is to distinguish legacy region names (like those used by 
> the {{hbase:meta}} table)
> - The MD5 hash is used as the directory name within the HBase storage 
> directories
> - The names for the meta table use a Jenkins hash instead, also leaving out 
> the trailing dot, for example {{hbase:meta,,1.1588230740}}. The time is 
> always set to {{1}}.
> - The start key is printed in safe characters, escaping unprintable characters
> - The link to the HBase home page to explain more is useless and should be 
> removed.
> - Also, for region replicas, the replica ID is inserted into the name, like 
> so {{replicatable,,1486289678486_0001.3e8b7655299b21b3038ff8d39062467f.}}, 
> see the {{_0001}} part.
> As for the wording, I would just make this all flow a little better; the "is 
> party to" phrasing sounds weird to me (IMHO).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17908) Upgrade guava

2017-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089170#comment-16089170
 ] 

Hadoop QA commented on HBASE-17908:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 210 new or modified 
test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  7m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  5m 
58s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hbase-testing-util hbase-spark-it hbase-assembly . {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
52s{color} | {color:red} hbase-server in master has 9 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} hbase-rest in master has 3 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
27s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
4s{color} | {color:red} hbase-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
58s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
5s{color} | {color:red} hbase-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
5s{color} | {color:red} hbase-endpoint in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
6s{color} | {color:red} hbase-examples in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
8s{color} | {color:red} hbase-external-blockcache in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
5s{color} | {color:red} hbase-hadoop-compat in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
5s{color} | {color:red} hbase-hadoop2-compat in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
5s{color} | {color:red} hbase-it in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
5s{color} | {color:red} hbase-metrics in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
5s{color} | {color:red} hbase-metrics-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
5s{color} | {color:red} hbase-prefix-tree in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
5s{color} | {color:red} hbase-procedure in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
7s{color} | {color:red} hbase-rest in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
5s{color} | {color:red} hbase-rsgroup in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
25s{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
7s{color} | {color:red} hbase-spark in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
6s{color} | 

[jira] [Updated] (HBASE-17908) Upgrade guava

2017-07-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17908:
--
Attachment: HBASE-17908.master.025.patch

> Upgrade guava
> -
>
> Key: HBASE-17908
> URL: https://issues.apache.org/jira/browse/HBASE-17908
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies
>Reporter: Balazs Meszaros
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 0001-HBASE-17908-Upgrade-guava.022.patch, 
> HBASE-17908.master.001.patch, HBASE-17908.master.002.patch, 
> HBASE-17908.master.003.patch, HBASE-17908.master.004.patch, 
> HBASE-17908.master.005.patch, HBASE-17908.master.006.patch, 
> HBASE-17908.master.007.patch, HBASE-17908.master.008.patch, 
> HBASE-17908.master.009.patch, HBASE-17908.master.010.patch, 
> HBASE-17908.master.011.patch, HBASE-17908.master.012.patch, 
> HBASE-17908.master.013.patch, HBASE-17908.master.013.patch, 
> HBASE-17908.master.014.patch, HBASE-17908.master.015.patch, 
> HBASE-17908.master.015.patch, HBASE-17908.master.016.patch, 
> HBASE-17908.master.017.patch, HBASE-17908.master.018.patch, 
> HBASE-17908.master.019.patch, HBASE-17908.master.020.patch, 
> HBASE-17908.master.021.patch, HBASE-17908.master.021.patch, 
> HBASE-17908.master.022.patch, HBASE-17908.master.023.patch, 
> HBASE-17908.master.024.patch, HBASE-17908.master.025.patch
>
>
> Currently we are using guava 12.0.1, but the latest version is 21.0. 
> Upgrading guava is always a hassle because it is not always backward 
> compatible with itself.
> Currently I think there are two approaches:
> 1. Upgrade guava to the newest version (21.0) and shade it.
> 2. Upgrade guava to a version which does not break our builds (15.0).
> If we can update it, some dependencies should be removed: 
> commons-collections, commons-codec, ...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17908) Upgrade guava

2017-07-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089147#comment-16089147
 ] 

stack commented on HBASE-17908:
---

.025 rebase.

> Upgrade guava
> -
>
> Key: HBASE-17908
> URL: https://issues.apache.org/jira/browse/HBASE-17908
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies
>Reporter: Balazs Meszaros
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 0001-HBASE-17908-Upgrade-guava.022.patch, 
> HBASE-17908.master.001.patch, HBASE-17908.master.002.patch, 
> HBASE-17908.master.003.patch, HBASE-17908.master.004.patch, 
> HBASE-17908.master.005.patch, HBASE-17908.master.006.patch, 
> HBASE-17908.master.007.patch, HBASE-17908.master.008.patch, 
> HBASE-17908.master.009.patch, HBASE-17908.master.010.patch, 
> HBASE-17908.master.011.patch, HBASE-17908.master.012.patch, 
> HBASE-17908.master.013.patch, HBASE-17908.master.013.patch, 
> HBASE-17908.master.014.patch, HBASE-17908.master.015.patch, 
> HBASE-17908.master.015.patch, HBASE-17908.master.016.patch, 
> HBASE-17908.master.017.patch, HBASE-17908.master.018.patch, 
> HBASE-17908.master.019.patch, HBASE-17908.master.020.patch, 
> HBASE-17908.master.021.patch, HBASE-17908.master.021.patch, 
> HBASE-17908.master.022.patch, HBASE-17908.master.023.patch, 
> HBASE-17908.master.024.patch, HBASE-17908.master.025.patch
>
>
> Currently we are using guava 12.0.1, but the latest version is 21.0. 
> Upgrading guava is always a hassle because it is not always backward 
> compatible with itself.
> Currently I think there are two approaches:
> 1. Upgrade guava to the newest version (21.0) and shade it.
> 2. Upgrade guava to a version which does not break our builds (15.0).
> If we can update it, some dependencies should be removed: 
> commons-collections, commons-codec, ...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18388) Fix description on region page, explaining what a region name is made of

2017-07-16 Thread Lars George (JIRA)
Lars George created HBASE-18388:
---

 Summary: Fix description on region page, explaining what a region 
name is made of
 Key: HBASE-18388
 URL: https://issues.apache.org/jira/browse/HBASE-18388
 Project: HBase
  Issue Type: Improvement
  Components: master, regionserver, UI
Affects Versions: 2.0.0-alpha-1, 1.3.1
Reporter: Lars George
Priority: Minor


In the {{RegionListTmpl.jamon}} we have this:

{code}
Region names are made of the containing table's name, a comma,
the start key, a comma, and a randomly generated region id.  To illustrate,
the region named
domains,apache.org,5464829424211263407 is party to the table
domains, has an id of 5464829424211263407 and the first 
key
in the region is apache.org. The hbase:meta 'table' is 
an internal
system table (or a 'catalog' table in db-speak).
The hbase:meta table keeps a list of all regions in the system. The empty 
key is used to denote
table start and table end.  A region with an empty start key is the first 
region in a table.
If a region has both an empty start key and an empty end key, it's the only 
region in the
table. See <a href="http://hbase.org">HBase Home</a> for further 
explication.
{code}

This is wrong and worded oddly. What needs to be fixed, facts-wise, is:

- Region names contain (separated by commas) the full table name (including the 
namespace), the start key, the time the region was created, and finally a dot 
with an MD5 hash of everything before the dot. For example: 
{{test,,1499410125885.1544f69aeaf787755caa11d3567a9621.}}
- The trailing dot is to distinguish legacy region names (like those used by 
the {{hbase:meta}} table)
- The MD5 hash is used as the directory name within the HBase storage 
directories
- The names for the meta table use a Jenkins hash instead, also leaving out the 
trailing dot, for example {{hbase:meta,,1.1588230740}}. The time is always set 
to {{1}}.
- The start key is printed in safe characters, escaping unprintable characters
- The link to the HBase home page to explain more is useless and should be 
removed.
- Also, for region replicas, the replica ID is inserted into the name, like so 
{{replicatable,,1486289678486_0001.3e8b7655299b21b3038ff8d39062467f.}}, see the 
{{_0001}} part.

As for the wording, I would just make this all flow a little better; the "is 
party to" phrasing sounds weird to me (IMHO).
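To make the naming scheme from the first bullet concrete, here is a small illustrative 
sketch. Assumptions: MD5Hash and Bytes are the real HBase utilities, the example table name 
and timestamp are made up, and the exact byte layout HBase hashes may differ slightly, so 
the printed hash is indicative of the scheme rather than a guaranteed match.

{code}
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.MD5Hash;

final class RegionNameSketch {
  public static void main(String[] args) {
    // <table>,<start key>,<creation time> followed by '.', the MD5 of that prefix, and a trailing '.'
    String prefix = "test," + "" + "," + 1499410125885L;  // empty start key: first region of the table
    String encoded = MD5Hash.getMD5AsHex(Bytes.toBytes(prefix)); // also used as the region directory name
    System.out.println(prefix + "." + encoded + ".");
  }
}
{code}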



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18387) [Thrift] Make principal configurable in DemoClient.java

2017-07-16 Thread Lars George (JIRA)
Lars George created HBASE-18387:
---

 Summary: [Thrift] Make principal configurable in DemoClient.java
 Key: HBASE-18387
 URL: https://issues.apache.org/jira/browse/HBASE-18387
 Project: HBase
  Issue Type: Improvement
Reporter: Lars George
Priority: Minor


In the Thrift1 demo client we have this code:

{code}
transport = new TSaslClientTransport("GSSAPI", null,
  "hbase", // Thrift server user name, should be an authorized proxy user.
  host, // Thrift server domain
  saslProperties, null, transport);
{code}

This will only work when the Thrift server is started with the {{hbase}} 
principal. Often this deviates; for example, I am using {{hbase-thrift}} to 
separate these names from those of the backend servers. 

What we need is either an additional command-line option to specify the name, 
or a property that can be set with -D and passed at runtime. I prefer the 
former, as the latter makes this a little convoluted.
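A minimal sketch of the -D variant mentioned above; the property name and class name are 
hypothetical, and {{hbase}} stays the default. The resolved value would replace the 
hard-coded "hbase" argument in the TSaslClientTransport call quoted earlier.

{code}
final class DemoClientPrincipal {
  // Hypothetical system property; defaults to the conventional "hbase" short name.
  static String thriftServerPrincipal() {
    return System.getProperty("demo.thrift.server.principal", "hbase");
  }
}
{code}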



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18368) Filters with OR do not work

2017-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089054#comment-16089054
 ] 

Hadoop QA commented on HBASE-18368:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
45s{color} | {color:red} hbase-server in master has 9 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m  6s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
43s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}113m  
8s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-18368 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877476/HBASE-18368.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 61825c12f7ae 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 2d5a0fb |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7668/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7668/testReport/ |
| modules | C: hbase-client hbase-server U: . |
| Console output | 

[jira] [Commented] (HBASE-18368) Filters with OR do not work

2017-07-16 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088992#comment-16088992
 ] 

Chia-Ping Tsai commented on HBASE-18368:


bq. I am fine with not using minicluster for test (if Chia-ping thinks so too).
I am fine without the original test. Let us focus on the bug. The ghost of data 
inconsistency still lingers in our dear HBase.

> Filters with OR do not work
> ---
>
> Key: HBASE-18368
> URL: https://issues.apache.org/jira/browse/HBASE-18368
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Allan Yang
>Priority: Critical
> Attachments: HBASE-18368.branch-1.patch, 
> HBASE-18368.branch-1.v2.patch, HBASE-18368.branch-1.v3.patch, 
> HBASE-18368.patch
>
>
> Scan gives back an incomplete list if multiple filters are combined with OR / 
> MUST_PASS_ONE.
> Using 2 FamilyFilters in a FilterList with the MUST_PASS_ONE operator will give 
> back results for only the first Filter.
> {code:java|title=Test code}
>   @Test
>   public void testFiltersWithOr() throws Exception {
> TableName tn = TableName.valueOf("MyTest");
> Table table = utility.createTable(tn, new String[] {"cf1", "cf2"});
> byte[] CF1 = Bytes.toBytes("cf1");
> byte[] CF2 = Bytes.toBytes("cf2");
> Put put1 = new Put(Bytes.toBytes("0"));
> put1.addColumn(CF1, Bytes.toBytes("col_a"), Bytes.toBytes(0));
> table.put(put1);
> Put put2 = new Put(Bytes.toBytes("0"));
> put2.addColumn(CF2, Bytes.toBytes("col_b"), Bytes.toBytes(0));
> table.put(put2);
> FamilyFilter filterCF1 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF1));
> FamilyFilter filterCF2 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF2));
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
> filterList.addFilter(filterCF1);
> filterList.addFilter(filterCF2);
> Scan scan = new Scan();
> scan.setFilter(filterList);
> ResultScanner scanner = table.getScanner(scan);
> System.out.println(filterList);
> for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>   System.out.println(rr);
> }
>   }
> {code}
> {noformat:title=Output}
> FilterList OR (2/2): [FamilyFilter (EQUAL, cf1), FamilyFilter (EQUAL, cf2)]
> keyvalues={0/cf1:col_a/1499852754957/Put/vlen=4/seqid=0}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18368) Filters with OR do not work

2017-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088979#comment-16088979
 ] 

Hadoop QA commented on HBASE-18368:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
45s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} branch-1 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} branch-1 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
48s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
58s{color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
2s{color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
57s{color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} branch-1 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} branch-1 passed with JDK v1.7.0_131 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} hbase-client in the patch passed with JDK 
v1.8.0_131. {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} hbase-server-jdk1.8.0_131 with JDK v1.8.0_131 
generated 0 new + 5 unchanged - 5 fixed = 5 total (was 10) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} hbase-client in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} hbase-server-jdk1.7.0_131 with JDK v1.7.0_131 
generated 0 new + 5 unchanged - 5 fixed = 5 total (was 10) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
15m 37s{color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} hbase-client-jdk1.8.0_131 with JDK v1.8.0_131 
generated 0 new + 13 unchanged - 13 fixed = 13 total (was 26) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} hbase-server-jdk1.8.0_131 with JDK v1.8.0_131 
generated 0 new + 3 unchanged - 3 fixed = 3 total 

[jira] [Commented] (HBASE-18368) Filters with OR do not work

2017-07-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088968#comment-16088968
 ] 

Ted Yu commented on HBASE-18368:


Modified previous comment.

I am fine with not using minicluster for test (if Chia-ping thinks so too).

> Filters with OR do not work
> ---
>
> Key: HBASE-18368
> URL: https://issues.apache.org/jira/browse/HBASE-18368
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Allan Yang
>Priority: Critical
> Attachments: HBASE-18368.branch-1.patch, 
> HBASE-18368.branch-1.v2.patch, HBASE-18368.branch-1.v3.patch, 
> HBASE-18368.patch
>
>
> Scan gives back incomplete list if multiple filters are combined with OR / 
> MUST_PASS_ONE.
> Using 2 FamilyFilters in a FilterList using MUST_PASS_ONE operator will give 
> back results for only the first Filter.
> {code:java|title=Test code}
>   @Test
>   public void testFiltersWithOr() throws Exception {
> TableName tn = TableName.valueOf("MyTest");
> Table table = utility.createTable(tn, new String[] {"cf1", "cf2"});
> byte[] CF1 = Bytes.toBytes("cf1");
> byte[] CF2 = Bytes.toBytes("cf2");
> Put put1 = new Put(Bytes.toBytes("0"));
> put1.addColumn(CF1, Bytes.toBytes("col_a"), Bytes.toBytes(0));
> table.put(put1);
> Put put2 = new Put(Bytes.toBytes("0"));
> put2.addColumn(CF2, Bytes.toBytes("col_b"), Bytes.toBytes(0));
> table.put(put2);
> FamilyFilter filterCF1 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF1));
> FamilyFilter filterCF2 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF2));
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
> filterList.addFilter(filterCF1);
> filterList.addFilter(filterCF2);
> Scan scan = new Scan();
> scan.setFilter(filterList);
> ResultScanner scanner = table.getScanner(scan);
> System.out.println(filterList);
> for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>   System.out.println(rr);
> }
>   }
> {code}
> {noformat:title=Output}
> FilterList OR (2/2): [FamilyFilter (EQUAL, cf1), FamilyFilter (EQUAL, cf2)]
> keyvalues={0/cf1:col_a/1499852754957/Put/vlen=4/seqid=0}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-18368) Filters with OR do not work

2017-07-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088949#comment-16088949
 ] 

Ted Yu edited comment on HBASE-18368 at 7/16/17 3:53 PM:
-

Edit: I ran TestFilterWithScanLimits#testFiltersWithOr in the master branch 
based on commit 353627b39de73020dd2448b54c0f13f6902b19bf.
It didn't have an assertion, so please disregard.

TestFilterList#testFamilyFilterWithMustPassOne fails without the patch.



was (Author: yuzhih...@gmail.com):
I ran the test from your previous patch in master branch based on commit 
353627b39de73020dd2448b54c0f13f6902b19bf .
It passed.

Did you change the test in v3 ?

Please include Peter's test in your next patch (for master branch).

> Filters with OR do not work
> ---
>
> Key: HBASE-18368
> URL: https://issues.apache.org/jira/browse/HBASE-18368
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Allan Yang
>Priority: Critical
> Attachments: HBASE-18368.branch-1.patch, 
> HBASE-18368.branch-1.v2.patch, HBASE-18368.branch-1.v3.patch, 
> HBASE-18368.patch
>
>
> Scan gives back incomplete list if multiple filters are combined with OR / 
> MUST_PASS_ONE.
> Using 2 FamilyFilters in a FilterList using MUST_PASS_ONE operator will give 
> back results for only the first Filter.
> {code:java|title=Test code}
>   @Test
>   public void testFiltersWithOr() throws Exception {
> TableName tn = TableName.valueOf("MyTest");
> Table table = utility.createTable(tn, new String[] {"cf1", "cf2"});
> byte[] CF1 = Bytes.toBytes("cf1");
> byte[] CF2 = Bytes.toBytes("cf2");
> Put put1 = new Put(Bytes.toBytes("0"));
> put1.addColumn(CF1, Bytes.toBytes("col_a"), Bytes.toBytes(0));
> table.put(put1);
> Put put2 = new Put(Bytes.toBytes("0"));
> put2.addColumn(CF2, Bytes.toBytes("col_b"), Bytes.toBytes(0));
> table.put(put2);
> FamilyFilter filterCF1 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF1));
> FamilyFilter filterCF2 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF2));
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
> filterList.addFilter(filterCF1);
> filterList.addFilter(filterCF2);
> Scan scan = new Scan();
> scan.setFilter(filterList);
> ResultScanner scanner = table.getScanner(scan);
> System.out.println(filterList);
> for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>   System.out.println(rr);
> }
>   }
> {code}
> {noformat:title=Output}
> FilterList OR (2/2): [FamilyFilter (EQUAL, cf1), FamilyFilter (EQUAL, cf2)]
> keyvalues={0/cf1:col_a/1499852754957/Put/vlen=4/seqid=0}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18052) Add document for async admin

2017-07-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088960#comment-16088960
 ] 

Hudson commented on HBASE-18052:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #3383 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3383/])
HBASE-18052 Add document for async admin (zghao: rev 
2d5a0fbd16ddd9d46ab3f72cabd06a853df4916b)
* (edit) src/main/asciidoc/_chapters/architecture.adoc


> Add document for async admin
> 
>
> Key: HBASE-18052
> URL: https://issues.apache.org/jira/browse/HBASE-18052
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18052.master.001.patch, 
> HBASE-18052.master.002.patch, HBASE-18052.master.003.patch, 
> HBASE-18052.master.004.patch, HBASE-18052.master.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18368) Filters with OR do not work

2017-07-16 Thread Allan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-18368:
---
Attachment: HBASE-18368.patch

> Filters with OR do not work
> ---
>
> Key: HBASE-18368
> URL: https://issues.apache.org/jira/browse/HBASE-18368
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Allan Yang
>Priority: Critical
> Attachments: HBASE-18368.branch-1.patch, 
> HBASE-18368.branch-1.v2.patch, HBASE-18368.branch-1.v3.patch, 
> HBASE-18368.patch
>
>
> Scan gives back incomplete list if multiple filters are combined with OR / 
> MUST_PASS_ONE.
> Using 2 FamilyFilters in a FilterList using MUST_PASS_ONE operator will give 
> back results for only the first Filter.
> {code:java|title=Test code}
>   @Test
>   public void testFiltersWithOr() throws Exception {
> TableName tn = TableName.valueOf("MyTest");
> Table table = utility.createTable(tn, new String[] {"cf1", "cf2"});
> byte[] CF1 = Bytes.toBytes("cf1");
> byte[] CF2 = Bytes.toBytes("cf2");
> Put put1 = new Put(Bytes.toBytes("0"));
> put1.addColumn(CF1, Bytes.toBytes("col_a"), Bytes.toBytes(0));
> table.put(put1);
> Put put2 = new Put(Bytes.toBytes("0"));
> put2.addColumn(CF2, Bytes.toBytes("col_b"), Bytes.toBytes(0));
> table.put(put2);
> FamilyFilter filterCF1 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF1));
> FamilyFilter filterCF2 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF2));
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
> filterList.addFilter(filterCF1);
> filterList.addFilter(filterCF2);
> Scan scan = new Scan();
> scan.setFilter(filterList);
> ResultScanner scanner = table.getScanner(scan);
> System.out.println(filterList);
> for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>   System.out.println(rr);
> }
>   }
> {code}
> {noformat:title=Output}
> FilterList OR (2/2): [FamilyFilter (EQUAL, cf1), FamilyFilter (EQUAL, cf2)]
> keyvalues={0/cf1:col_a/1499852754957/Put/vlen=4/seqid=0}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18368) Filters with OR do not work

2017-07-16 Thread Allan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088959#comment-16088959
 ] 

Allan Yang commented on HBASE-18368:


{quote}
I ran the test from your previous patch in master branch based on commit 
353627b39de73020dd2448b54c0f13f6902b19bf .
It passed.
{quote}
That is very strange. I have updated a patch for the master branch; Peter's 
original test is included (with an assert added so the test can fail, since 
Peter's original one only prints the result). If FilterList.java is not patched, 
this test and my UT should fail. You can try the patch, [~tedyu].
But I still think we should not include the original test: just to test the 
function of filters, starting a mini cluster and then writing some data is too 
'heavy'.
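
A minimal, cluster-free sketch of such a check, driving the FilterList directly 
with in-memory KeyValues (the class name and assertions here are illustrative, 
not the UT in the attached patch):

{code:java|title=Cluster-free sketch (illustrative)}
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.FamilyFilter;
import org.apache.hadoop.hbase.filter.Filter.ReturnCode;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.util.Bytes;
import org.junit.Test;

public class TestFamilyFilterOrSketch {
  @Test
  public void testOrOfTwoFamilyFiltersIncludesBothFamilies() throws Exception {
    byte[] row = Bytes.toBytes("0");
    KeyValue cf1Cell = new KeyValue(row, Bytes.toBytes("cf1"),
        Bytes.toBytes("col_a"), Bytes.toBytes(0));
    KeyValue cf2Cell = new KeyValue(row, Bytes.toBytes("cf2"),
        Bytes.toBytes("col_b"), Bytes.toBytes(0));

    FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
    filterList.addFilter(new FamilyFilter(CompareOp.EQUAL,
        new BinaryComparator(Bytes.toBytes("cf1"))));
    filterList.addFilter(new FamilyFilter(CompareOp.EQUAL,
        new BinaryComparator(Bytes.toBytes("cf2"))));

    // Both cells share the same row; with MUST_PASS_ONE neither should be dropped.
    assertEquals(ReturnCode.INCLUDE, filterList.filterKeyValue(cf1Cell));
    assertEquals(ReturnCode.INCLUDE, filterList.filterKeyValue(cf2Cell));
  }
}
{code}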

> Filters with OR do not work
> ---
>
> Key: HBASE-18368
> URL: https://issues.apache.org/jira/browse/HBASE-18368
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Allan Yang
>Priority: Critical
> Attachments: HBASE-18368.branch-1.patch, 
> HBASE-18368.branch-1.v2.patch, HBASE-18368.branch-1.v3.patch, 
> HBASE-18368.patch
>
>
> Scan gives back incomplete list if multiple filters are combined with OR / 
> MUST_PASS_ONE.
> Using 2 FamilyFilters in a FilterList using MUST_PASS_ONE operator will give 
> back results for only the first Filter.
> {code:java|title=Test code}
>   @Test
>   public void testFiltersWithOr() throws Exception {
> TableName tn = TableName.valueOf("MyTest");
> Table table = utility.createTable(tn, new String[] {"cf1", "cf2"});
> byte[] CF1 = Bytes.toBytes("cf1");
> byte[] CF2 = Bytes.toBytes("cf2");
> Put put1 = new Put(Bytes.toBytes("0"));
> put1.addColumn(CF1, Bytes.toBytes("col_a"), Bytes.toBytes(0));
> table.put(put1);
> Put put2 = new Put(Bytes.toBytes("0"));
> put2.addColumn(CF2, Bytes.toBytes("col_b"), Bytes.toBytes(0));
> table.put(put2);
> FamilyFilter filterCF1 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF1));
> FamilyFilter filterCF2 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF2));
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
> filterList.addFilter(filterCF1);
> filterList.addFilter(filterCF2);
> Scan scan = new Scan();
> scan.setFilter(filterList);
> ResultScanner scanner = table.getScanner(scan);
> System.out.println(filterList);
> for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>   System.out.println(rr);
> }
>   }
> {code}
> {noformat:title=Output}
> FilterList OR (2/2): [FamilyFilter (EQUAL, cf1), FamilyFilter (EQUAL, cf2)]
> keyvalues={0/cf1:col_a/1499852754957/Put/vlen=4/seqid=0}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18368) Filters with OR do not work

2017-07-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088949#comment-16088949
 ] 

Ted Yu commented on HBASE-18368:


I ran the test from your previous patch in master branch based on commit 
353627b39de73020dd2448b54c0f13f6902b19bf .
It passed.

Did you change the test in v3 ?

Please include Peter's test in your next patch (for master branch).

> Filters with OR do not work
> ---
>
> Key: HBASE-18368
> URL: https://issues.apache.org/jira/browse/HBASE-18368
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Allan Yang
>Priority: Critical
> Attachments: HBASE-18368.branch-1.patch, 
> HBASE-18368.branch-1.v2.patch, HBASE-18368.branch-1.v3.patch
>
>
> Scan gives back incomplete list if multiple filters are combined with OR / 
> MUST_PASS_ONE.
> Using 2 FamilyFilters in a FilterList using MUST_PASS_ONE operator will give 
> back results for only the first Filter.
> {code:java|title=Test code}
>   @Test
>   public void testFiltersWithOr() throws Exception {
> TableName tn = TableName.valueOf("MyTest");
> Table table = utility.createTable(tn, new String[] {"cf1", "cf2"});
> byte[] CF1 = Bytes.toBytes("cf1");
> byte[] CF2 = Bytes.toBytes("cf2");
> Put put1 = new Put(Bytes.toBytes("0"));
> put1.addColumn(CF1, Bytes.toBytes("col_a"), Bytes.toBytes(0));
> table.put(put1);
> Put put2 = new Put(Bytes.toBytes("0"));
> put2.addColumn(CF2, Bytes.toBytes("col_b"), Bytes.toBytes(0));
> table.put(put2);
> FamilyFilter filterCF1 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF1));
> FamilyFilter filterCF2 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF2));
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
> filterList.addFilter(filterCF1);
> filterList.addFilter(filterCF2);
> Scan scan = new Scan();
> scan.setFilter(filterList);
> ResultScanner scanner = table.getScanner(scan);
> System.out.println(filterList);
> for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>   System.out.println(rr);
> }
>   }
> {code}
> {noformat:title=Output}
> FilterList OR (2/2): [FamilyFilter (EQUAL, cf1), FamilyFilter (EQUAL, cf2)]
> keyvalues={0/cf1:col_a/1499852754957/Put/vlen=4/seqid=0}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18368) Filters with OR do not work

2017-07-16 Thread Allan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-18368:
---
Attachment: HBASE-18368.branch-1.v3.patch

> Filters with OR do not work
> ---
>
> Key: HBASE-18368
> URL: https://issues.apache.org/jira/browse/HBASE-18368
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Allan Yang
>Priority: Critical
> Attachments: HBASE-18368.branch-1.patch, 
> HBASE-18368.branch-1.v2.patch, HBASE-18368.branch-1.v3.patch
>
>
> Scan gives back incomplete list if multiple filters are combined with OR / 
> MUST_PASS_ONE.
> Using 2 FamilyFilters in a FilterList using MUST_PASS_ONE operator will give 
> back results for only the first Filter.
> {code:java|title=Test code}
>   @Test
>   public void testFiltersWithOr() throws Exception {
> TableName tn = TableName.valueOf("MyTest");
> Table table = utility.createTable(tn, new String[] {"cf1", "cf2"});
> byte[] CF1 = Bytes.toBytes("cf1");
> byte[] CF2 = Bytes.toBytes("cf2");
> Put put1 = new Put(Bytes.toBytes("0"));
> put1.addColumn(CF1, Bytes.toBytes("col_a"), Bytes.toBytes(0));
> table.put(put1);
> Put put2 = new Put(Bytes.toBytes("0"));
> put2.addColumn(CF2, Bytes.toBytes("col_b"), Bytes.toBytes(0));
> table.put(put2);
> FamilyFilter filterCF1 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF1));
> FamilyFilter filterCF2 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF2));
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
> filterList.addFilter(filterCF1);
> filterList.addFilter(filterCF2);
> Scan scan = new Scan();
> scan.setFilter(filterList);
> ResultScanner scanner = table.getScanner(scan);
> System.out.println(filterList);
> for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>   System.out.println(rr);
> }
>   }
> {code}
> {noformat:title=Output}
> FilterList OR (2/2): [FamilyFilter (EQUAL, cf1), FamilyFilter (EQUAL, cf2)]
> keyvalues={0/cf1:col_a/1499852754957/Put/vlen=4/seqid=0}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18368) Filters with OR do not work

2017-07-16 Thread Allan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088947#comment-16088947
 ] 

Allan Yang commented on HBASE-18368:


[~anoop.hbase] you are right, matchingRowColumn has already checked families.
Updated a v3 patch. Only the logic for the NEXT_ROW return code needs to be 
changed. As I said before, the root cause of this bug is that HBASE-17678 
introduced a list to record each sub-filter's previous return code and cell.
If the filter list is MUST_PASS_ONE and one filter has returned NEXT_ROW, then 
when the next cell arriving at that filter has the same row as the previous one, 
the cell can bypass the filter and be skipped, since the filter asked for the 
"next row".
But this is not the case for FamilyFilter: HBASE-13122 introduced an 
optimization for it, so instead of returning "SKIP" when the family does not 
match, it returns "NEXT_ROW".

So my way to fix this bug is to check the family before we bypass the filter.
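
A minimal sketch of that check as a standalone helper; the actual change lives 
inside FilterList#filterKeyValue, and the per-filter bookkeeping of previous 
return codes and cells added by HBASE-17678 is not shown:

{code:java|title=Hedged sketch of the bypass condition (names illustrative)}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.filter.Filter.ReturnCode;

final class NextRowBypassSketch {
  private NextRowBypassSketch() {}

  /**
   * A sub-filter that previously answered NEXT_ROW may only be bypassed for the
   * current cell when both the row AND the family match the previously skipped
   * cell. Matching the row alone is not enough, because FamilyFilter answers
   * NEXT_ROW on a family mismatch (HBASE-13122).
   */
  static boolean canBypass(ReturnCode prevCode, Cell prevCell, Cell current) {
    return prevCode == ReturnCode.NEXT_ROW
        && prevCell != null
        && CellUtil.matchingRow(prevCell, current)
        && CellUtil.matchingFamily(prevCell, current);
  }
}
{code}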


> Filters with OR do not work
> ---
>
> Key: HBASE-18368
> URL: https://issues.apache.org/jira/browse/HBASE-18368
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Allan Yang
>Priority: Critical
> Attachments: HBASE-18368.branch-1.patch, 
> HBASE-18368.branch-1.v2.patch, HBASE-18368.branch-1.v3.patch
>
>
> Scan gives back incomplete list if multiple filters are combined with OR / 
> MUST_PASS_ONE.
> Using 2 FamilyFilters in a FilterList using MUST_PASS_ONE operator will give 
> back results for only the first Filter.
> {code:java|title=Test code}
>   @Test
>   public void testFiltersWithOr() throws Exception {
> TableName tn = TableName.valueOf("MyTest");
> Table table = utility.createTable(tn, new String[] {"cf1", "cf2"});
> byte[] CF1 = Bytes.toBytes("cf1");
> byte[] CF2 = Bytes.toBytes("cf2");
> Put put1 = new Put(Bytes.toBytes("0"));
> put1.addColumn(CF1, Bytes.toBytes("col_a"), Bytes.toBytes(0));
> table.put(put1);
> Put put2 = new Put(Bytes.toBytes("0"));
> put2.addColumn(CF2, Bytes.toBytes("col_b"), Bytes.toBytes(0));
> table.put(put2);
> FamilyFilter filterCF1 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF1));
> FamilyFilter filterCF2 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF2));
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
> filterList.addFilter(filterCF1);
> filterList.addFilter(filterCF2);
> Scan scan = new Scan();
> scan.setFilter(filterList);
> ResultScanner scanner = table.getScanner(scan);
> System.out.println(filterList);
> for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>   System.out.println(rr);
> }
>   }
> {code}
> {noformat:title=Output}
> FilterList OR (2/2): [FamilyFilter (EQUAL, cf1), FamilyFilter (EQUAL, cf2)]
> keyvalues={0/cf1:col_a/1499852754957/Put/vlen=4/seqid=0}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18386) When the zookeeper host cannot be resolved, provide better error message

2017-07-16 Thread Ted Yu (JIRA)
Ted Yu created HBASE-18386:
--

 Summary: When the zookeeper host cannot be resolved, provide 
better error message
 Key: HBASE-18386
 URL: https://issues.apache.org/jira/browse/HBASE-18386
 Project: HBase
  Issue Type: Sub-task
Reporter: Ted Yu
Priority: Minor


Currently, when the zookeeper host cannot be resolved, we would see:
{code}
2017-07-15 19:42:09,367:3811(0x7f9ee2ffd700):ZOO_ERROR@getaddrs@613: 
getaddrinfo: No such file or directory

*** Aborted at 1500147729 (unix time) try "date -d @1500147729" if you are 
using GNU date ***
PC: @   0x6f4c28 zoo_get
*** SIGSEGV (@0x260) received by PID 3811 (TID 0x7f9ee2ffd700) from PID 608; 
stack trace: ***
@ 0x7f9eee0923d0 (unknown)
@   0x6f4c28 zoo_get
@   0xb4 hbase::LocationCache::ReadMetaLocation()
@   0x449ec4 std::_Function_handler<>::_M_invoke()
@   0x55ec12 wangle::ThreadPoolExecutor::runTask()
@   0x54dfca wangle::CPUThreadPoolExecutor::threadRun()
@   0x55f892 std::_Function_handler<>::_M_invoke()
{code}
There should be a more intuitive error message so that the user knows what to do next.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18368) Filters with OR do not work

2017-07-16 Thread Allan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088943#comment-16088943
 ] 

Allan Yang commented on HBASE-18368:


[~tedyu], sorry for the delay. Have you updated your local branch so that 
HBASE-17678 is included? I just tried Peter's UT in the master branch and it failed.
Every branch HBASE-17678 has been committed to should fail Peter's UT, and so 
should my UT in the patch; they test the same logic.

> Filters with OR do not work
> ---
>
> Key: HBASE-18368
> URL: https://issues.apache.org/jira/browse/HBASE-18368
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Allan Yang
>Priority: Critical
> Attachments: HBASE-18368.branch-1.patch, HBASE-18368.branch-1.v2.patch
>
>
> Scan gives back incomplete list if multiple filters are combined with OR / 
> MUST_PASS_ONE.
> Using 2 FamilyFilters in a FilterList using MUST_PASS_ONE operator will give 
> back results for only the first Filter.
> {code:java|title=Test code}
>   @Test
>   public void testFiltersWithOr() throws Exception {
> TableName tn = TableName.valueOf("MyTest");
> Table table = utility.createTable(tn, new String[] {"cf1", "cf2"});
> byte[] CF1 = Bytes.toBytes("cf1");
> byte[] CF2 = Bytes.toBytes("cf2");
> Put put1 = new Put(Bytes.toBytes("0"));
> put1.addColumn(CF1, Bytes.toBytes("col_a"), Bytes.toBytes(0));
> table.put(put1);
> Put put2 = new Put(Bytes.toBytes("0"));
> put2.addColumn(CF2, Bytes.toBytes("col_b"), Bytes.toBytes(0));
> table.put(put2);
> FamilyFilter filterCF1 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF1));
> FamilyFilter filterCF2 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF2));
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
> filterList.addFilter(filterCF1);
> filterList.addFilter(filterCF2);
> Scan scan = new Scan();
> scan.setFilter(filterList);
> ResultScanner scanner = table.getScanner(scan);
> System.out.println(filterList);
> for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>   System.out.println(rr);
> }
>   }
> {code}
> {noformat:title=Output}
> FilterList OR (2/2): [FamilyFilter (EQUAL, cf1), FamilyFilter (EQUAL, cf2)]
> keyvalues={0/cf1:col_a/1499852754957/Put/vlen=4/seqid=0}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18213) Add documentation about the new async client

2017-07-16 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088913#comment-16088913
 ] 

Chia-Ping Tsai commented on HBASE-18213:


bq. I'm not sure but in the past I only committed doc changes to master.
Should we remove the tag "2.0.0-alpha-2" from the fix version?


> Add documentation about the new async client
> 
>
> Key: HBASE-18213
> URL: https://issues.apache.org/jira/browse/HBASE-18213
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, documentation
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18213.patch, HBASE-18213-v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18213) Add documentation about the new async client

2017-07-16 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088898#comment-16088898
 ] 

Duo Zhang commented on HBASE-18213:
---

I'm not sure but in the past I only committed doc changes to master. I think 
you can send an email to the dev and user list to discuss this.

Thanks.

> Add documentation about the new async client
> 
>
> Key: HBASE-18213
> URL: https://issues.apache.org/jira/browse/HBASE-18213
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, documentation
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18213.patch, HBASE-18213-v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18213) Add documentation about the new async client

2017-07-16 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088891#comment-16088891
 ] 

Guanghao Zhang commented on HBASE-18213:


But when users are on HBase 2.0, they may still need the generated documentation?

> Add documentation about the new async client
> 
>
> Key: HBASE-18213
> URL: https://issues.apache.org/jira/browse/HBASE-18213
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, documentation
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18213.patch, HBASE-18213-v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-17359) Implement async admin

2017-07-16 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang resolved HBASE-17359.

   Resolution: Fixed
Fix Version/s: (was: 2.0.0)
   2.0.0-alpha-2
   3.0.0

All sub-tasks have been resolved.

> Implement async admin
> -
>
> Key: HBASE-17359
> URL: https://issues.apache.org/jira/browse/HBASE-17359
> Project: HBase
>  Issue Type: Umbrella
>  Components: Client
>Reporter: Duo Zhang
>Assignee: Guanghao Zhang
>  Labels: asynchronous
> Fix For: 3.0.0, 2.0.0-alpha-2
>
>
> And as we will return a CompletableFuture, I think we can just remove the 
> XXXAsync methods, and make all the methods blocking which means we will only 
> finish the CompletableFuture when the operation is done. User can choose 
> whether to wait on the returned CompletableFuture.
> Convert this to a umbrella task. There maybe some sub-tasks.
> 1. Table admin operations.
> 2. Region admin operations.
> 3. Namespace admin operations.
> 4. Snapshot admin operations.
> 5. Replication admin operations.
> 6. Other operations, like quota, balance..
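
A minimal usage sketch of that style, assuming an AsyncConnection is already 
available and using tableExists() as a representative admin call (treat the 
exact method names as illustrative):

{code:java|title=Hedged usage sketch}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.AsyncAdmin;
import org.apache.hadoop.hbase.client.AsyncConnection;

final class AsyncAdminUsageSketch {
  private AsyncAdminUsageSketch() {}

  static void checkTable(AsyncConnection conn) {
    AsyncAdmin admin = conn.getAdmin();
    admin.tableExists(TableName.valueOf("test"))
        // Non-blocking continuation runs when the operation completes.
        .thenAccept(exists -> System.out.println("test exists: " + exists))
        // Block here only if the caller chooses to wait on the CompletableFuture.
        .join();
  }
}
{code}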



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18052) Add document for async admin

2017-07-16 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18052:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master. Thanks [~Apache9] for reviewing.

> Add document for async admin
> 
>
> Key: HBASE-18052
> URL: https://issues.apache.org/jira/browse/HBASE-18052
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18052.master.001.patch, 
> HBASE-18052.master.002.patch, HBASE-18052.master.003.patch, 
> HBASE-18052.master.004.patch, HBASE-18052.master.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18213) Add documentation about the new async client

2017-07-16 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088858#comment-16088858
 ] 

Duo Zhang commented on HBASE-18213:
---

No. We only generate doc from the master branch.

> Add documentation about the new async client
> 
>
> Key: HBASE-18213
> URL: https://issues.apache.org/jira/browse/HBASE-18213
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, documentation
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18213.patch, HBASE-18213-v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18213) Add documentation about the new async client

2017-07-16 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088856#comment-16088856
 ] 

Guanghao Zhang commented on HBASE-18213:


[~Apache9] This patch wasn't pushed to branch-2?

> Add documentation about the new async client
> 
>
> Key: HBASE-18213
> URL: https://issues.apache.org/jira/browse/HBASE-18213
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, documentation
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18213.patch, HBASE-18213-v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18052) Add document for async admin

2017-07-16 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088841#comment-16088841
 ] 

Guanghao Zhang commented on HBASE-18052:


Attached a 005 patch which addresses the review comments.

> Add document for async admin
> 
>
> Key: HBASE-18052
> URL: https://issues.apache.org/jira/browse/HBASE-18052
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18052.master.001.patch, 
> HBASE-18052.master.002.patch, HBASE-18052.master.003.patch, 
> HBASE-18052.master.004.patch, HBASE-18052.master.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18052) Add document for async admin

2017-07-16 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18052:
---
Attachment: HBASE-18052.master.005.patch

> Add document for async admin
> 
>
> Key: HBASE-18052
> URL: https://issues.apache.org/jira/browse/HBASE-18052
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18052.master.001.patch, 
> HBASE-18052.master.002.patch, HBASE-18052.master.003.patch, 
> HBASE-18052.master.004.patch, HBASE-18052.master.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17819) Reduce the heap overhead for BucketCache

2017-07-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088839#comment-16088839
 ] 

Anoop Sam John commented on HBASE-17819:


We have a config that says whether the blocks belonging to an HFile should be 
evicted when the file is closed.
key: 'hbase.rs.evictblocksonclose'
It defaults to false.
That means the blocks won't be forcefully evicted when a file is closed; 
eventually the LRU nature of the cache will remove them.
This is what was happening before we had CompactedHFilesDischarger? 
[~ram_krish]?
Now CompactedHFilesDischarger does not seem to consider this config!
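
A minimal sketch of reading that flag from the Configuration; whether and where 
the discharger should consult it is the open question above:

{code:java|title=Hedged sketch}
import org.apache.hadoop.conf.Configuration;

final class EvictOnCloseSketch {
  private EvictOnCloseSketch() {}

  /** Whether cached blocks should be force-evicted when an HFile is closed. */
  static boolean shouldEvictOnClose(Configuration conf) {
    // Defaults to false: blocks are left to age out of the cache via LRU.
    return conf.getBoolean("hbase.rs.evictblocksonclose", false);
  }
}
{code}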

> Reduce the heap overhead for BucketCache
> 
>
> Key: HBASE-17819
> URL: https://issues.apache.org/jira/browse/HBASE-17819
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
>
> We keep Bucket entry map in BucketCache.  Below is the math for heapSize for 
> the key , value into this map.
> BlockCacheKey
> ---
> String hfileName  -  Ref  - 4
> long offset  - 8
> BlockType blockType  - Ref  - 4
> boolean isPrimaryReplicaBlock  - 1
> Total  =  12 (Object) + 17 = 29
> BucketEntry
> 
> int offsetBase  -  4
> int length  - 4
> byte offset1  -  1
> byte deserialiserIndex  -  1
> long accessCounter  -  8
> BlockPriority priority  - Ref  - 4
> volatile boolean markedForEvict  -  1
> AtomicInteger refCount  -  16 + 4
> long cachedTime  -  8
> Total = 12 (Object) + 51 = 63
> ConcurrentHashMap Map.Entry  -  40
> blocksByHFile ConcurrentSkipListSet Entry  -  40
> Total = 29 + 63 + 80 = 172
> For 10 million blocks we will end up having 1.6GB of heap size.  
> This jira aims to reduce this as much as possible



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17819) Reduce the heap overhead for BucketCache

2017-07-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088838#comment-16088838
 ] 

Anoop Sam John commented on HBASE-17819:


These are the things I am trying out:
1. We have two enum refs, in the key and in BucketEntry. Change those to byte 
fields and just store the ordinal; the enums have only a few items, so a byte 
is enough.
Result: saving 6 bytes per entry.
2. Change BucketEntry so that we have two BucketEntry classes per IOEngine type, 
i.e. file mode and RAM-backed. Only the RAM-backed engine needs the ref-count 
mechanism; in file mode we can remove that state and markedForEvict.
Result: saving 21 bytes per entry in file mode.
3. Change the refCount type from AtomicInteger to a volatile int. The 
AtomicInteger object and its ref in BucketEntry take 20 bytes, whereas an int 
works with 4 bytes. For the atomic increment/decrement we will mimic what 
AtomicInteger does (an Unsafe CAS); see the sketch below.
Result: saving 16 bytes per entry for the RAM-backed IOEngine.
4. Remove the CSLM that tracks blocks per HFile. To remove blocks when an HFile 
is closed, we will then have to iterate over all bucket entries, check their 
HFile, and remove them. This is what we do in the LRU cache. Considering that 
this operation does not happen on a hot path, is that ok? We do it when 
CompactedHFilesDischarger runs (at a 2-minute interval) and removes all 
compacted-away files.
Result: saving 40 bytes per entry.
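
A minimal sketch of item 3, using AtomicIntegerFieldUpdater to get CAS semantics 
on a volatile int without a per-entry AtomicInteger object (the actual change may 
mimic AtomicInteger with Unsafe directly, as noted above):

{code:java|title=Hedged sketch: volatile int refCount}
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

class BucketEntrySketch {
  // One shared updater for the class instead of one AtomicInteger object per entry.
  private static final AtomicIntegerFieldUpdater<BucketEntrySketch> REF_COUNT_UPDATER =
      AtomicIntegerFieldUpdater.newUpdater(BucketEntrySketch.class, "refCount");

  private volatile int refCount = 0;

  int incrementRefCount() {
    return REF_COUNT_UPDATER.incrementAndGet(this);
  }

  int decrementRefCount() {
    return REF_COUNT_UPDATER.decrementAndGet(this);
  }
}
{code}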

> Reduce the heap overhead for BucketCache
> 
>
> Key: HBASE-17819
> URL: https://issues.apache.org/jira/browse/HBASE-17819
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
>
> We keep Bucket entry map in BucketCache.  Below is the math for heapSize for 
> the key , value into this map.
> BlockCacheKey
> ---
> String hfileName  -  Ref  - 4
> long offset  - 8
> BlockType blockType  - Ref  - 4
> boolean isPrimaryReplicaBlock  - 1
> Total  =  12 (Object) + 17 = 29
> BucketEntry
> 
> int offsetBase  -  4
> int length  - 4
> byte offset1  -  1
> byte deserialiserIndex  -  1
> long accessCounter  -  8
> BlockPriority priority  - Ref  - 4
> volatile boolean markedForEvict  -  1
> AtomicInteger refCount  -  16 + 4
> long cachedTime  -  8
> Total = 12 (Object) + 51 = 63
> ConcurrentHashMap Map.Entry  -  40
> blocksByHFile ConcurrentSkipListSet Entry  -  40
> Total = 29 + 63 + 80 = 172
> For 10 million blocks we will end up having 1.6GB of heap size.  
> This jira aims to reduce this as much as possible



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18375) The pool chunks from ChunkCreator are deallocated while in pool because there is no reference to them

2017-07-16 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088831#comment-16088831
 ] 

Anastasia Braginsky commented on HBASE-18375:
-

bq. +1 to fix there as well.
How do you think it should be fixed there? Separate JIRA? Separate patch?

In any case, [~anoop.hbase], [~ram_krish], [~eshcar], please review the patch I 
uploaded 2 days ago (here and on RB), so we can promote the fix at least into 
master.
Thanks!!!

> The pool chunks from ChunkCreator are deallocated while in pool because there 
> is no reference to them
> -
>
> Key: HBASE-18375
> URL: https://issues.apache.org/jira/browse/HBASE-18375
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha-1
>Reporter: Anastasia Braginsky
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18375-V01.patch
>
>
> Because the MSLAB list of chunks was changed to a list of chunk IDs, the chunks 
> returned to the pool can be deallocated by the JVM, since there is no strong 
> reference to them. The solution is to protect the pool chunks from GC via the 
> strong map in ChunkCreator introduced by HBASE-18010. Will prepare the patch today.
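
A minimal sketch of the idea, with a hypothetical Chunk type and map name; the 
real fields live in ChunkCreator and may differ:

{code:java|title=Hedged sketch: strong references to pooled chunks}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class ChunkCreatorSketch {
  /** Hypothetical stand-in for the real MSLAB chunk class. */
  static final class Chunk {
    private final int id;
    Chunk(int id) { this.id = id; }
    int getId() { return id; }
  }

  // Strong references keyed by chunk id; anything registered here cannot be GCed.
  private final Map<Integer, Chunk> chunkIdMap = new ConcurrentHashMap<>();

  void putbackToPool(Chunk chunk) {
    // Re-register the chunk so the pool keeps it strongly reachable while idle.
    chunkIdMap.put(chunk.getId(), chunk);
  }

  Chunk getChunk(int id) {
    return chunkIdMap.get(id);
  }
}
{code}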



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)