[jira] [Commented] (HBASE-17388) Move ReplicationPeer and other replication related PB messages to the replication.proto

2017-01-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803905#comment-15803905
 ] 

Hudson commented on HBASE-17388:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2267 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2267/])
HBASE-17388 Move ReplicationPeer and other replication related PB (zghao: rev 
e02ae7724ddaa147a7cf41dc398e09e456e0dad6)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationStateZKBase.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java
* (edit) hbase-protocol-shaded/src/main/protobuf/ZooKeeper.proto
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/master/TableCFsUpdater.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationSerDeHelper.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/ReplicationProtos.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestPerTableCFReplication.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/ZooKeeperProtos.java
* (edit) hbase-protocol-shaded/src/main/protobuf/Replication.proto


> Move ReplicationPeer and other replication related PB messages to the 
> replication.proto
> ---
>
> Key: HBASE-17388
> URL: https://issues.apache.org/jira/browse/HBASE-17388
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17388.patch, HBASE-17388.patch, HBASE-17388.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17429) HBase bulkload cannot support HDFS viewFs

2017-01-05 Thread shenxianqiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803883#comment-15803883
 ] 

shenxianqiang commented on HBASE-17429:
---

HDFS copy (-cp) and move (-mv) are very different.
I mean that before the patch, bulkload always copied (hdfs -cp) the hfile into 
the hbase region directory.
This makes bulkload performance degrade dramatically.
OK?

> HBase bulkload cannot support HDFS viewFs
> -
>
> Key: HBASE-17429
> URL: https://issues.apache.org/jira/browse/HBASE-17429
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.6.1, 1.2.4
> Environment: CDH5.7.0 hbase0.98.6
>Reporter: shenxianqiang
> Attachments: HBASE-17429.patch
>
>
> Since the hadoop cluster supports federation, hbase bulkload performance 
> degrades dramatically, even when the hbase directory and the bulkload 
> directory are in the same nameservice.
> {quote}
> 2017-01-04 21:58:40,919 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem: use bulkload file 
> name is : hdfs://CloudTestNameNode2:8020
> 2017-01-04 21:58:40,924 ERROR org.apache.hadoop.ipc.RpcServer: Unexpected 
> throwable object 
> java.lang.IllegalArgumentException: Wrong FS: 
> viewfs://nsX/user/test/I/_tmp/9cde5dde60374b1483b7d09b65258304.top, expected: 
> hdfs://CloudTestNameNode2:8020
> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:657)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:194)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1215)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1211)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1211)
> at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:425)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1412)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:373)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:496)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:708)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3658)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3564)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3378)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29589)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17290) Potential loss of data for replication of bulk loaded hfiles

2017-01-05 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-17290:
--
Attachment: HBASE-17290.branch-1.patch

Attached branch-1 patch.

> Potential loss of data for replication of bulk loaded hfiles
> 
>
> Key: HBASE-17290
> URL: https://issues.apache.org/jira/browse/HBASE-17290
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17290.branch-1.patch, HBASE-17290.patch, 
> HBASE-17290.v1.patch
>
>
> Currently the support for replication of bulk loaded hfiles relies on the 
> bulk load marker written in the WAL.
> The move of the bulk loaded hfile(s) (into the region directory) may succeed 
> but the write of the bulk load marker may fail.
> This means that although the bulk loaded hfile is being served in the source 
> cluster, it would not be replicated.
> Normally the operator is supposed to retry the bulk load, but relying on 
> human retry is not a robust solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14614) Procedure v2: Core Assignment Manager

2017-01-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803759#comment-15803759
 ] 

stack commented on HBASE-14614:
---

New patch has more detailed changelog -- see patch -- after 1/3rd review. Fixed 
findbugs and mess above. TODO fix tests. Working on overview doc.

> Procedure v2: Core Assignment Manager
> -
>
> Key: HBASE-14614
> URL: https://issues.apache.org/jira/browse/HBASE-14614
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-14614.master.001.patch, 
> HBASE-14614.master.002.patch, HBASE-14614.master.003.patch, 
> HBASE-14614.master.004.patch, HBASE-14614.master.005.patch
>
>
> New AssignmentManager implemented using proc-v2.
>  - AssignProcedure handles the assign operation
>  - UnassignProcedure handles the unassign operation
>  - MoveRegionProcedure handles the move/balance operation
> Concurrent Assign operations are batched together and sent to the balancer.
> Concurrent Assign and Unassign operations that are ready to be sent to the RS 
> are batched together.
> This patch is an intermediate state where we add the new AM as 
> AssignmentManager2() to the master, so it can be reached by tests, but the 
> new AM will not be integrated with the rest of the system. Only the new AM 
> unit tests will exercise the new assignment manager. The integration with 
> the master code is part of HBASE-14616.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14614) Procedure v2: Core Assignment Manager

2017-01-05 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14614:
--
Attachment: HBASE-14614.master.005.patch

> Procedure v2: Core Assignment Manager
> -
>
> Key: HBASE-14614
> URL: https://issues.apache.org/jira/browse/HBASE-14614
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-14614.master.001.patch, 
> HBASE-14614.master.002.patch, HBASE-14614.master.003.patch, 
> HBASE-14614.master.004.patch, HBASE-14614.master.005.patch
>
>
> New AssignmentManager implemented using proc-v2.
>  - AssignProcedure handles the assign operation
>  - UnassignProcedure handles the unassign operation
>  - MoveRegionProcedure handles the move/balance operation
> Concurrent Assign operations are batched together and sent to the balancer.
> Concurrent Assign and Unassign operations that are ready to be sent to the RS 
> are batched together.
> This patch is an intermediate state where we add the new AM as 
> AssignmentManager2() to the master, so it can be reached by tests, but the 
> new AM will not be integrated with the rest of the system. Only the new AM 
> unit tests will exercise the new assignment manager. The integration with 
> the master code is part of HBASE-14616.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17431) Incorrect precheck condition in RoundRobinPool#get()

2017-01-05 Thread Ted Yu (JIRA)
Ted Yu created HBASE-17431:
--

 Summary: Incorrect precheck condition in RoundRobinPool#get()
 Key: HBASE-17431
 URL: https://issues.apache.org/jira/browse/HBASE-17431
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


Here is related code:
{code}
public R get() {
  if (super.size() < maxSize) {
return null;
  }
  nextResource %= super.size();
{code}
Since super.size() is used in a modulo operation after the check, it seems the 
check should compare against 0 instead of maxSize.

This looks like a copy-paste error from the put() method.
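For illustration only (not from any attached patch), a minimal sketch of the 
check suggested above, guarding against an empty pool so the modulo cannot 
divide by zero; the trailing comment stands in for the unchanged remainder of 
the method:
{code}
public R get() {
  // Guard against an empty pool: a size of 0 would make the modulo below
  // throw ArithmeticException. (The current code checks against maxSize,
  // which reads like a copy-paste from put().)
  if (super.size() == 0) {
    return null;
  }
  nextResource %= super.size();
  // ... rest of the method unchanged
}
{code}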



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17429) HBase bulkload cannot support HDFS viewFs

2017-01-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803654#comment-15803654
 ] 

Ted Yu commented on HBASE-17429:


bq. bulkload copy hfiles from src to dest(hbase)

The above sentence is repeated.
Can you elaborate a bit more ?

w.r.t. performance, have you collected metrics to back up your claim ?

> HBase bulkload cannot support HDFS viewFs
> -
>
> Key: HBASE-17429
> URL: https://issues.apache.org/jira/browse/HBASE-17429
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.6.1, 1.2.4
> Environment: CDH5.7.0 hbase0.98.6
>Reporter: shenxianqiang
> Attachments: HBASE-17429.patch
>
>
> Since the hadoop cluster supports federation, hbase bulkload performance 
> degrades dramatically, even when the hbase directory and the bulkload 
> directory are in the same nameservice.
> {quote}
> 2017-01-04 21:58:40,919 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem: use bulkload file 
> name is : hdfs://CloudTestNameNode2:8020
> 2017-01-04 21:58:40,924 ERROR org.apache.hadoop.ipc.RpcServer: Unexpected 
> throwable object 
> java.lang.IllegalArgumentException: Wrong FS: 
> viewfs://nsX/user/test/I/_tmp/9cde5dde60374b1483b7d09b65258304.top, expected: 
> hdfs://CloudTestNameNode2:8020
> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:657)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:194)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1215)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1211)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1211)
> at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:425)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1412)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:373)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:496)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:708)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3658)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3564)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3378)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29589)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17429) HBase bulkload cannot support HDFS viewFs

2017-01-05 Thread shenxianqiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803643#comment-15803643
 ] 

shenxianqiang commented on HBASE-17429:
---

Yes, the patch does work.
Before this patch, bulkload always copied the hfiles into the hbase region 
directory, even within the same nameservice.
After the patch:
Within the same nameservice, bulkload moves the hfiles from the source to the 
destination (hbase), which gives much better bulkload performance.
Across different nameservices, bulkload copies the hfiles from the source to 
the destination (hbase).
{quote}
76  String nameService = 
serviceName.substring(serviceName.indexOf(":") + 1);
{quote}
The return value of indexOf() should be checked, but nameService cannot be 
null.
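As a hedged illustration of the indexOf() concern (the variable names are 
borrowed from the quoted patch line; this helper is hypothetical, not the 
actual patch code):
{code}
// Hypothetical helper: strip the "scheme:" prefix only when the separator is
// actually present, instead of relying on indexOf() returning -1 and
// substring(0) silently keeping the whole string.
static String nameServiceOf(String serviceName) {
  int idx = serviceName.indexOf(':');
  return idx >= 0 ? serviceName.substring(idx + 1) : serviceName;
}
{code}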

> HBase bulkload cannot support HDFS viewFs
> -
>
> Key: HBASE-17429
> URL: https://issues.apache.org/jira/browse/HBASE-17429
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.6.1, 1.2.4
> Environment: CDH5.7.0 hbase0.98.6
>Reporter: shenxianqiang
> Attachments: HBASE-17429.patch
>
>
> Since the hadoop cluster supports federation, hbase bulkload performance 
> degrades dramatically, even when the hbase directory and the bulkload 
> directory are in the same nameservice.
> {quote}
> 2017-01-04 21:58:40,919 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem: use bulkload file 
> name is : hdfs://CloudTestNameNode2:8020
> 2017-01-04 21:58:40,924 ERROR org.apache.hadoop.ipc.RpcServer: Unexpected 
> throwable object 
> java.lang.IllegalArgumentException: Wrong FS: 
> viewfs://nsX/user/test/I/_tmp/9cde5dde60374b1483b7d09b65258304.top, expected: 
> hdfs://CloudTestNameNode2:8020
> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:657)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:194)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1215)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1211)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1211)
> at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:425)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1412)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:373)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:496)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:708)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3658)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3564)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3378)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29589)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15995) Separate replication WAL reading from shipping

2017-01-05 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803641#comment-15803641
 ] 

Phil Yang commented on HBASE-15995:
---

Hi

I have a plan to reduce the number of WAL reading threads to cut disk IO when 
we have many peers (I will create a new issue soon). Your work separating WAL 
reading from entry pushing helps a lot. :)

Now that we have two threads, maybe ReplicationSource needs a better name? We 
have one thread to read logs and one thread to push logs. One could be 
ReplicationWALReader and the other ReplicationEntriesPusher? What do you 
think?

TestSerialReplication failed, please fix it. I can offer some help if you need 
it.

> Separate replication WAL reading from shipping
> --
>
> Key: HBASE-15995
> URL: https://issues.apache.org/jira/browse/HBASE-15995
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Fix For: 2.0.0
>
> Attachments: HBASE-15995.master.v1.patch, 
> replicationV1_100ms_delay.png, replicationV2_100ms_delay.png
>
>
> Currently ReplicationSource reads edits from the WAL and ships them in the 
> same thread.
> By breaking out the reading from the shipping, we can introduce greater 
> parallelism and lay the foundation for further refactoring to a pipelined, 
> streaming model.
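To make the split concrete, here is a minimal producer-consumer sketch (these 
class and method names are illustrative, not the patch's actual classes) where 
one thread reads WAL entry batches and a second thread ships them, decoupled by 
a bounded queue:
{code}
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Illustrative only: Entry stands in for a WAL edit.
class Entry {}

public class ReaderShipperSketch {
  private final BlockingQueue<List<Entry>> batches = new ArrayBlockingQueue<>(16);

  // Reader thread: tails the WAL and hands off batches of entries.
  Runnable reader(Supplier<List<Entry>> readBatch) {
    return () -> {
      try {
        while (!Thread.currentThread().isInterrupted()) {
          batches.put(readBatch.get()); // blocks when the shipper falls behind
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    };
  }

  // Shipper thread: replicates each batch to the peer cluster.
  Runnable shipper(Consumer<List<Entry>> ship) {
    return () -> {
      try {
        while (!Thread.currentThread().isInterrupted()) {
          ship.accept(batches.take());
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    };
  }
}
{code}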



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17290) Potential loss of data for replication of bulk loaded hfiles

2017-01-05 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803638#comment-15803638
 ] 

Ashish Singhi commented on HBASE-17290:
---

Planning to commit this after an hour or so, if there are no further review 
comments.

> Potential loss of data for replication of bulk loaded hfiles
> 
>
> Key: HBASE-17290
> URL: https://issues.apache.org/jira/browse/HBASE-17290
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17290.patch, HBASE-17290.v1.patch
>
>
> Currently the support for replication of bulk loaded hfiles relies on the 
> bulk load marker written in the WAL.
> The move of the bulk loaded hfile(s) (into the region directory) may succeed 
> but the write of the bulk load marker may fail.
> This means that although the bulk loaded hfile is being served in the source 
> cluster, it would not be replicated.
> Normally the operator is supposed to retry the bulk load, but relying on 
> human retry is not a robust solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17424) Protect REST client against malicious XML responses.

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803629#comment-15803629
 ] 

Hadoop QA commented on HBASE-17424:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
35s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
33m 41s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 2s 
{color} | {color:green} hbase-rest in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
8s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 32s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845939/HBASE-17424.002.patch 
|
| JIRA Issue | HBASE-17424 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 3e72a1baae3d 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / e02ae77 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5156/testReport/ |
| modules | C: hbase-rest U: hbase-rest |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5156/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Protect REST client against malicious XML responses.
> 
>
> Key: HBASE-17424
> URL: https://issues.apache.org/jira/browse/HBASE-17424
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: HBASE-17424.001.patch, HBASE-17424.002.patch

[jira] [Commented] (HBASE-17424) Protect REST client against malicious XML responses.

2017-01-05 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803575#comment-15803575
 ] 

Josh Elser commented on HBASE-17424:


Cool beans. Thanks for the quick re-review, Ted!

Will wait to hear what precommit says, and then commit it around.

> Protect REST client against malicious XML responses.
> 
>
> Key: HBASE-17424
> URL: https://issues.apache.org/jira/browse/HBASE-17424
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: HBASE-17424.001.patch, HBASE-17424.002.patch
>
>
> If, by some means, an unsuspecting REST client were to get a malformed 
> response from the REST server, the XML parsing could cause the client to 
> perform some unintended action.
> We should disable these extra options on the XML parser to prevent that 
> possibility.
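For context, a minimal sketch of the usual StAX hardening (disabling DTDs and 
external entity resolution); whether the attached patch actually configures an 
XMLInputFactory this way is an assumption here:
{code}
import javax.xml.stream.XMLInputFactory;

public class SafeXmlFactory {
  // Returns a StAX factory with DTDs and external entity resolution disabled,
  // which blocks XXE-style payloads hidden in a malicious response.
  public static XMLInputFactory newHardenedFactory() {
    XMLInputFactory factory = XMLInputFactory.newInstance();
    factory.setProperty(XMLInputFactory.SUPPORT_DTD, false);
    factory.setProperty(XMLInputFactory.IS_SUPPORTING_EXTERNAL_ENTITIES, false);
    return factory;
  }
}
{code}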



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17424) Protect REST client against malicious XML responses.

2017-01-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803571#comment-15803571
 ] 

Ted Yu commented on HBASE-17424:


Patch v2 looks good.

> Protect REST client against malicious XML responses.
> 
>
> Key: HBASE-17424
> URL: https://issues.apache.org/jira/browse/HBASE-17424
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: HBASE-17424.001.patch, HBASE-17424.002.patch
>
>
> If, by some means, an unsuspecting REST client were to get a malformed 
> response from the REST server, the XML parsing could cause the client to 
> perform some unintended action.
> We should disable these extra options on the XML parser to prevent that 
> possibility.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17424) Protect REST client against malicious XML responses.

2017-01-05 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-17424:
---
Attachment: HBASE-17424.002.patch

.002 Thanks for the review, Ted. This turned out to be a really good exception 
as I had incorrect "bad" XML. I took a similar, but slightly different, 
approach to verify that the exception is the one we're expecting.

> Protect REST client against malicious XML responses.
> 
>
> Key: HBASE-17424
> URL: https://issues.apache.org/jira/browse/HBASE-17424
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: HBASE-17424.001.patch, HBASE-17424.002.patch
>
>
> If, by some means, an unsuspecting REST client were to get a malformed 
> response from the REST server, the XML parsing could cause the client to 
> perform some unintended action.
> We should disable these extra options on the XML parser to prevent that 
> possibility.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14614) Procedure v2: Core Assignment Manager

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803463#comment-15803463
 ] 

Hadoop QA commented on HBASE-14614:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 60 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
51s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 17m 
31s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
49s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 53s 
{color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 17m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 59 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 42s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 1m 
20s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 50s 
{color} | {color:red} hbase-server generated 2 new + 0 unchanged - 0 fixed = 2 
total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 26s 
{color} | {color:red} hbase-server generated 3 new + 1 unchanged - 0 fixed = 4 
total (was 1) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 21s 
{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 19s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 
3s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 181m 12s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  Dead store to timeout in 
org.apache.hadoop.hbase.master.MasterMetaBootstrap.assignMeta(Set, int)  At 

[jira] [Updated] (HBASE-17315) [C++] HBase Client and Table Implementation

2017-01-05 Thread Sudeep Sunthankar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudeep Sunthankar updated HBASE-17315:
--
Status: Open  (was: Patch Available)

> [C++] HBase Client and Table Implementation
> ---
>
> Key: HBASE-17315
> URL: https://issues.apache.org/jira/browse/HBASE-17315
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
>Assignee: Sudeep Sunthankar
> Attachments: HBASE-17315.HBASE-14850.v1.patch, 
> HBASE-17315.HBASE-14850.v2.patch, HBASE-17315.HBASE-14850.v3.patch, 
> HBASE-17315.HBASE-14850.v4.patch
>
>
> Consists of the Client and Table implementations, which will be used to call 
> the corresponding client methods, i.e. Get, Gets, Scan, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17315) [C++] HBase Client and Table Implementation

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803355#comment-15803355
 ] 

Hadoop QA commented on HBASE-17315:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 8s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 2s 
{color} | {color:green} HBASE-14850 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 1s 
{color} | {color:green} HBASE-14850 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
5s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 5s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
3s {color} | {color:green} the patch passed {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 45s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:date2017-01-06 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845925/HBASE-17315.HBASE-14850.v4.patch
 |
| JIRA Issue | HBASE-17315 |
| Optional Tests |  shellcheck  shelldocs  cc  compile  |
| uname | Linux 9e8c883f56d4 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | nobuild |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | HBASE-14850 / 50bcb9f |
| shellcheck | v0.4.5 |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5155/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [C++] HBase Client and Table Implementation
> ---
>
> Key: HBASE-17315
> URL: https://issues.apache.org/jira/browse/HBASE-17315
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
>Assignee: Sudeep Sunthankar
> Attachments: HBASE-17315.HBASE-14850.v1.patch, 
> HBASE-17315.HBASE-14850.v2.patch, HBASE-17315.HBASE-14850.v3.patch, 
> HBASE-17315.HBASE-14850.v4.patch
>
>
> Consists of the Client and Table implementations, which will be used to call 
> the corresponding client methods, i.e. Get, Gets, Scan, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17315) [C++] HBase Client and Table Implementation

2017-01-05 Thread Sudeep Sunthankar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudeep Sunthankar updated HBASE-17315:
--
Attachment: HBASE-17315.HBASE-14850.v4.patch

Hi, this patch consists of the following:

# Removed ResultScanner and Scan dependencies from Table.
# Changed method names of the Client class.
# Table will be passed a LocationCache, Configuration and RpcClient during 
construction.
# Unit tests hooked up to the client.
# rpc-client.(cc/h) was not included in connection/BUCK, due to which 
compilation with BUCK was failing on a clean checkout. Fixed that as well.
# Client can be constructed with or without a Configuration instance.

Thanks.

> [C++] HBase Client and Table Implementation
> ---
>
> Key: HBASE-17315
> URL: https://issues.apache.org/jira/browse/HBASE-17315
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
>Assignee: Sudeep Sunthankar
> Attachments: HBASE-17315.HBASE-14850.v1.patch, 
> HBASE-17315.HBASE-14850.v2.patch, HBASE-17315.HBASE-14850.v3.patch, 
> HBASE-17315.HBASE-14850.v4.patch
>
>
> Consists of the Client and Table implementations, which will be used to call 
> the corresponding client methods, i.e. Get, Gets, Scan, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803286#comment-15803286
 ] 

ChiaPing Tsai commented on HBASE-17408:
---

bq. Should we do a follow up for the server side as well
Do you mean the servers should process the partial rows, and then return an 
exception so the client retries the remaining rows?

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch, 
> HBASE-17408.v2.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from the client.
> We should consider adding a per-request limit on the number of mutations in 
> a batch.
> In recent troubleshooting sessions, a customer had to do this in their 
> application code to avoid an OOME on the server side.
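As a hedged sketch of where such limits would live on the client side: 
hbase.client.max.perrequest.heapsize is the existing byte-size limit from 
HBASE-16224, while the row-count property name below is hypothetical, standing 
in for whatever name the patch settles on:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class PerRequestLimits {
  public static Configuration limitedClientConf() {
    Configuration conf = HBaseConfiguration.create();
    // Existing limit (HBASE-16224): cap the heap size of one batched request.
    conf.setLong("hbase.client.max.perrequest.heapsize", 4L * 1024 * 1024);
    // Hypothetical companion limit discussed here: cap mutations per request
    // (property name is illustrative, not necessarily the one in the patch).
    conf.setInt("hbase.client.max.perrequest.rows", 2048);
    return conf;
  }
}
{code}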



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17388) Move ReplicationPeer and other replication related PB messages to the replication.proto

2017-01-05 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-17388:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Move ReplicationPeer and other replication related PB messages to the 
> replication.proto
> ---
>
> Key: HBASE-17388
> URL: https://issues.apache.org/jira/browse/HBASE-17388
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17388.patch, HBASE-17388.patch, HBASE-17388.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17388) Move ReplicationPeer and other replication related PB messages to the replication.proto

2017-01-05 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803258#comment-15803258
 ] 

Guanghao Zhang commented on HBASE-17388:


Pushed to master. Thanks [~enis] for review.

> Move ReplicationPeer and other replication related PB messages to the 
> replication.proto
> ---
>
> Key: HBASE-17388
> URL: https://issues.apache.org/jira/browse/HBASE-17388
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17388.patch, HBASE-17388.patch, HBASE-17388.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14614) Procedure v2: Core Assignment Manager

2017-01-05 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14614:
--
Attachment: HBASE-14614.master.004.patch

> Procedure v2: Core Assignment Manager
> -
>
> Key: HBASE-14614
> URL: https://issues.apache.org/jira/browse/HBASE-14614
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-14614.master.001.patch, 
> HBASE-14614.master.002.patch, HBASE-14614.master.003.patch, 
> HBASE-14614.master.004.patch
>
>
> New AssignmentManager implemented using proc-v2.
>  - AssignProcedure handles the assign operation
>  - UnassignProcedure handles the unassign operation
>  - MoveRegionProcedure handles the move/balance operation
> Concurrent Assign operations are batched together and sent to the balancer.
> Concurrent Assign and Unassign operations that are ready to be sent to the RS 
> are batched together.
> This patch is an intermediate state where we add the new AM as 
> AssignmentManager2() to the master, so it can be reached by tests, but the 
> new AM will not be integrated with the rest of the system. Only the new AM 
> unit tests will exercise the new assignment manager. The integration with 
> the master code is part of HBASE-14616.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15414) Bound the size of multi request returns and/or allow return of partial result up to client

2017-01-05 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803028#comment-15803028
 ] 

Enis Soztutar commented on HBASE-15414:
---

Thinking about this in the context of HBASE-17408, we still need to protect 
the server from a single get result returning a huge row and causing an OOM. 
This happened in a couple of instances where the application, not knowing 
whether the row had grown too large, did a standard get. Furthermore, since 
there is no protection on how large a row can get, and no way to know without 
actually doing the get, applications cannot easily detect or recover from the 
situation.

Even with HBASE-14946, the heap size checks are done between Get requests, not 
internally as we do with Scans. Returning partial results from a multi-get 
seems like a big complication, but at least we can look into protecting the 
server by hooking into the scan limits and throwing an exception back to the 
application.

> Bound the size of multi request returns and/or allow return of partial result 
> up to client
> --
>
> Key: HBASE-15414
> URL: https://issues.apache.org/jira/browse/HBASE-15414
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, rpc
>Reporter: stack
>
> Some knowledgeable hbase users note that while Scanning now allows you to 
> return results in 'chunks' for assembly client-side into a whole result (or 
> the application can see the partials as they come out of the cluster), this 
> ability is absent if you do a multi-get; you might get back more than you 
> bargained for. Just as chunking makes sense when Scanning because it makes 
> hbase 'regular', we need the same for multi-get.
> Parking an issue here for discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803008#comment-15803008
 ] 

Enis Soztutar commented on HBASE-17408:
---

The patch looks good for the client side. 

Should we do a follow up for the server side as well, similar to HBASE-14946? 
Because not all clients will go through AP (async client, and C++, etc). 





> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch, 
> HBASE-17408.v2.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from the client.
> We should consider adding a per-request limit on the number of mutations in 
> a batch.
> In recent troubleshooting sessions, a customer had to do this in their 
> application code to avoid an OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14614) Procedure v2: Core Assignment Manager

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802978#comment-15802978
 ] 

Hadoop QA commented on HBASE-14614:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 55 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 53s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 18m 
35s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
54s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 8s 
{color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 40s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 14s 
{color} | {color:red} hbase-rsgroup in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 42s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 15s 
{color} | {color:red} hbase-rsgroup in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 42s {color} | 
{color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 15s {color} | 
{color:red} hbase-rsgroup in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 42s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 15s {color} 
| {color:red} hbase-rsgroup in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 19m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 59 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 31s 
{color} | {color:red} The patch causes 74 errors with Hadoop v2.6.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 2s 
{color} | {color:red} The patch causes 74 errors with Hadoop v2.6.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 22s 
{color} | {color:red} The patch causes 74 errors with Hadoop v2.6.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 39s 
{color} | {color:red} The patch causes 74 errors with Hadoop v2.6.4. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 57s 
{color} | {color:red} The patch causes 74 errors with Hadoop v2.6.5. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 8m 14s 
{color} | {color:red} The patch causes 74 errors with Hadoop v2.7.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 9m 29s 
{color} | {color:red} The patch causes 74 errors with Hadoop v2.7.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 46s 
{color} | {color:red} The patch causes 74 errors with Hadoop v2.7.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 12m 3s 

[jira] [Commented] (HBASE-15995) Separate replication WAL reading from shipping

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802962#comment-15802962
 ] 

Hadoop QA commented on HBASE-15995:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 33s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 27s 
{color} | {color:red} hbase-server generated 2 new + 1 unchanged - 0 fixed = 3 
total (was 1) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 35s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 120m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.replication.TestSerialReplication |
| Timed out junit tests | 
org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845848/HBASE-15995.master.v1.patch
 |
| JIRA Issue | HBASE-15995 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux a738ba058666 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 0f6c79e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5151/artifact/patchprocess/diff-javadoc-javadoc-hbase-server.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5151/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/5151/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5151/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 

[jira] [Commented] (HBASE-16710) Add ZStandard Codec to Compression.java

2017-01-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802943#comment-15802943
 ] 

Hudson commented on HBASE-16710:


SUCCESS: Integrated in Jenkins build HBase-1.1-JDK7 #1833 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1833/])
HBASE-16710 Add ZStandard Codec to Compression.java (apurtell: rev 
3cbc5cc9ebbd2d0448e013f8ad7fbde28220c602)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCompressionTest.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java


> Add ZStandard Codec to Compression.java
> ---
>
> Key: HBASE-16710
> URL: https://issues.apache.org/jira/browse/HBASE-16710
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: churro morales
>Assignee: churro morales
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: HBASE-16710-0.98.patch, HBASE-16710-1.2.patch, 
> HBASE-16710.patch
>
>
> HADOOP-13578 is adding the ZStandardCodec to hadoop.  This is a placeholder 
> to ensure it gets added to hbase once this gets upstream.
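
As a rough illustration of what adding a codec entry to a compression registry can look like, here is a minimal sketch. It assumes only Hadoop's CompressionCodec interface and the ZStandardCodec class name introduced by HADOOP-13578; the enum and method names below are hypothetical and are not the actual HBase Compression.java code.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.util.ReflectionUtils;

// Illustrative sketch only: map an algorithm name to a Hadoop codec class and
// instantiate it reflectively, so the codec jar is only needed at runtime.
public enum SketchAlgorithm {
  GZ("gz", "org.apache.hadoop.io.compress.GzipCodec"),
  ZSTD("zstd", "org.apache.hadoop.io.compress.ZStandardCodec");

  private final String compressionName;
  private final String codecClassName;

  SketchAlgorithm(String compressionName, String codecClassName) {
    this.compressionName = compressionName;
    this.codecClassName = codecClassName;
  }

  public String getName() {
    return compressionName;
  }

  public CompressionCodec getCodec(Configuration conf) {
    try {
      // Resolve lazily so a cluster without the zstd libraries can still start.
      Class<?> codecClass = Class.forName(codecClassName);
      return (CompressionCodec) ReflectionUtils.newInstance(codecClass, conf);
    } catch (ClassNotFoundException e) {
      throw new RuntimeException("Codec class not on classpath: " + codecClassName, e);
    }
  }
}
{code}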



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16710) Add ZStandard Codec to Compression.java

2017-01-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802911#comment-15802911
 ] 

Hudson commented on HBASE-16710:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #80 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/80/])
HBASE-16710 Add ZStandard Codec to Compression.java (apurtell: rev 
8f0d0e78cc79d0b25668d39dc3b94213c95c9a42)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCompressionTest.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java


> Add ZStandard Codec to Compression.java
> ---
>
> Key: HBASE-16710
> URL: https://issues.apache.org/jira/browse/HBASE-16710
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: churro morales
>Assignee: churro morales
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: HBASE-16710-0.98.patch, HBASE-16710-1.2.patch, 
> HBASE-16710.patch
>
>
> HADOOP-13578 is adding the ZStandardCodec to hadoop.  This is a placeholder 
> to ensure it gets added to hbase once this gets upstream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16710) Add ZStandard Codec to Compression.java

2017-01-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802893#comment-15802893
 ] 

Hudson commented on HBASE-16710:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2265 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2265/])
HBASE-16710 Add ZStandard Codec to Compression.java (apurtell: rev 
0f6c79eb123e43133df4f4ba2a123029d62580dc)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCompressionTest.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java


> Add ZStandard Codec to Compression.java
> ---
>
> Key: HBASE-16710
> URL: https://issues.apache.org/jira/browse/HBASE-16710
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: churro morales
>Assignee: churro morales
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: HBASE-16710-0.98.patch, HBASE-16710-1.2.patch, 
> HBASE-16710.patch
>
>
> HADOOP-13578 is adding the ZStandardCodec to hadoop.  This is a placeholder 
> to ensure it gets added to hbase once this gets upstream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15995) Separate replication WAL reading from shipping

2017-01-05 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802881#comment-15802881
 ] 

Vincent Poon commented on HBASE-15995:
--

Good question. I used nload with the default scale, where 100% is 10240 kBit/s, 
which I guess is too low (this was from one AWS AZ to another).  But if you scroll 
right in the image, the bottom-right stats show average network usage, and it's 
higher in V2.

> Separate replication WAL reading from shipping
> --
>
> Key: HBASE-15995
> URL: https://issues.apache.org/jira/browse/HBASE-15995
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Fix For: 2.0.0
>
> Attachments: HBASE-15995.master.v1.patch, 
> replicationV1_100ms_delay.png, replicationV2_100ms_delay.png
>
>
> Currently ReplicationSource reads edits from the WAL and ships them in the 
> same thread.
> By breaking out the reading from the shipping, we can introduce greater 
> parallelism and lay the foundation for further refactoring to a pipelined, 
> streaming model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16710) Add ZStandard Codec to Compression.java

2017-01-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802871#comment-15802871
 ] 

Hudson commented on HBASE-16710:


SUCCESS: Integrated in Jenkins build HBase-1.1-JDK8 #1917 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1917/])
HBASE-16710 Add ZStandard Codec to Compression.java (apurtell: rev 
3cbc5cc9ebbd2d0448e013f8ad7fbde28220c602)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCompressionTest.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java


> Add ZStandard Codec to Compression.java
> ---
>
> Key: HBASE-16710
> URL: https://issues.apache.org/jira/browse/HBASE-16710
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: churro morales
>Assignee: churro morales
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: HBASE-16710-0.98.patch, HBASE-16710-1.2.patch, 
> HBASE-16710.patch
>
>
> HADOOP-13578 is adding the ZStandardCodec to hadoop.  This is a placeholder 
> to ensure it gets added to hbase once this gets upstream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15995) Separate replication WAL reading from shipping

2017-01-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802855#comment-15802855
 ] 

Ted Yu commented on HBASE-15995:


Is the height of the high bar in V2 comparable to the height of the high bar in V1?

Thanks 

> Separate replication WAL reading from shipping
> --
>
> Key: HBASE-15995
> URL: https://issues.apache.org/jira/browse/HBASE-15995
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Fix For: 2.0.0
>
> Attachments: HBASE-15995.master.v1.patch, 
> replicationV1_100ms_delay.png, replicationV2_100ms_delay.png
>
>
> Currently ReplicationSource reads edits from the WAL and ships them in the 
> same thread.
> By breaking out the reading from the shipping, we can introduce greater 
> parallelism and lay the foundation for further refactoring to a pipelined, 
> streaming model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17424) Protect REST client against malicious XML responses.

2017-01-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802827#comment-15802827
 ] 

Ted Yu commented on HBASE-17424:


Looks good overall.
{code}
420 } catch (XMLStreamException e) {
421   throw new IOException("Failed to read XML", e);
{code}
Consider defining a String constant in place of the hardcoded message on line 
421, so that the test can extract the exception message and verify that the newly 
added check works.
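
A minimal sketch of that suggestion, using hypothetical class and constant names rather than the actual HBASE-17424 patch:

{code:java}
import java.io.IOException;
import javax.xml.stream.XMLStreamException;

public class RemoteXmlResponseParser {
  // Hypothetical constant; a test can reference it instead of a hardcoded literal.
  public static final String FAILED_TO_READ_XML = "Failed to read XML";

  public void parse(byte[] responseBody) throws IOException {
    try {
      readXml(responseBody);
    } catch (XMLStreamException e) {
      // Wrap with the shared constant so a test can assert on getMessage().
      throw new IOException(FAILED_TO_READ_XML, e);
    }
  }

  private void readXml(byte[] responseBody) throws XMLStreamException {
    // XML parsing elided in this sketch.
  }
}
{code}

A test could then trigger the failure, catch the IOException, and compare getMessage() against RemoteXmlResponseParser.FAILED_TO_READ_XML.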

> Protect REST client against malicious XML responses.
> 
>
> Key: HBASE-17424
> URL: https://issues.apache.org/jira/browse/HBASE-17424
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: HBASE-17424.001.patch
>
>
> If, by some means, an unsuspecting REST client were to get a malformed 
> response from the REST server, the XML parsing could cause the client to 
> perform some unintended action.
> We should disable these extra options on the XML parser to prevent that 
> possibility.
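
For illustration of the kind of hardening described above, a minimal sketch that turns off DTD and external-entity support on a StAX parser before reading a response body; this is not the actual patch, and only the standard javax.xml.stream properties are assumed.

{code:java}
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

public class HardenedXmlRead {
  public static void main(String[] args) throws XMLStreamException {
    XMLInputFactory factory = XMLInputFactory.newInstance();
    // Refuse DTDs and external entities so a malicious response cannot trigger
    // entity expansion or external fetches during parsing.
    factory.setProperty(XMLInputFactory.SUPPORT_DTD, Boolean.FALSE);
    factory.setProperty(XMLInputFactory.IS_SUPPORTING_EXTERNAL_ENTITIES, Boolean.FALSE);

    String response = "<CellSet/>";  // stand-in for a REST response body
    XMLStreamReader reader = factory.createXMLStreamReader(new StringReader(response));
    while (reader.hasNext()) {
      reader.next();  // parse events; DTDs and entities are not honored
    }
    reader.close();
  }
}
{code}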



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-15995) Separate replication WAL reading from shipping

2017-01-05 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802818#comment-15802818
 ] 

Vincent Poon edited comment on HBASE-15995 at 1/5/17 11:11 PM:
---

Example of network usage.  X-axis is time and Y-axis is network usage.

The patch keeps the network utilization higher while replicating.  V1 has wider 
gaps while reading/filtering the next batch.

V1:
!replicationV1_100ms_delay.png!

V2:
!replicationV2_100ms_delay.png!


was (Author: vincentpoon):
Example of network usage.  X-axis is time and Y-axis is network usage.

The patch keeps the network utilization higher while replicating.  V1 has wider 
gaps while reading/filtering the next batch.

> Separate replication WAL reading from shipping
> --
>
> Key: HBASE-15995
> URL: https://issues.apache.org/jira/browse/HBASE-15995
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Fix For: 2.0.0
>
> Attachments: HBASE-15995.master.v1.patch, 
> replicationV1_100ms_delay.png, replicationV2_100ms_delay.png
>
>
> Currently ReplicationSource reads edits from the WAL and ships them in the 
> same thread.
> By breaking out the reading from the shipping, we can introduce greater 
> parallelism and lay the foundation for further refactoring to a pipelined, 
> streaming model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15995) Separate replication WAL reading from shipping

2017-01-05 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated HBASE-15995:
-
Attachment: replicationV2_100ms_delay.png
replicationV1_100ms_delay.png

Example of network usage.  X-axis is time and Y-axis is network usage.

The patch keeps the network utilization higher while replicating.  V1 has wider 
gaps while reading/filtering the next batch.

> Separate replication WAL reading from shipping
> --
>
> Key: HBASE-15995
> URL: https://issues.apache.org/jira/browse/HBASE-15995
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Fix For: 2.0.0
>
> Attachments: HBASE-15995.master.v1.patch, 
> replicationV1_100ms_delay.png, replicationV2_100ms_delay.png
>
>
> Currently ReplicationSource reads edits from the WAL and ships them in the 
> same thread.
> By breaking out the reading from the shipping, we can introduce greater 
> parallelism and lay the foundation for further refactoring to a pipelined, 
> streaming model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15995) Separate replication WAL reading from shipping

2017-01-05 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802802#comment-15802802
 ] 

Vincent Poon commented on HBASE-15995:
--

https://reviews.apache.org/r/55235/

> Separate replication WAL reading from shipping
> --
>
> Key: HBASE-15995
> URL: https://issues.apache.org/jira/browse/HBASE-15995
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Fix For: 2.0.0
>
> Attachments: HBASE-15995.master.v1.patch
>
>
> Currently ReplicationSource reads edits from the WAL and ships them in the 
> same thread.
> By breaking out the reading from the shipping, we can introduce greater 
> parallelism and lay the foundation for further refactoring to a pipelined, 
> streaming model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16710) Add ZStandard Codec to Compression.java

2017-01-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802777#comment-15802777
 ] 

Hudson commented on HBASE-16710:


FAILURE: Integrated in Jenkins build HBase-1.3-IT #816 (See 
[https://builds.apache.org/job/HBase-1.3-IT/816/])
HBASE-16710 Add ZStandard Codec to Compression.java (apurtell: rev 
2e344a7c91df14f14aecc74c5c430f73c92ed7c6)
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCompressionTest.java


> Add ZStandard Codec to Compression.java
> ---
>
> Key: HBASE-16710
> URL: https://issues.apache.org/jira/browse/HBASE-16710
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: churro morales
>Assignee: churro morales
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: HBASE-16710-0.98.patch, HBASE-16710-1.2.patch, 
> HBASE-16710.patch
>
>
> HADOOP-13578 is adding the ZStandardCodec to hadoop.  This is a placeholder 
> to ensure it gets added to hbase once this gets upstream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14614) Procedure v2: Core Assignment Manager

2017-01-05 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14614:
--
Attachment: HBASE-14614.master.003.patch

> Procedure v2: Core Assignment Manager
> -
>
> Key: HBASE-14614
> URL: https://issues.apache.org/jira/browse/HBASE-14614
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-14614.master.001.patch, 
> HBASE-14614.master.002.patch, HBASE-14614.master.003.patch
>
>
> New AssignmentManager implemented using proc-v2.
>  - AssignProcedure handles the assign operation
>  - UnassignProcedure handles the unassign operation
>  - MoveRegionProcedure handles the move/balance operation
> Concurrent Assign operations are batched together and sent to the balancer.
> Concurrent Assign and Unassign operations ready to be sent to the RS are 
> batched together.
> This patch is an intermediate state where we add the new AM as 
> AssignmentManager2() to the master, so it can be reached by tests, but the new 
> AM will not be integrated with the rest of the system. Only the new AM unit 
> tests will exercise the new assignment manager. The integration with the master 
> code is part of HBASE-14616.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16710) Add ZStandard Codec to Compression.java

2017-01-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802723#comment-15802723
 ] 

Hudson commented on HBASE-16710:


SUCCESS: Integrated in Jenkins build HBase-1.2-JDK7 #86 (See 
[https://builds.apache.org/job/HBase-1.2-JDK7/86/])
HBASE-16710 Add ZStandard Codec to Compression.java (apurtell: rev 
2e344a7c91df14f14aecc74c5c430f73c92ed7c6)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCompressionTest.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java


> Add ZStandard Codec to Compression.java
> ---
>
> Key: HBASE-16710
> URL: https://issues.apache.org/jira/browse/HBASE-16710
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: churro morales
>Assignee: churro morales
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: HBASE-16710-0.98.patch, HBASE-16710-1.2.patch, 
> HBASE-16710.patch
>
>
> HADOOP-13578 is adding the ZStandardCodec to hadoop.  This is a placeholder 
> to ensure it gets added to hbase once this gets upstream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15995) Separate replication WAL reading from shipping

2017-01-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802715#comment-15802715
 ] 

Ted Yu commented on HBASE-15995:


Results are encouraging.

Mind putting the patch on Review Board?

> Separate replication WAL reading from shipping
> --
>
> Key: HBASE-15995
> URL: https://issues.apache.org/jira/browse/HBASE-15995
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Fix For: 2.0.0
>
> Attachments: HBASE-15995.master.v1.patch
>
>
> Currently ReplicationSource reads edits from the WAL and ships them in the 
> same thread.
> By breaking out the reading from the shipping, we can introduce greater 
> parallelism and lay the foundation for further refactoring to a pipelined, 
> streaming model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17430) dead links in ref guide to class javadocs that moved out of user APIs.

2017-01-05 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-17430:
---

 Summary: dead links in ref guide to class javadocs that moved out 
of user APIs.
 Key: HBASE-17430
 URL: https://issues.apache.org/jira/browse/HBASE-17430
 Project: HBase
  Issue Type: Bug
  Components: documentation, website
Reporter: Sean Busbey


The ref guide currently has one or more links to javadocs that are no longer in 
our user APIs. Unfortunately they link via a Google search, which prevents the 
dead-link detector from finding them.

e.g. in architecture.adoc

{code}
Instead, they create small files similar to symbolic link files, named 
link:http://www.google.com/url?q=http%3A%2F%2Fhbase.apache.org%2Fapidocs%2Forg%2Fapache%2Fhadoop%2Fhbase%2Fio%2FReference.html=D=1=AFQjCNEkCbADZ3CgKHTtGYI8bJVwp663CA[Reference
 files], which point to either the top or bottom part of the parent store file 
according to the split point.
{code}

We should instead directly link to the correct location, e.g. 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/io/Reference.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16710) Add ZStandard Codec to Compression.java

2017-01-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802674#comment-15802674
 ] 

Hudson commented on HBASE-16710:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK8 #93 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/93/])
HBASE-16710 Add ZStandard Codec to Compression.java (apurtell: rev 
8f0d0e78cc79d0b25668d39dc3b94213c95c9a42)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCompressionTest.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java


> Add ZStandard Codec to Compression.java
> ---
>
> Key: HBASE-16710
> URL: https://issues.apache.org/jira/browse/HBASE-16710
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: churro morales
>Assignee: churro morales
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: HBASE-16710-0.98.patch, HBASE-16710-1.2.patch, 
> HBASE-16710.patch
>
>
> HADOOP-13578 is adding the ZStandardCodec to hadoop.  This is a placeholder 
> to ensure it gets added to hbase once this gets upstream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16710) Add ZStandard Codec to Compression.java

2017-01-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802668#comment-15802668
 ] 

Hudson commented on HBASE-16710:


FAILURE: Integrated in Jenkins build HBase-1.4 #584 (See 
[https://builds.apache.org/job/HBase-1.4/584/])
HBASE-16710 Add ZStandard Codec to Compression.java (apurtell: rev 
667c5eb3a08a3ba798e7784ac4d5bea2b32206ff)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCompressionTest.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java


> Add ZStandard Codec to Compression.java
> ---
>
> Key: HBASE-16710
> URL: https://issues.apache.org/jira/browse/HBASE-16710
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: churro morales
>Assignee: churro morales
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: HBASE-16710-0.98.patch, HBASE-16710-1.2.patch, 
> HBASE-16710.patch
>
>
> HADOOP-13578 is adding the ZStandardCodec to hadoop.  This is a placeholder 
> to ensure it gets added to hbase once this gets upstream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15995) Separate replication WAL reading from shipping

2017-01-05 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated HBASE-15995:
-
Status: Patch Available  (was: Open)

> Separate replication WAL reading from shipping
> --
>
> Key: HBASE-15995
> URL: https://issues.apache.org/jira/browse/HBASE-15995
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Fix For: 2.0.0
>
> Attachments: HBASE-15995.master.v1.patch
>
>
> Currently ReplicationSource reads edits from the WAL and ships them in the 
> same thread.
> By breaking out the reading from the shipping, we can introduce greater 
> parallelism and lay the foundation for further refactoring to a pipelined, 
> streaming model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16710) Add ZStandard Codec to Compression.java

2017-01-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802591#comment-15802591
 ] 

Hudson commented on HBASE-16710:


SUCCESS: Integrated in Jenkins build HBase-1.2-JDK8 #80 (See 
[https://builds.apache.org/job/HBase-1.2-JDK8/80/])
HBASE-16710 Add ZStandard Codec to Compression.java (apurtell: rev 
2e344a7c91df14f14aecc74c5c430f73c92ed7c6)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCompressionTest.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java


> Add ZStandard Codec to Compression.java
> ---
>
> Key: HBASE-16710
> URL: https://issues.apache.org/jira/browse/HBASE-16710
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: churro morales
>Assignee: churro morales
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: HBASE-16710-0.98.patch, HBASE-16710-1.2.patch, 
> HBASE-16710.patch
>
>
> HADOOP-13578 is adding the ZStandardCodec to hadoop.  This is a placeholder 
> to ensure it gets added to hbase once this gets upstream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17410) Use isEmpty instead of size() == 0 in hbase-client

2017-01-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802398#comment-15802398
 ] 

Hudson commented on HBASE-17410:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2264 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2264/])
HBASE-17410 Changed size() == 0 to isEmpty in hbase-client (elserj: rev 
df98d8dcd76835e59fe6df43197308215028a41e)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientSmallScanner.java
* (edit) 
hbase-client/src/test/java/org/apache/hadoop/hbase/TestHTableDescriptor.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableMultiplexer.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.java
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/Put.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientSmallReversedScanner.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/Increment.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java


> Use isEmpty instead of size() == 0 in hbase-client
> --
>
> Key: HBASE-17410
> URL: https://issues.apache.org/jira/browse/HBASE-17410
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17410.master.001.patch
>
>
> Use {code:java}.isEmpty(){code} instead of {code:java}size() == 0{code} when 
> possible in the *hbase-client* module.
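
A one-line illustration of the change pattern (a hypothetical snippet, not taken from the patch):

{code:java}
import java.util.ArrayList;
import java.util.List;

public class IsEmptyExample {
  public static void main(String[] args) {
    List<String> families = new ArrayList<>();
    boolean before = families.size() == 0;  // explicit size comparison
    boolean after = families.isEmpty();     // states the intent directly
    System.out.println(before + " " + after);
  }
}
{code}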



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15995) Separate replication WAL reading from shipping

2017-01-05 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated HBASE-15995:
-
Attachment: HBASE-15995.master.v1.patch

This patch does two things:

1)  Puts replication reading into a separate thread.  This is done by 
ReplicationWALEntryBatcher, which reads entries and puts them onto a queue.  
ReplicationSourceWorkerThread is reduced to simply reading batches off the 
queue and shipping them.

2)  Puts the actual WAL entry reading logic in a WALEntryStream class.  This 
implements Iterator.  Eventually, when we have a way to stream over the network, 
we can get rid of the batcher above and simplify to something like
{code}
while(entryStream.hasNext()) {
  shipEntry(entryStream.next());
}
{code}

I tried to keep the rest of the logic the same as what currently exists.  We 
could put ReplicationSource into another class ReplicationSourceV2 if so 
desired.

I believe all replication tests pass except TestGlobalThrottler.  This is 
because one thread is currently reading a batch, and the other thread is 
shipping the last batch, so even if your queue holds only 1 batch, you're using 
double the memory.  (If I modify the test to double the threshold, it passes)

I've done performance testing by setting up a single standalone region server 
shipping to a remote cluster, and then running PerformanceEvaluation to 
generate 3gb of data.  The amount of time for replication to catch up:
ReplicationSourceV1 - 190s   (source.size.capacity of 64mb)
ReplicationSourceV2 - 160s   (source.size.capacity of 32mb, with queue 
size of 1 so that the max memory used should be 64mb)

There's better performance in situations where reading or filtering entries is 
more expensive (e.g. contention for disk/cpu).  For example, I tried 
introducing a 100ms delay in a custom entry filter.  
ReplicationSourceV1  -  366s
ReplicationSourceV2  -  236s
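
To make the split concrete, here is a minimal reader/shipper sketch of the same batching idea, assuming hypothetical names (EntryBatch, the thread names) and a plain bounded queue rather than the actual patch classes:

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of the reader/shipper split: one thread reads and filters
// WAL entries into batches, another thread ships them. Not the actual patch code.
public class ReaderShipperSketch {

  static final class EntryBatch {
    final List<String> entries;  // stand-in for WAL edits
    EntryBatch(List<String> entries) { this.entries = entries; }
  }

  public static void main(String[] args) throws InterruptedException {
    // Bounded queue: with capacity 1, one batch can be buffered while another is
    // being shipped, which is roughly the "double the memory" effect noted above.
    final BlockingQueue<EntryBatch> queue = new ArrayBlockingQueue<>(1);
    final int batches = 3;

    Thread reader = new Thread(() -> {
      try {
        for (int i = 0; i < batches; i++) {
          // Read and filter the next batch (simulated), then hand it off.
          queue.put(new EntryBatch(Arrays.asList("edit-" + i)));  // blocks when full
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }, "wal-entry-reader");

    Thread shipper = new Thread(() -> {
      try {
        for (int i = 0; i < batches; i++) {
          EntryBatch batch = queue.take();  // blocks until a batch is ready
          System.out.println("shipping " + batch.entries);
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }, "replication-shipper");

    reader.start();
    shipper.start();
    reader.join();
    shipper.join();
  }
}
{code}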


> Separate replication WAL reading from shipping
> --
>
> Key: HBASE-15995
> URL: https://issues.apache.org/jira/browse/HBASE-15995
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Fix For: 2.0.0
>
> Attachments: HBASE-15995.master.v1.patch
>
>
> Currently ReplicationSource reads edits from the WAL and ships them in the 
> same thread.
> By breaking out the reading from the shipping, we can introduce greater 
> parallelism and lay the foundation for further refactoring to a pipelined, 
> streaming model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16710) Add ZStandard Codec to Compression.java

2017-01-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802321#comment-15802321
 ] 

Hudson commented on HBASE-16710:


FAILURE: Integrated in Jenkins build HBase-1.2-IT #582 (See 
[https://builds.apache.org/job/HBase-1.2-IT/582/])
HBASE-16710 Add ZStandard Codec to Compression.java (apurtell: rev 
2e344a7c91df14f14aecc74c5c430f73c92ed7c6)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCompressionTest.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java


> Add ZStandard Codec to Compression.java
> ---
>
> Key: HBASE-16710
> URL: https://issues.apache.org/jira/browse/HBASE-16710
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: churro morales
>Assignee: churro morales
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: HBASE-16710-0.98.patch, HBASE-16710-1.2.patch, 
> HBASE-16710.patch
>
>
> HADOOP-13578 is adding the ZStandardCodec to hadoop.  This is a placeholder 
> to ensure it gets added to hbase once this gets upstream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17290) Potential loss of data for replication of bulk loaded hfiles

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802292#comment-15802292
 ] 

Hadoop QA commented on HBASE-17290:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
51s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
29s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 4s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 80m 47s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 123m 36s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845828/HBASE-17290.v1.patch |
| JIRA Issue | HBASE-17290 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 45b15a51594d 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / df98d8d |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5149/testReport/ |
| modules | C: hbase-client hbase-server U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5149/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Potential loss of data for replication of bulk 

[jira] [Resolved] (HBASE-17395) [C++] Use custom line wrapping in formatting

2017-01-05 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar resolved HBASE-17395.
---
Resolution: Fixed

Thanks Sudeep, pushed the addendum v2 patch. 

> [C++] Use custom line wrapping in formatting
> 
>
> Key: HBASE-17395
> URL: https://issues.apache.org/jira/browse/HBASE-17395
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
>Assignee: Sudeep Sunthankar
> Fix For: HBASE-14850
>
> Attachments: HBASE-17395.HBASE-14850.v1.patch, 
> HBASE-17395.HBASE-14850.v2.patch
>
>
> We use the default line wrapping as per the Google style. We are changing it to 
> 100 in the code formatting script, i.e. bin/format-code.sh.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802279#comment-15802279
 ] 

Hadoop QA commented on HBASE-17408:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
38s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 0s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 3s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 79m 26s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
30s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 119m 21s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845826/HBASE-17408.v2.patch |
| JIRA Issue | HBASE-17408 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux bd297f0b798c 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / df98d8d |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5150/testReport/ |
| modules | C: hbase-client hbase-server U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5150/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Introduce per request limit by number of mutations
> 

[jira] [Commented] (HBASE-17388) Move ReplicationPeer and other replication related PB messages to the replication.proto

2017-01-05 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802277#comment-15802277
 ] 

Enis Soztutar commented on HBASE-17388:
---

Skimmed the patch, looks good. 

> Move ReplicationPeer and other replication related PB messages to the 
> replication.proto
> ---
>
> Key: HBASE-17388
> URL: https://issues.apache.org/jira/browse/HBASE-17388
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17388.patch, HBASE-17388.patch, HBASE-17388.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-15995) Separate replication WAL reading from shipping

2017-01-05 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon reassigned HBASE-15995:


Assignee: Vincent Poon

> Separate replication WAL reading from shipping
> --
>
> Key: HBASE-15995
> URL: https://issues.apache.org/jira/browse/HBASE-15995
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Fix For: 2.0.0
>
>
> Currently ReplicationSource reads edits from the WAL and ships them in the 
> same thread.
> By breaking out the reading from the shipping, we can introduce greater 
> parallelism and lay the foundation for further refactoring to a pipelined, 
> streaming model.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16710) Add ZStandard Codec to Compression.java

2017-01-05 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-16710:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.1.9
   1.2.5
   1.4.0
   1.3.0
   2.0.0
   Status: Resolved  (was: Patch Available)

Pushed to 1.x and master. 

> Add ZStandard Codec to Compression.java
> ---
>
> Key: HBASE-16710
> URL: https://issues.apache.org/jira/browse/HBASE-16710
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: churro morales
>Assignee: churro morales
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.9
>
> Attachments: HBASE-16710-0.98.patch, HBASE-16710-1.2.patch, 
> HBASE-16710.patch
>
>
> HADOOP-13578 is adding the ZStandardCodec to hadoop.  This is a placeholder 
> to ensure it gets added to hbase once this gets upstream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16710) Add ZStandard Codec to Compression.java

2017-01-05 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802123#comment-15802123
 ] 

Andrew Purtell commented on HBASE-16710:


I think we are good for commit. Doing so...

> Add ZStandard Codec to Compression.java
> ---
>
> Key: HBASE-16710
> URL: https://issues.apache.org/jira/browse/HBASE-16710
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: churro morales
>Assignee: churro morales
>Priority: Minor
> Attachments: HBASE-16710-0.98.patch, HBASE-16710-1.2.patch, 
> HBASE-16710.patch
>
>
> HADOOP-13578 is adding the ZStandardCodec to hadoop.  This is a placeholder 
> to ensure it gets added to hbase once this gets upstream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17409) Re-fix XSS request issue in JMXJsonServlet

2017-01-05 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-17409:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the review Ted (and maybe Sean ;))

> Re-fix XSS request issue in JMXJsonServlet
> --
>
> Key: HBASE-17409
> URL: https://issues.apache.org/jira/browse/HBASE-17409
> Project: HBase
>  Issue Type: Sub-task
>  Components: security, UI
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.8
>
> Attachments: HBASE-17409.001.patch, HBASE-17409.002.patch
>
>
> I have a patch here which should mitigate the XSS issue in this servlet 
> without the use of owasp.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17290) Potential loss of data for replication of bulk loaded hfiles

2017-01-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802012#comment-15802012
 ] 

Ted Yu commented on HBASE-17290:


Latest patch looks good - assuming TestMasterReplication passes.

> Potential loss of data for replication of bulk loaded hfiles
> 
>
> Key: HBASE-17290
> URL: https://issues.apache.org/jira/browse/HBASE-17290
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17290.patch, HBASE-17290.v1.patch
>
>
> Currently the support for replication of bulk loaded hfiles relies on bulk 
> load marker written in the WAL.
> The move of bulk loaded hfile(s) (into region directory) may succeed but the 
> write of bulk load marker may fail.
> This means that although the bulk loaded hfile is being served in the source 
> cluster, the replication wouldn't happen.
> Normally the operator is supposed to retry the bulk load, but relying on human 
> retry is not a robust solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15802009#comment-15802009
 ] 

ChiaPing Tsai commented on HBASE-17408:
---

programmers work at night :D

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch, 
> HBASE-17408.v2.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from client.
> We should consider adding per request limit through the number of mutations 
> in a batch.
> In recent troubleshooting sessions, customer had to do this in their 
> application code to avoid OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801993#comment-15801993
 ] 

Ted Yu commented on HBASE-17408:


+1 on v2, pending QA.

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch, 
> HBASE-17408.v2.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from client.
> We should consider adding per request limit through the number of mutations 
> in a batch.
> In recent troubleshooting sessions, customer had to do this in their 
> application code to avoid OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801989#comment-15801989
 ] 

Ted Yu commented on HBASE-17408:


It's already 1am in Taiwan, right ?

Please take your time in coming up with patch(es) :-)

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch, 
> HBASE-17408.v2.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from client.
> We should consider adding per request limit through the number of mutations 
> in a batch.
> In recent troubleshooting sessions, customer had to do this in their 
> application code to avoid OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17290) Potential loss of data for replication of bulk loaded hfiles

2017-01-05 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801949#comment-15801949
 ] 

Ashish Singhi commented on HBASE-17290:
---

Addressed the comments.
Please review.

> Potential loss of data for replication of bulk loaded hfiles
> 
>
> Key: HBASE-17290
> URL: https://issues.apache.org/jira/browse/HBASE-17290
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17290.patch, HBASE-17290.v1.patch
>
>
> Currently the support for replication of bulk loaded hfiles relies on bulk 
> load marker written in the WAL.
> The move of bulk loaded hfile(s) (into region directory) may succeed but the 
> write of bulk load marker may fail.
> This means that although the bulk loaded hfile is being served in the source 
> cluster, the replication wouldn't happen.
> Normally the operator is supposed to retry the bulk load, but relying on human 
> retry is not a robust solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17290) Potential loss of data for replication of bulk loaded hfiles

2017-01-05 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-17290:
--
Attachment: HBASE-17290.v1.patch

> Potential loss of data for replication of bulk loaded hfiles
> 
>
> Key: HBASE-17290
> URL: https://issues.apache.org/jira/browse/HBASE-17290
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17290.patch, HBASE-17290.v1.patch
>
>
> Currently the support for replication of bulk loaded hfiles relies on bulk 
> load marker written in the WAL.
> The move of bulk loaded hfile(s) (into region directory) may succeed but the 
> write of bulk load marker may fail.
> This means that although the bulk loaded hfile is being served in the source 
> cluster, the replication wouldn't happen.
> Normally the operator is supposed to retry the bulk load, but relying on human 
> retry is not a robust solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17408:
--
Status: Patch Available  (was: Open)

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch, 
> HBASE-17408.v2.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from client.
> We should consider adding per request limit through the number of mutations 
> in a batch.
> In recent troubleshooting sessions, customer had to do this in their 
> application code to avoid OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17408:
--
Attachment: HBASE-17408.v2.patch

address [~yuzhih...@gmail.com]'s comment on v2

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch, 
> HBASE-17408.v2.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from client.
> We should consider adding per request limit through the number of mutations 
> in a batch.
> In recent troubleshooting sessions, customer had to do this in their 
> application code to avoid OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17408:
--
Status: Open  (was: Patch Available)

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from the client.
> We should consider adding a per-request limit on the number of mutations 
> in a batch.
> In recent troubleshooting sessions, a customer had to do this in their 
> application code to avoid an OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801903#comment-15801903
 ] 

ChiaPing Tsai commented on HBASE-17408:
---

copy that

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from the client.
> We should consider adding a per-request limit on the number of mutations 
> in a batch.
> In recent troubleshooting sessions, a customer had to do this in their 
> application code to avoid an OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801896#comment-15801896
 ] 

Ted Yu commented on HBASE-17408:


bq. The heap size of row (rowSize) is useless for RequestRowsChecker.

I agree.
Mind changing the parameter name to heapSizeOfRow - just to make it clearer.

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from the client.
> We should consider adding a per-request limit on the number of mutations 
> in a batch.
> In recent troubleshooting sessions, a customer had to do this in their 
> application code to avoid an OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17290) Potential loss of data for replication of bulk loaded hfiles

2017-01-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801889#comment-15801889
 ] 

Ted Yu commented on HBASE-17290:


The LOG in that catch block is at error level.
Consider changing it to DEBUG, since the absence of the hfile implies an error in the 
commit step (which should be remedied by an operator retry).
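
To make the suggestion concrete, here is a minimal, hypothetical sketch of a DEBUG-level 
catch for the missing-hfile case (class, method, and log message below are made up for 
illustration; this is not the code in HFileReplicator or the attached patch):
{code:java}
// Illustrative sketch only -- not HFileReplicator and not the attached patch.
import java.io.FileNotFoundException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class ReplicationCopySketch {
  private static final Log LOG = LogFactory.getLog(ReplicationCopySketch.class);

  void copyBulkLoadedHFile(String hfilePath) {
    try {
      doCopy(hfilePath); // placeholder for the copy into the peer's staging directory
    } catch (FileNotFoundException e) {
      // The hfile is missing in the source cluster, which points at a failed bulk load
      // commit on the source side; the operator is expected to retry the bulk load.
      // Since this is an anticipated situation, log at DEBUG rather than ERROR.
      LOG.debug("Bulk loaded hfile " + hfilePath + " not found in source cluster, skipping", e);
    }
  }

  private void doCopy(String hfilePath) throws FileNotFoundException {
    // copy logic omitted in this sketch
  }
}
{code}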

> Potential loss of data for replication of bulk loaded hfiles
> 
>
> Key: HBASE-17290
> URL: https://issues.apache.org/jira/browse/HBASE-17290
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17290.patch
>
>
> Currently the support for replication of bulk loaded hfiles relies on the bulk 
> load marker written in the WAL.
> The move of the bulk loaded hfile(s) (into the region directory) may succeed but the 
> write of the bulk load marker may fail.
> This means that although the bulk loaded hfile is being served in the source cluster, 
> replication wouldn't happen.
> Normally the operator is supposed to retry the bulk load, but relying on human 
> retry is not a robust solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17290) Potential loss of data for replication of bulk loaded hfiles

2017-01-05 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801873#comment-15801873
 ] 

Ashish Singhi commented on HBASE-17290:
---

{quote} Can you point me to the code which handles the case where Path for bulk 
loaded hfile is recorded but the commit (move of hfile) fails ?
In that scenario, the file wouldn't be found at time of replication. {quote}
In that scenario we will end up here, 
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HFileReplicator.java#L380

> Potential loss of data for replication of bulk loaded hfiles
> 
>
> Key: HBASE-17290
> URL: https://issues.apache.org/jira/browse/HBASE-17290
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17290.patch
>
>
> Currently the support for replication of bulk loaded hfiles relies on the bulk 
> load marker written in the WAL.
> The move of the bulk loaded hfile(s) (into the region directory) may succeed but the 
> write of the bulk load marker may fail.
> This means that although the bulk loaded hfile is being served in the source cluster, 
> replication wouldn't happen.
> Normally the operator is supposed to retry the bulk load, but relying on human 
> retry is not a robust solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17290) Potential loss of data for replication of bulk loaded hfiles

2017-01-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801852#comment-15801852
 ] 

Ted Yu commented on HBASE-17290:


Please enrich the javadoc with an explanation of the second component of the Pair.
{code}
+   * @param pairs list of hfile references to be added
{code}
Please make the parameter name consistent - should be pairs
{code}
+  public void addHFileRefs(String peerId, List<Pair<Path, Path>> files)
{code}
For ReplicationObserver.java, please reference the format of license header of 
existing classes.
Consider adding 
@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.CONFIG) to this class.
{code}
+  LOG.debug("Skipping recording bulk load entries in preCommitStoreFile 
for bulkloaded "
+  + "data replication.");
{code}
It would be better if the case for bulk load replication and the case where 
pairs is empty are logged separately.
This would facilitate troubleshooting.

Can you point me to the code which handles the case where Path for bulk loaded 
hfile is recorded but the commit (move of hfile) fails ?
In that scenario, the file wouldn't be found at time of replication.
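
On the @InterfaceAudience point above, a minimal placement sketch (assumed import paths, 
which may differ between HBase versions; this is not the attached patch):
{code:java}
// Sketch of the suggested annotation placement; the observer body is omitted.
import org.apache.hadoop.hbase.HBaseInterfaceAudience;
import org.apache.hadoop.hbase.classification.InterfaceAudience;

@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.CONFIG)
public class ReplicationObserverSketch {
  // coprocessor hooks such as preCommitStoreFile(...) would go here
}
{code}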

> Potential loss of data for replication of bulk loaded hfiles
> 
>
> Key: HBASE-17290
> URL: https://issues.apache.org/jira/browse/HBASE-17290
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17290.patch
>
>
> Currently the support for replication of bulk loaded hfiles relies on the bulk 
> load marker written in the WAL.
> The move of the bulk loaded hfile(s) (into the region directory) may succeed but the 
> write of the bulk load marker may fail.
> This means that although the bulk loaded hfile is being served in the source cluster, 
> replication wouldn't happen.
> Normally the operator is supposed to retry the bulk load, but relying on human 
> retry is not a robust solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17410) Use isEmpty instead of size() == 0 in hbase-client

2017-01-05 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-17410:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, Jan.

> Use isEmpty instead of size() == 0 in hbase-client
> --
>
> Key: HBASE-17410
> URL: https://issues.apache.org/jira/browse/HBASE-17410
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17410.master.001.patch
>
>
> Use {code:java}.isEmpty(){code} instead of {code:java}size() == 0{code} when 
> possible in the *hbase-client* module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801844#comment-15801844
 ] 

ChiaPing Tsai commented on HBASE-17408:
---

bq. Please move the above check immediately below where this.maxRowsPerRequest 
is assigned.
copy that.

bq. Why is 1 used in the last line above ?
It means that an extra row is accepted, so we increment the row count by one.

bq. rowSize has no effect ?
Yes, RequestRowsChecker only considers the number of rows. The heap size of row 
(rowSize) is useless for RequestRowsChecker.
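
For readers following along, a rough sketch of the per-server row-counting idea discussed 
above (class, method, and parameter names here are illustrative and are not taken from the 
attached patches):
{code:java}
// Illustrative sketch only -- shows a per-request checker that counts accepted rows per
// server and deliberately ignores the heap size of each row.
import java.util.HashMap;
import java.util.Map;

public class RequestRowsCheckerSketch {
  enum ReturnCode { INCLUDE, END }

  private final long maxRowsPerRequest;
  // accepted row count per destination server (keyed by server name for simplicity)
  private final Map<String, Long> serverRows = new HashMap<>();

  RequestRowsCheckerSketch(long maxRowsPerRequest) {
    if (maxRowsPerRequest <= 0) {
      // validate right where the field is assigned, as requested in the review
      throw new IllegalArgumentException("maxRowsPerRequest=" + maxRowsPerRequest);
    }
    this.maxRowsPerRequest = maxRowsPerRequest;
  }

  /** Decide whether one more row for this server still fits into the current request. */
  ReturnCode canTakeRow(String serverName, long heapSizeOfRow) {
    // heapSizeOfRow is intentionally unused: this checker only limits the row count
    long currentRows = serverRows.getOrDefault(serverName, 0L);
    return currentRows < maxRowsPerRequest ? ReturnCode.INCLUDE : ReturnCode.END;
  }

  /** Called after the decision; on INCLUDE the row count for that server grows by one. */
  void notifyFinal(ReturnCode code, String serverName, long heapSizeOfRow) {
    if (code == ReturnCode.INCLUDE) {
      serverRows.put(serverName, serverRows.getOrDefault(serverName, 0L) + 1);
    }
  }
}
{code}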

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from the client.
> We should consider adding a per-request limit on the number of mutations 
> in a batch.
> In recent troubleshooting sessions, a customer had to do this in their 
> application code to avoid an OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17381) ReplicationSourceWorkerThread can die due to unhandled exceptions

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801828#comment-15801828
 ] 

Hadoop QA commented on HBASE-17381:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
32s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
47s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
30m 5s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 91m 37s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 135m 34s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845801/HBASE-17381.patch |
| JIRA Issue | HBASE-17381 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 1d4e7de18284 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / dba103e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5148/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5148/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> ReplicationSourceWorkerThread can die due to unhandled exceptions
> -
>
> Key: HBASE-17381
> URL: https://issues.apache.org/jira/browse/HBASE-17381
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Gary 

[jira] [Commented] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801798#comment-15801798
 ] 

Ted Yu commented on HBASE-17408:


Thanks for taking this JIRA.
{code}
+if (this.maxRowsPerRequest <= 0) {
+  throw new IllegalArgumentException("maxRowsPerRequest="
{code}
Please move the above check immediately below where this.maxRowsPerRequest is 
assigned.
{code}
+public void notifyFinal(ReturnCode code, HRegionLocation loc, long rowSize) {
+  if (code == ReturnCode.INCLUDE) {
+    long currentRows = serverRows.containsKey(loc.getServerName())
+        ? serverRows.get(loc.getServerName()) : 0L;
+    serverRows.put(loc.getServerName(), currentRows + 1);
{code}
Why is 1 used in the last line above ? rowSize has no effect ?


> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from the client.
> We should consider adding a per-request limit on the number of mutations 
> in a batch.
> In recent troubleshooting sessions, a customer had to do this in their 
> application code to avoid an OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801731#comment-15801731
 ] 

Hadoop QA commented on HBASE-17408:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
27s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 28s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 81m 33s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
28s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 123m 49s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845797/HBASE-17408.v1.patch |
| JIRA Issue | HBASE-17408 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 8171d10e0fc7 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / dba103e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5147/testReport/ |
| modules | C: hbase-client hbase-server U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5147/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Introduce per request limit by number of 

[jira] [Commented] (HBASE-17419) Use isEmpty instead of size() == 0 in hbase-server

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801622#comment-15801622
 ] 

Hadoop QA commented on HBASE-17419:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
12s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 43s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 81m 53s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 120m 59s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845791/HBASE-17419.master.001.patch
 |
| JIRA Issue | HBASE-17419 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 31e8c9ee7993 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / dba103e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5146/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5146/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Use isEmpty instead of size() == 0 in hbase-server
> --
>
> Key: HBASE-17419
> URL: https://issues.apache.org/jira/browse/HBASE-17419
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Minor
> Attachments: HBASE-17419.master.001.patch
>
>
> Use {code:java}.isEmpty(){code} instead 

[jira] [Commented] (HBASE-17379) Lack of synchronization in CompactionPipeline#getScanners()

2017-01-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801612#comment-15801612
 ] 

stack commented on HBASE-17379:
---

Do what you think best. Sub-task or new issue.

Regarding this issue, you do not have to go via [~te...@apache.org]'s 
complicating 'interpretation' of your locking prescription. Feel free to post 
your own patch. Thanks.



> Lack of synchronization in CompactionPipeline#getScanners()
> ---
>
> Key: HBASE-17379
> URL: https://issues.apache.org/jira/browse/HBASE-17379
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 17379.v1.txt, 17379.v14.txt, 17379.v2.txt, 17379.v3.txt, 
> 17379.v4.txt, 17379.v5.txt, 17379.v6.txt, 17379.v8.txt
>
>
> From 
> https://builds.apache.org/job/PreCommit-HBASE-Build/5053/testReport/org.apache.hadoop.hbase.regionserver/TestHRegionWithInMemoryFlush/testWritesWhileGetting/
>  :
> {code}
> java.io.IOException: java.util.ConcurrentModificationException
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.handleException(HRegion.java:5886)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5856)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5819)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2786)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2766)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7036)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7015)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6994)
>   at 
> org.apache.hadoop.hbase.regionserver.TestHRegion.testWritesWhileGetting(TestHRegion.java:4141)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.ConcurrentModificationException: null
>   at 
> java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
>   at java.util.LinkedList$ListItr.next(LinkedList.java:888)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactionPipeline.getScanners(CompactionPipeline.java:220)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactingMemStore.getScanners(CompactingMemStore.java:298)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanners(HStore.java:1154)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanners(Store.java:97)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.getScannersNoCompaction(StoreScanner.java:353)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:210)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:1892)
>   at 
> 

[jira] [Commented] (HBASE-17429) HBase bulkload cannot support HDFS viewFs

2017-01-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801584#comment-15801584
 ] 

Ted Yu commented on HBASE-17429:


Does the patch solve the problem for you ?
{code}
76  String nameService = 
serviceName.substring(serviceName.indexOf(":") + 1);
{code}
Should a check for the return value of indexOf() be added?
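
As a hypothetical illustration of that review point (the helper below is made up and is not 
part of the attached patch):
{code:java}
// Guard against a serviceName with no ':' before calling substring(); hypothetical helper.
public final class NameServiceParser {
  static String extractNameService(String serviceName) {
    int idx = serviceName.indexOf(':');
    if (idx < 0) {
      // indexOf() returned -1; substring(indexOf(":") + 1) would silently return the whole
      // string, so make the fallback (or an explicit failure) deliberate instead.
      return serviceName;
    }
    return serviceName.substring(idx + 1);
  }
}
{code}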

> HBase bulkload cannot support HDFS viewFs
> -
>
> Key: HBASE-17429
> URL: https://issues.apache.org/jira/browse/HBASE-17429
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.6.1, 1.2.4
> Environment: CDH5.7.0 hbase0.98.6
>Reporter: shenxianqiang
> Attachments: HBASE-17429.patch
>
>
> Since the hadoop cluster supports federation, hbase bulkload performance degrades 
> dramatically, even if the hbase directory and the bulkload directory are in the same 
> nameservice.
> {quote}
> 2017-01-04 21:58:40,919 INFO 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem: use bulkload file 
> name is : hdfs://CloudTestNameNode2:8020
> 2017-01-04 21:58:40,924 ERROR org.apache.hadoop.ipc.RpcServer: Unexpected 
> throwable object 
> java.lang.IllegalArgumentException: Wrong FS: 
> viewfs://nsX/user/test/I/_tmp/9cde5dde60374b1483b7d09b65258304.top, expected: 
> hdfs://CloudTestNameNode2:8020
> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:657)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:194)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1215)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1211)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1211)
> at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:425)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1412)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:373)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:496)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:708)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3658)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3564)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3378)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29589)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17290) Potential loss of data for replication of bulk loaded hfiles

2017-01-05 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801578#comment-15801578
 ] 

Ted Yu commented on HBASE-17290:


Test failure was caused by:
{code}
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hbase.replication.TestMasterReplication.startMiniClusters(TestMasterReplication.java:492)
at 
org.apache.hadoop.hbase.replication.TestMasterReplication.testHFileMultiSlaveReplication(TestMasterReplication.java:331)
{code}
Please fix in the next patch.

> Potential loss of data for replication of bulk loaded hfiles
> 
>
> Key: HBASE-17290
> URL: https://issues.apache.org/jira/browse/HBASE-17290
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17290.patch
>
>
> Currently the support for replication of bulk loaded hfiles relies on the bulk 
> load marker written in the WAL.
> The move of the bulk loaded hfile(s) (into the region directory) may succeed but the 
> write of the bulk load marker may fail.
> This means that although the bulk loaded hfile is being served in the source cluster, 
> replication wouldn't happen.
> Normally the operator is supposed to retry the bulk load, but relying on human 
> retry is not a robust solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17407) Correct update of maxFlushedSeqId in HRegion

2017-01-05 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801513#comment-15801513
 ] 

stack commented on HBASE-17407:
---

bq. (but for some reason the comment was deleted).

I intentionally deleted the comment because I felt it added little benefit to 
the back-and-forth here.

bq. I think it was important to understand that in the current state there is 
no danger of data loss.

You mean with the finalizeFlush/updateStore calls in place and NO inmemory 
compaction -- just BASIC mode where we flush all in the pipeline?

If the above, I think so. That said, the finalizeFlush/updateStore calls are 
new moving pieces and these corner cases are hard to manufacture.

bq. Code maintainability is also important.

Yes. This sequenceid accounting is unfortunately involved and tough to test.

bq. I can replace finalizeFlush with a preFlushSeqIDEstimation() which returns 
a lower bound on the sequence id that is invoked before we start the flush. 

You think this will restore our sequence id accounting to what it was before 
finalizeFlush/updateStore ?  How will we deal with the gap between the new 
edits coming in, filling lowestUnflushedSequenceIds after we have swapped it 
out to do the current flush, and the edits in the pipeline that did not get 
flushed during the current flush session?

bq. You say WAL truncation cannot be triggered during a flush. 

Indeed. See how closeBarrier is used in AbstractFSWAL

bq. Can the map in seq accounting be reported to master during a flush?

See HRegion#setCompleteSequenceId where we build our sequenceid to send to the 
master.  See how it asks the WAL subsystem for earliest edit by column family:

  long earliest = this.wal.getEarliestMemstoreSeqNum(encodedRegionName, 
familyName);

Here is the implementation:

{code}
@Override
public long getEarliestMemstoreSeqNum(byte[] encodedRegionName, byte[] familyName) {
  // This method is used by tests and for figuring if we should flush or not because our
  // sequenceids are too old. It is also used reporting the master our oldest sequenceid for use
  // figuring what edits can be skipped during log recovery. getEarliestMemStoreSequenceId
  // from this.sequenceIdAccounting is looking first in flushingOldestStoreSequenceIds, the
  // currently flushing sequence ids, and if anything found there, it is returning these. This is
  // the right thing to do for the reporting oldest sequenceids to master; we won't skip edits if
  // we crash during the flush. For figuring what to flush, we might get requeued if our sequence
  // id is old even though we are currently flushing. This may mean we do too much flushing.
  return this.sequenceIdAccounting.getLowestSequenceId(encodedRegionName, familyName);
}
{code}

It tries to explain how it works.

That it returns flushingSequenceIds and then lowestUnflushedSequenceIds if the 
former is not present may be what [~Apache9] is referring to with 'not report 
the value if a flush is ongoing' (I did not see a block on reporting during 
'flush' -- maybe I'm looking in the wrong place).

Thanks.

> Correct update of maxFlushedSeqId in HRegion
> 
>
> Key: HBASE-17407
> URL: https://issues.apache.org/jira/browse/HBASE-17407
> Project: HBase
>  Issue Type: Bug
>Reporter: Eshcar Hillel
>
> The attribute maxFlushedSeqId in HRegion is used to track the max sequence id 
> in the store files and is reported to HMaster. When flushing only part of the 
> memstore content this value might be incorrect and may cause data loss.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17290) Potential loss of data for replication of bulk loaded hfiles

2017-01-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801495#comment-15801495
 ] 

Hadoop QA commented on HBASE-17290:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
6s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
29s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 35s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 5s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 7s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 125m 5s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.replication.TestMasterReplication |
|   | hadoop.hbase.regionserver.TestHRegion |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845785/HBASE-17290.patch |
| JIRA Issue | HBASE-17290 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux ef0fbfb7688c 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / dba103e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5145/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/5145/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  

[jira] [Updated] (HBASE-17381) ReplicationSourceWorkerThread can die due to unhandled exceptions

2017-01-05 Thread huzheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huzheng updated HBASE-17381:

Assignee: huzheng
  Status: Patch Available  (was: Open)

> ReplicationSourceWorkerThread can die due to unhandled exceptions
> -
>
> Key: HBASE-17381
> URL: https://issues.apache.org/jira/browse/HBASE-17381
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Gary Helmling
>Assignee: huzheng
> Attachments: HBASE-17381.patch
>
>
> If a ReplicationSourceWorkerThread encounters an unexpected exception in the 
> run() method (for example failure to allocate direct memory for the DFS 
> client), the exception will be logged by the UncaughtExceptionHandler, but 
> the thread will also die and the replication queue will back up indefinitely 
> until the Regionserver is restarted.
> We should make sure the worker thread is resilient to all exceptions that it 
> can actually handle.  For those that it really can't, it seems better to 
> abort the regionserver rather than just allow replication to stop with 
> minimal signal.
> Here is a sample exception:
> {noformat}
> ERROR regionserver.ReplicationSource: Unexpected exception in 
> ReplicationSourceWorkerThread, 
> currentPath=hdfs://.../hbase/WALs/XXXwalfilenameXXX
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:693)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.(CryptoOutputStream.java:96)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.(CryptoOutputStream.java:113)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.(CryptoOutputStream.java:108)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.createStreamPair(DataTransferSaslUtil.java:344)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:490)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:391)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:263)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.peerSend(SaslDataTransferClient.java:160)
> at 
> org.apache.hadoop.hdfs.net.TcpPeerServer.peerFromSocketAndKey(TcpPeerServer.java:92)
> at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3444)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:778)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:695)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:356)
> at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:673)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
> at java.io.DataInputStream.read(DataInputStream.java:100)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:308)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:276)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:264)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:423)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:70)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.openReader(ReplicationSource.java:830)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:572)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-17381) ReplicationSourceWorkerThread can die due to unhandled exceptions

2017-01-05 Thread huzheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801468#comment-15801468
 ] 

huzheng edited comment on HBASE-17381 at 1/5/17 2:22 PM:
-

[~ghelmling] I uploaded a patch to abort the region server if an OOME occurs. For 
other exception cases, I throw them (the ReplicationSourceWorkerThread exits and 
the region server keeps running) because it seems hard to be sure whether the case 
is recoverable or not.


was (Author: openinx):
[~ghelmling] I upload a patch to abort region server if OOME occur.  for other 
exception cases,  I throw them because it seems hard to make sure  whether  
it's recoverable case or not (ReplicationSourceWorkerThread exit and region 
server keep running).

> ReplicationSourceWorkerThread can die due to unhandled exceptions
> -
>
> Key: HBASE-17381
> URL: https://issues.apache.org/jira/browse/HBASE-17381
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Gary Helmling
> Attachments: HBASE-17381.patch
>
>
> If a ReplicationSourceWorkerThread encounters an unexpected exception in the 
> run() method (for example failure to allocate direct memory for the DFS 
> client), the exception will be logged by the UncaughtExceptionHandler, but 
> the thread will also die and the replication queue will back up indefinitely 
> until the Regionserver is restarted.
> We should make sure the worker thread is resilient to all exceptions that it 
> can actually handle.  For those that it really can't, it seems better to 
> abort the regionserver rather than just allow replication to stop with 
> minimal signal.
> Here is a sample exception:
> {noformat}
> ERROR regionserver.ReplicationSource: Unexpected exception in 
> ReplicationSourceWorkerThread, 
> currentPath=hdfs://.../hbase/WALs/XXXwalfilenameXXX
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:693)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.(CryptoOutputStream.java:96)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.(CryptoOutputStream.java:113)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.(CryptoOutputStream.java:108)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.createStreamPair(DataTransferSaslUtil.java:344)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:490)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:391)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:263)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.peerSend(SaslDataTransferClient.java:160)
> at 
> org.apache.hadoop.hdfs.net.TcpPeerServer.peerFromSocketAndKey(TcpPeerServer.java:92)
> at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3444)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:778)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:695)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:356)
> at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:673)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
> at java.io.DataInputStream.read(DataInputStream.java:100)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:308)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:276)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:264)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:423)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:70)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.openReader(ReplicationSource.java:830)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:572)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17381) ReplicationSourceWorkerThread can die due to unhandled exceptions

2017-01-05 Thread huzheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801468#comment-15801468
 ] 

huzheng commented on HBASE-17381:
-

[~ghelmling] I uploaded a patch to abort the region server if an OOME occurs. For 
other exception cases, I throw them because it seems hard to be sure whether the 
case is recoverable or not (the ReplicationSourceWorkerThread exits and the region 
server keeps running).
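
A rough sketch of the behaviour described above (placeholder names; this is not the content 
of HBASE-17381.patch):
{code:java}
// Illustrative sketch only: abort the region server on OOME, rethrow other unexpected
// Throwables so the worker dies with a clear signal instead of silently stalling.
public class ReplicationWorkerSketch implements Runnable {
  interface Abortable { void abort(String why, Throwable t); }

  private final Abortable regionServer;

  ReplicationWorkerSketch(Abortable regionServer) {
    this.regionServer = regionServer;
  }

  @Override
  public void run() {
    try {
      replicateLoop();
    } catch (OutOfMemoryError oome) {
      // unrecoverable: take the whole region server down rather than silently
      // losing the replication worker
      regionServer.abort("OOME in replication source worker", oome);
    } catch (Throwable t) {
      // other unexpected exceptions: whether they are recoverable is hard to decide
      // generically, so surface them instead of swallowing them
      throw new RuntimeException("Unexpected exception in replication worker", t);
    }
  }

  private void replicateLoop() {
    // placeholder: read WAL entries and ship them to the peer cluster
  }
}
{code}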

> ReplicationSourceWorkerThread can die due to unhandled exceptions
> -
>
> Key: HBASE-17381
> URL: https://issues.apache.org/jira/browse/HBASE-17381
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Gary Helmling
> Attachments: HBASE-17381.patch
>
>
> If a ReplicationSourceWorkerThread encounters an unexpected exception in the 
> run() method (for example failure to allocate direct memory for the DFS 
> client), the exception will be logged by the UncaughtExceptionHandler, but 
> the thread will also die and the replication queue will back up indefinitely 
> until the Regionserver is restarted.
> We should make sure the worker thread is resilient to all exceptions that it 
> can actually handle.  For those that it really can't, it seems better to 
> abort the regionserver rather than just allow replication to stop with 
> minimal signal.
> Here is a sample exception:
> {noformat}
> ERROR regionserver.ReplicationSource: Unexpected exception in 
> ReplicationSourceWorkerThread, 
> currentPath=hdfs://.../hbase/WALs/XXXwalfilenameXXX
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:693)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.(CryptoOutputStream.java:96)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.(CryptoOutputStream.java:113)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.(CryptoOutputStream.java:108)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.createStreamPair(DataTransferSaslUtil.java:344)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:490)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:391)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:263)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.peerSend(SaslDataTransferClient.java:160)
> at 
> org.apache.hadoop.hdfs.net.TcpPeerServer.peerFromSocketAndKey(TcpPeerServer.java:92)
> at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3444)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:778)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:695)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:356)
> at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:673)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
> at java.io.DataInputStream.read(DataInputStream.java:100)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:308)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:276)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:264)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:423)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:70)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.openReader(ReplicationSource.java:830)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:572)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17381) ReplicationSourceWorkerThread can die due to unhandled exceptions

2017-01-05 Thread huzheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huzheng updated HBASE-17381:

Attachment: HBASE-17381.patch

> ReplicationSourceWorkerThread can die due to unhandled exceptions
> -
>
> Key: HBASE-17381
> URL: https://issues.apache.org/jira/browse/HBASE-17381
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Gary Helmling
> Attachments: HBASE-17381.patch
>
>
> If a ReplicationSourceWorkerThread encounters an unexpected exception in the 
> run() method (for example failure to allocate direct memory for the DFS 
> client), the exception will be logged by the UncaughtExceptionHandler, but 
> the thread will also die and the replication queue will back up indefinitely 
> until the Regionserver is restarted.
> We should make sure the worker thread is resilient to all exceptions that it 
> can actually handle.  For those that it really can't, it seems better to 
> abort the regionserver rather than just allow replication to stop with 
> minimal signal.
> Here is a sample exception:
> {noformat}
> ERROR regionserver.ReplicationSource: Unexpected exception in 
> ReplicationSourceWorkerThread, 
> currentPath=hdfs://.../hbase/WALs/XXXwalfilenameXXX
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:693)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.(CryptoOutputStream.java:96)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.(CryptoOutputStream.java:113)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.(CryptoOutputStream.java:108)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.createStreamPair(DataTransferSaslUtil.java:344)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:490)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:391)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:263)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.peerSend(SaslDataTransferClient.java:160)
> at 
> org.apache.hadoop.hdfs.net.TcpPeerServer.peerFromSocketAndKey(TcpPeerServer.java:92)
> at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3444)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:778)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:695)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:356)
> at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:673)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
> at java.io.DataInputStream.read(DataInputStream.java:100)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:308)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:276)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:264)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:423)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:70)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.openReader(ReplicationSource.java:830)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:572)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17381) ReplicationSourceWorkerThread can die due to unhandled exceptions

2017-01-05 Thread huzheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huzheng updated HBASE-17381:

Status: Open  (was: Patch Available)

> ReplicationSourceWorkerThread can die due to unhandled exceptions
> -
>
> Key: HBASE-17381
> URL: https://issues.apache.org/jira/browse/HBASE-17381
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Gary Helmling
>
> If a ReplicationSourceWorkerThread encounters an unexpected exception in the 
> run() method (for example failure to allocate direct memory for the DFS 
> client), the exception will be logged by the UncaughtExceptionHandler, but 
> the thread will also die and the replication queue will back up indefinitely 
> until the Regionserver is restarted.
> We should make sure the worker thread is resilient to all exceptions that it 
> can actually handle.  For those that it really can't, it seems better to 
> abort the regionserver rather than just allow replication to stop with 
> minimal signal.
> Here is a sample exception:
> {noformat}
> ERROR regionserver.ReplicationSource: Unexpected exception in 
> ReplicationSourceWorkerThread, 
> currentPath=hdfs://.../hbase/WALs/XXXwalfilenameXXX
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:693)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.(CryptoOutputStream.java:96)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.(CryptoOutputStream.java:113)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.(CryptoOutputStream.java:108)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.createStreamPair(DataTransferSaslUtil.java:344)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:490)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:391)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:263)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.peerSend(SaslDataTransferClient.java:160)
> at 
> org.apache.hadoop.hdfs.net.TcpPeerServer.peerFromSocketAndKey(TcpPeerServer.java:92)
> at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3444)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:778)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:695)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:356)
> at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:673)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
> at java.io.DataInputStream.read(DataInputStream.java:100)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:308)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:276)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:264)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:423)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:70)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.openReader(ReplicationSource.java:830)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:572)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17381) ReplicationSourceWorkerThread can die due to unhandled exceptions

2017-01-05 Thread huzheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huzheng updated HBASE-17381:

Status: Patch Available  (was: Open)

> ReplicationSourceWorkerThread can die due to unhandled exceptions
> -
>
> Key: HBASE-17381
> URL: https://issues.apache.org/jira/browse/HBASE-17381
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Gary Helmling
>
> If a ReplicationSourceWorkerThread encounters an unexpected exception in the 
> run() method (for example failure to allocate direct memory for the DFS 
> client), the exception will be logged by the UncaughtExceptionHandler, but 
> the thread will also die and the replication queue will back up indefinitely 
> until the Regionserver is restarted.
> We should make sure the worker thread is resilient to all exceptions that it 
> can actually handle.  For those that it really can't, it seems better to 
> abort the regionserver rather than just allow replication to stop with 
> minimal signal.
> Here is a sample exception:
> {noformat}
> ERROR regionserver.ReplicationSource: Unexpected exception in 
> ReplicationSourceWorkerThread, 
> currentPath=hdfs://.../hbase/WALs/XXXwalfilenameXXX
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:693)
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.(CryptoOutputStream.java:96)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.(CryptoOutputStream.java:113)
> at 
> org.apache.hadoop.crypto.CryptoOutputStream.(CryptoOutputStream.java:108)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.createStreamPair(DataTransferSaslUtil.java:344)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:490)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:391)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:263)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.peerSend(SaslDataTransferClient.java:160)
> at 
> org.apache.hadoop.hdfs.net.TcpPeerServer.peerFromSocketAndKey(TcpPeerServer.java:92)
> at org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3444)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:778)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:695)
> at 
> org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:356)
> at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:673)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
> at java.io.DataInputStream.read(DataInputStream.java:100)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:308)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:276)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:264)
> at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:423)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:70)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.openReader(ReplicationSource.java:830)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:572)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17408:
--
Attachment: HBASE-17408.v1.patch

v1 adds a trivial change in the hbase-server module. 

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from the client.
> We should consider adding a per-request limit on the number of mutations 
> in a batch.
> In recent troubleshooting sessions, a customer had to do this in their 
> application code to avoid an OOME on the server side.
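
For illustration, the application-side workaround mentioned above might look roughly like the sketch below (assumptions: the batch size of 1000 and the table name "t1" are invented, and the helper simply splits the list before calling Table#put):

{code:java}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;

/** Sketch of a client-side cap on the number of mutations sent per batch. */
public class ChunkedPuts {
  private static final int MAX_MUTATIONS_PER_BATCH = 1000; // arbitrary example value

  public static void putInChunks(Connection conn, List<Put> puts) throws IOException {
    try (Table table = conn.getTable(TableName.valueOf("t1"))) {
      for (int i = 0; i < puts.size(); i += MAX_MUTATIONS_PER_BATCH) {
        int end = Math.min(i + MAX_MUTATIONS_PER_BATCH, puts.size());
        // Each RPC batch carries at most MAX_MUTATIONS_PER_BATCH mutations.
        table.put(puts.subList(i, end));
      }
    }
  }
}
{code}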



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17408:
--
Status: Patch Available  (was: Open)

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch, HBASE-17408.v1.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from the client.
> We should consider adding a per-request limit on the number of mutations 
> in a batch.
> In recent troubleshooting sessions, a customer had to do this in their 
> application code to avoid an OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17408) Introduce per request limit by number of mutations

2017-01-05 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17408:
--
Status: Open  (was: Patch Available)

> Introduce per request limit by number of mutations
> --
>
> Key: HBASE-17408
> URL: https://issues.apache.org/jira/browse/HBASE-17408
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Ted Yu
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17408.v0.patch
>
>
> HBASE-16224 introduced hbase.client.max.perrequest.heapsize to limit the 
> amount of data sent from the client.
> We should consider adding a per-request limit on the number of mutations 
> in a batch.
> In recent troubleshooting sessions, a customer had to do this in their 
> application code to avoid an OOME on the server side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-14061) Support CF-level Storage Policy

2017-01-05 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801374#comment-15801374
 ] 

Ashish Singhi edited comment on HBASE-14061 at 1/5/17 1:40 PM:
---

{code}
  /**
   * Return the encryption algorithm in use by this family
   *
   * Not using {@code enum} here because HDFS is not using {@code enum} for storage policy,
   * see org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite for more details
   */
  public String getStoragePolicy() {
    return getValue(STORAGE_POLICY);
  }

  /**
   * Set the encryption algorithm for use with this family
   * @param policy
   */
  public HColumnDescriptor setStoragePolicy(String policy) {
    setValue(STORAGE_POLICY, policy);
    return this;
  }
{code}
That javadoc is for HCD#getEncryptionType and setEncryptionType; it needs to be corrected.
Otherwise LGTM.


was (Author: ashish singhi):
{code}
  /**
   * Return the encryption algorithm in use by this family
   *
   * Not using {@code enum} here because HDFS is not using {@code enum} for storage policy,
   * see org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite for more details
   */
  public String getStoragePolicy() {
    return getValue(STORAGE_POLICY);
  }

  /**
   * Set the encryption algorithm for use with this family
   * @param policy
   */
  public HColumnDescriptor setStoragePolicy(String policy) {
    setValue(STORAGE_POLICY, policy);
    return this;
  }
{code}
That javadoc is for HCD#getEncryptionType; it needs to be corrected.
Otherwise LGTM.

> Support CF-level Storage Policy
> ---
>
> Key: HBASE-14061
> URL: https://issues.apache.org/jira/browse/HBASE-14061
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile, regionserver
> Environment: hadoop-2.6.0
>Reporter: Victor Xu
>Assignee: Yu Li
> Attachments: HBASE-14061-master-v1.patch, HBASE-14061.v2.patch, 
> HBASE-14061.v3.patch
>
>
> After reading [HBASE-12848|https://issues.apache.org/jira/browse/HBASE-12848] 
> and [HBASE-12934|https://issues.apache.org/jira/browse/HBASE-12934], I wrote 
> a patch to implement cf-level storage policy. 
> My main purpose is to improve random-read performance for some really hot 
> data, which usually lives in a certain column family of a big table.
> Usage:
> $ hbase shell
> > alter 'TABLE_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 
> > 'POLICY_NAME'}
> > alter 'TABLE_NAME', {NAME=>'CF_NAME', METADATA => 
> > {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}}
> HDFS's setStoragePolicy can only take effect when a new hfile is created in a 
> configured directory, so I had to make sub-directories (one per cf) in the 
> region's .tmp directory and set the storage policy for them.
> Besides, I had to upgrade the hadoop version to 2.6.0 because 
> dfs.getStoragePolicy cannot easily be called via reflection, and I needed 
> this API to finish my unit test.
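
For reference, a hedged sketch of doing the same from the Java client API (it relies only on the metadata key shown in the shell example above; the table, family, and policy names are placeholders, and valid policy names depend on the HDFS policies available):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

/** Sketch: set the per-CF storage policy attribute, mirroring the shell usage above. */
public class SetCfStoragePolicy {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("TABLE_NAME");               // placeholder
      HTableDescriptor htd = admin.getTableDescriptor(table);
      HColumnDescriptor cf = htd.getFamily(Bytes.toBytes("CF_NAME"));  // placeholder
      // Same metadata key as in the shell example; it takes effect for newly written hfiles.
      cf.setValue("hbase.hstore.block.storage.policy", "ONE_SSD");
      admin.modifyColumn(table, cf);
    }
  }
}
{code}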



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-05 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801374#comment-15801374
 ] 

Ashish Singhi commented on HBASE-14061:
---

{code}
  /**
   * Return the encryption algorithm in use by this family
   *
   * Not using {@code enum} here because HDFS is not using {@code enum} for storage policy,
   * see org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite for more details
   */
  public String getStoragePolicy() {
    return getValue(STORAGE_POLICY);
  }

  /**
   * Set the encryption algorithm for use with this family
   * @param policy
   */
  public HColumnDescriptor setStoragePolicy(String policy) {
    setValue(STORAGE_POLICY, policy);
    return this;
  }
{code}
That javadoc is for HCD#getEncryptionType; it needs to be corrected.
Otherwise LGTM.

> Support CF-level Storage Policy
> ---
>
> Key: HBASE-14061
> URL: https://issues.apache.org/jira/browse/HBASE-14061
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile, regionserver
> Environment: hadoop-2.6.0
>Reporter: Victor Xu
>Assignee: Yu Li
> Attachments: HBASE-14061-master-v1.patch, HBASE-14061.v2.patch, 
> HBASE-14061.v3.patch
>
>
> After reading [HBASE-12848|https://issues.apache.org/jira/browse/HBASE-12848] 
> and [HBASE-12934|https://issues.apache.org/jira/browse/HBASE-12934], I wrote 
> a patch to implement cf-level storage policy. 
> My main purpose is to improve random-read performance for some really hot 
> data, which usually lives in a certain column family of a big table.
> Usage:
> $ hbase shell
> > alter 'TABLE_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 
> > 'POLICY_NAME'}
> > alter 'TABLE_NAME', {NAME=>'CF_NAME', METADATA => 
> > {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}}
> HDFS's setStoragePolicy can only take effect when a new hfile is created in a 
> configured directory, so I had to make sub-directories (one per cf) in the 
> region's .tmp directory and set the storage policy for them.
> Besides, I had to upgrade the hadoop version to 2.6.0 because 
> dfs.getStoragePolicy cannot easily be called via reflection, and I needed 
> this API to finish my unit test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15172) Support setting storage policy in bulkload

2017-01-05 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801353#comment-15801353
 ] 

Ashish Singhi commented on HBASE-15172:
---

+1

> Support setting storage policy in bulkload
> --
>
> Key: HBASE-15172
> URL: https://issues.apache.org/jira/browse/HBASE-15172
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Yu Li
>Assignee: Yu Li
> Attachments: HBASE-15172.patch, HBASE-15172.v2.patch
>
>
> When using tiered HFile storage, we should be able to generate hfiles with 
> the correct storage type during bulk load. This JIRA is targeted at making it 
> possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HBASE-17419) Use isEmpty instead of size() == 0 in hbase-server

2017-01-05 Thread Jan Hentschel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-17419 started by Jan Hentschel.
-
> Use isEmpty instead of size() == 0 in hbase-server
> --
>
> Key: HBASE-17419
> URL: https://issues.apache.org/jira/browse/HBASE-17419
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Minor
> Attachments: HBASE-17419.master.001.patch
>
>
> Use {code:java}.isEmpty(){code} instead of {code:java}size() == 0{code} when 
> possible in the *hbase-server* module.
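
For reference, a minimal before/after sketch of the kind of change this sub-task covers (class and method names are made up):

{code:java}
import java.util.List;

/** Tiny illustration of the refactoring: prefer isEmpty() over size() == 0. */
public class IsEmptyExample {
  static boolean isQueueEmptyOld(List<String> queue) {
    return queue.size() == 0;   // old style
  }

  static boolean isQueueEmptyNew(List<String> queue) {
    return queue.isEmpty();     // preferred: same semantics, clearer intent
  }
}
{code}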



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17419) Use isEmpty instead of size() == 0 in hbase-server

2017-01-05 Thread Jan Hentschel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Hentschel updated HBASE-17419:
--
Status: Patch Available  (was: In Progress)

> Use isEmpty instead of size() == 0 in hbase-server
> --
>
> Key: HBASE-17419
> URL: https://issues.apache.org/jira/browse/HBASE-17419
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Minor
> Attachments: HBASE-17419.master.001.patch
>
>
> Use {code:java}.isEmpty(){code} instead of {code:java}size() == 0{code} when 
> possible in the *hbase-server* module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17419) Use isEmpty instead of size() == 0 in hbase-server

2017-01-05 Thread Jan Hentschel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Hentschel updated HBASE-17419:
--
Attachment: HBASE-17419.master.001.patch

> Use isEmpty instead of size() == 0 in hbase-server
> --
>
> Key: HBASE-17419
> URL: https://issues.apache.org/jira/browse/HBASE-17419
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jan Hentschel
>Priority: Minor
> Attachments: HBASE-17419.master.001.patch
>
>
> Use {code:java}.isEmpty(){code} instead of {code:java}size() == 0{code} when 
> possible in the *hbase-server* module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-17419) Use isEmpty instead of size() == 0 in hbase-server

2017-01-05 Thread Jan Hentschel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Hentschel reassigned HBASE-17419:
-

Assignee: Jan Hentschel

> Use isEmpty instead of size() == 0 in hbase-server
> --
>
> Key: HBASE-17419
> URL: https://issues.apache.org/jira/browse/HBASE-17419
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jan Hentschel
>Assignee: Jan Hentschel
>Priority: Minor
> Attachments: HBASE-17419.master.001.patch
>
>
> Use {code:java}.isEmpty(){code} instead of {code:java}size() == 0{code} when 
> possible in the *hbase-server* module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-17290) Potential loss of data for replication of bulk loaded hfiles

2017-01-05 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801226#comment-15801226
 ] 

Ashish Singhi edited comment on HBASE-17290 at 1/5/17 12:25 PM:


Sorry for the delay, got stuck with company work.
I have attached the patch.

Added a new RS observer, ReplicationObserver, to solve this bug.
Please review.


was (Author: ashish singhi):
Sorry, got stuck with company work.
I have attached the patch.

Added a new RS observer, ReplicationObserver, to solve this bug.
Please review.

> Potential loss of data for replication of bulk loaded hfiles
> 
>
> Key: HBASE-17290
> URL: https://issues.apache.org/jira/browse/HBASE-17290
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Ted Yu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17290.patch
>
>
> Currently the support for replication of bulk loaded hfiles relies on the 
> bulk load marker written to the WAL.
> The move of the bulk loaded hfile(s) (into the region directory) may succeed, 
> but the write of the bulk load marker may fail.
> This means that although the bulk loaded hfile is being served in the source 
> cluster, the replication wouldn't happen.
> Normally the operator is supposed to retry the bulk load, but relying on 
> human retry is not a robust solution.
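
To make the shape of the fix concrete, here is a rough, hypothetical sketch of a region server observer that hooks bulk loads (this is not the attached patch; the choice of the preBulkLoadHFile hook and the bookkeeping helper are assumptions made for illustration):

{code:java}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.util.Pair;

/** Rough illustration only -- not the HBASE-17290 patch. */
public class IllustrativeReplicationObserver extends BaseRegionObserver {

  @Override
  public void preBulkLoadHFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
      List<Pair<byte[], String>> familyPaths) throws IOException {
    // familyPaths pairs each column family with the staged hfile path being bulk loaded.
    recordHFileRefsForReplication(familyPaths);
  }

  /** Hypothetical helper: record the hfile references with the replication machinery so a
   *  failed WAL bulk-load-marker write cannot leave the peer cluster without the data. */
  private void recordHFileRefsForReplication(List<Pair<byte[], String>> familyPaths) {
    // Bookkeeping would go here (e.g. handing the refs to the replication queue storage).
  }
}
{code}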



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-17290) Potential loss of data for replication of bulk loaded hfiles

2017-01-05 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi reassigned HBASE-17290:
-

Assignee: Ashish Singhi

> Potential loss of data for replication of bulk loaded hfiles
> 
>
> Key: HBASE-17290
> URL: https://issues.apache.org/jira/browse/HBASE-17290
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17290.patch
>
>
> Currently the support for replication of bulk loaded hfiles relies on the 
> bulk load marker written to the WAL.
> The move of the bulk loaded hfile(s) (into the region directory) may succeed, 
> but the write of the bulk load marker may fail.
> This means that although the bulk loaded hfile is being served in the source 
> cluster, the replication wouldn't happen.
> Normally the operator is supposed to retry the bulk load, but relying on 
> human retry is not a robust solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17290) Potential loss of data for replication of bulk loaded hfiles

2017-01-05 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801226#comment-15801226
 ] 

Ashish Singhi commented on HBASE-17290:
---

Sorry, got stuck with company work.
I have attached the patch.

Added a new RS observer, ReplicationObserver, to solve this bug.
Please review.

> Potential loss of data for replication of bulk loaded hfiles
> 
>
> Key: HBASE-17290
> URL: https://issues.apache.org/jira/browse/HBASE-17290
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Ted Yu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-17290.patch
>
>
> Currently the support for replication of bulk loaded hfiles relies on the 
> bulk load marker written to the WAL.
> The move of the bulk loaded hfile(s) (into the region directory) may succeed, 
> but the write of the bulk load marker may fail.
> This means that although the bulk loaded hfile is being served in the source 
> cluster, the replication wouldn't happen.
> Normally the operator is supposed to retry the bulk load, but relying on 
> human retry is not a robust solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

