[jira] [Updated] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-24 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-15035:
---
Attachment: (was: HBASE-15035-v4.patch)

> bulkloading hfiles with tags that require splits do not preserve tags
> -
>
> Key: HBASE-15035
> URL: https://issues.apache.org/jira/browse/HBASE-15035
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Attachments: HBASE-15035-v2.patch, HBASE-15035-v3.patch, 
> HBASE-15035-v4.patch, HBASE-15035.patch
>
>
> When an hfile is created with cell tags present and it is bulk loaded into 
> hbase the tags will be present when loaded into a single region.  If the bulk 
> load hfile spans multiple regions, bulk load automatically splits the 
> original hfile into a set of split hfiles corresponding to each of the 
> regions that the original covers.  
> Since 0.98, tags are not copied into the newly created split hfiles. (the 
> default for "includeTags" of the HFileContextBuilder [1] is uninitialized 
> which defaults to false).   This means acls, ttls, mob pointers and other tag 
> stored values will not be bulk loaded in.
> [1]  
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java#L40



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-24 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-15035:
---
Attachment: HBASE-15035-v4.patch

new v4 without .rej files.



[jira] [Commented] (HBASE-14822) Renewing leases of scanners doesn't work

2015-12-24 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071251#comment-15071251
 ] 

Lars Hofhansl commented on HBASE-14822:
---

Turns out HBase 1.0.x does not have the exception metrics, so I'll commit the 
addendum without the test change there.

> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-addendum.txt, 14822-0.98-v2.txt, 
> 14822-0.98-v3.txt, 14822-0.98.txt, 14822-master-addendum.txt, 
> 14822-v3-0.98.txt, 14822-v4-0.98.txt, 14822-v4.txt, 14822-v5-0.98.txt, 
> 14822-v5-1.3.txt, 14822-v5.txt, 14822.txt, HBASE-14822_98_nextseq.diff
>
>






[jira] [Commented] (HBASE-14717) enable_table_replication command should only create specified table for a peer cluster

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071273#comment-15071273
 ] 

Hudson commented on HBASE-14717:


FAILURE: Integrated in HBase-1.2 #473 (See 
[https://builds.apache.org/job/HBase-1.2/473/])
HBASE-14717 enable_table_replication command should only create (tedyu: rev 
a7889b5f4875895a2402b119dd1e763f90e1b7e1)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdminWithClusters.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java


> enable_table_replication command should only create specified table for a 
> peer cluster
> --
>
> Key: HBASE-14717
> URL: https://issues.apache.org/jira/browse/HBASE-14717
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.0.2
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14717(1).patch, HBASE-14717(2).patch, 
> HBASE-14717(3).patch, HBASE-14717.patch
>
>
> For a peer only user specified tables should be created but 
> enable_table_replication command is not honouring that.
> eg:
> like peer1 : t1:cf1, t2
> create 't3', 'd'
> enable_table_replication 't3' > should not create t3 in peer1





[jira] [Commented] (HBASE-14940) Make our unsafe based ops more safe

2015-12-24 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071359#comment-15071359
 ] 

Lars Hofhansl commented on HBASE-14940:
---

TestFromClientSide passes. If anything was wrong with this, I assume that would fail.

> Make our unsafe based ops more safe
> ---
>
> Key: HBASE-14940
> URL: https://issues.apache.org/jira/browse/HBASE-14940
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14940.patch, HBASE-14940_addendum_0.98.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch, HBASE-14940_v2.patch
>
>
> Thanks for the nice findings [~ikeda]
> This jira solves 3 issues with Unsafe operations and ByteBufferUtils
> 1. We can do sun unsafe based reads and writes iff unsafe package is 
> available and underlying platform is having unaligned-access capability. But 
> we were missing the second check
> 2. Java NIO is doing a chunk based copy while doing Unsafe copyMemory. The 
> max chunk size is 1 MB. This is done for "A limit is imposed to allow for 
> safepoint polling during a large copy" as mentioned in comments in Bits.java. 
>  We are also going to do same way
> 3. In ByteBufferUtils, when Unsafe is not available and ByteBuffers are off 
> heap, we were doing byte by byte operation (read/copy). We can avoid this and 
> do better way.





[jira] [Commented] (HBASE-15031) Fix merge of MVCC and SequenceID performance regression in branch-1.0

2015-12-24 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071240#comment-15071240
 ] 

stack commented on HBASE-15031:
---

I tried my test against the 1.0 codebase and it hangs, stuck on mvcc. Will file 
a separate issue for this.

> Fix merge of MVCC and SequenceID performance regression in branch-1.0
> -
>
> Key: HBASE-15031
> URL: https://issues.apache.org/jira/browse/HBASE-15031
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Affects Versions: 1.0.3
>Reporter: stack
>Assignee: stack
> Attachments: 14460.v0.branch-1.0.patch, 15031.v2.branch-1.0.patch, 
> 15031.v3.branch-1.0.patch, 15031.v4.branch-1.0.patch, 
> 15031.v5.branch-1.0.patch, 15031.v6.branch-1.0.patch
>
>
> Subtask with fix for branch-1.0.





[jira] [Commented] (HBASE-14940) Make our unsafe based ops more safe

2015-12-24 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071260#comment-15071260
 ] 

Lars Hofhansl commented on HBASE-14940:
---

Should we replace 
{source}
unaligned = (Boolean) m.invoke(null);
{source}
with
{source}
unaligned = (Boolean) m.invoke(null);
{source}

In 0.98?




[jira] [Comment Edited] (HBASE-14940) Make our unsafe based ops more safe

2015-12-24 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071260#comment-15071260
 ] 

Lars Hofhansl edited comment on HBASE-14940 at 12/24/15 9:35 PM:
-

Should we replace 
{source}
unaligned = (boolean) m.invoke(null);
{source}
with
{source}
unaligned = (Boolean) m.invoke(null);
{source}

In 0.98?



was (Author: lhofhansl):
Should we replace 
{source}
unaligned = (Boolean) m.invoke(null);
{source}
with
{source}
unaligned = (Boolean) m.invoke(null);
{source}

In 0.98?




[jira] [Commented] (HBASE-14822) Renewing leases of scanners doesn't work

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071293#comment-15071293
 ] 

Hudson commented on HBASE-14822:


SUCCESS: Integrated in HBase-1.2-IT #366 (See 
[https://builds.apache.org/job/HBase-1.2-IT/366/])
HBASE-14822; addendum - handle callSeq. (larsh: rev 
ae8b3e06121873f958f13d7b95b2212087d6b55c)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestLeaseRenewal.java




[jira] [Commented] (HBASE-14940) Make our unsafe based ops more safe

2015-12-24 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071360#comment-15071360
 ] 

Lars Hofhansl commented on HBASE-14940:
---

And looks right in the debugger.



[jira] [Updated] (HBASE-14940) Make our unsafe based ops more safe

2015-12-24 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-14940:
--
Release Note:   (was: Pushed to 0.98 only.)



[jira] [Resolved] (HBASE-14940) Make our unsafe based ops more safe

2015-12-24 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-14940.
---
  Resolution: Fixed
Release Note: Pushed to 0.98 only.



[jira] [Commented] (HBASE-14940) Make our unsafe based ops more safe

2015-12-24 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071259#comment-15071259
 ] 

Lars Hofhansl commented on HBASE-14940:
---

Must have to do with compileSource being 1.6 in 0.98, and 1.7+ everywhere else.
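
A self-contained version of the check being discussed. The reflective call is plain JDK reflection; the try/catch fallback is this sketch's own conservative choice, and java.nio.Bits is a JDK-internal class that newer JDKs may refuse to open. The cast is also where the compile-source difference bites: a direct `(boolean)` cast of the `invoke` result only compiles at source level 1.7+, which would explain why 0.98 (source 1.6) needs the boxed `(Boolean)` cast.

```java
import java.lang.reflect.Method;

class UnalignedCheck {
    // Asks the JDK-internal java.nio.Bits whether the platform tolerates
    // unaligned memory access; falls back to false if the class or method
    // is unavailable or inaccessible (e.g. under the JDK 9+ module system).
    static boolean unaligned() {
        try {
            Class<?> bits = Class.forName("java.nio.Bits");
            Method m = bits.getDeclaredMethod("unaligned");
            m.setAccessible(true);
            return (Boolean) m.invoke(null); // boxed cast compiles on 1.6 too
        } catch (Throwable t) {
            return false; // conservative default when we cannot tell
        }
    }
}
```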




[jira] [Commented] (HBASE-14822) Renewing leases of scanners doesn't work

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071295#comment-15071295
 ] 

Hudson commented on HBASE-14822:


SUCCESS: Integrated in HBase-1.1-JDK8 #1716 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1716/])
HBASE-14822; addendum - handle callSeq. (larsh: rev 
8ef49b3147fc002a6aebc0d6d904f00743d02531)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestLeaseRenewal.java




[jira] [Commented] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-24 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071303#comment-15071303
 ] 

Matteo Bertozzi commented on HBASE-15035:
-

+1 on v4



[jira] [Commented] (HBASE-14940) Make our unsafe based ops more safe

2015-12-24 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071358#comment-15071358
 ] 

Lars Hofhansl commented on HBASE-14940:
---

+1 on addendum. I'm going to commit that to 0.98.
(It works for me compiling with JDK 1.8.)




[jira] [Commented] (HBASE-14717) enable_table_replication command should only create specified table for a peer cluster

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071279#comment-15071279
 ] 

Hudson commented on HBASE-14717:


SUCCESS: Integrated in HBase-1.3-IT #405 (See 
[https://builds.apache.org/job/HBase-1.3-IT/405/])
HBASE-14717 enable_table_replication command should only create (tedyu: rev 
afaa7f843ab02600062f86ae5aca2bca50928e00)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdminWithClusters.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java




[jira] [Commented] (HBASE-14822) Renewing leases of scanners doesn't work

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071280#comment-15071280
 ] 

Hudson commented on HBASE-14822:


SUCCESS: Integrated in HBase-1.3-IT #405 (See 
[https://builds.apache.org/job/HBase-1.3-IT/405/])
HBASE-14822; addendum - handle callSeq. (larsh: rev 
31f8d71ffe2feec14fbf74c277439740216f52b4)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestLeaseRenewal.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java




[jira] [Commented] (HBASE-14940) Make our unsafe based ops more safe

2015-12-24 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071309#comment-15071309
 ] 

Anoop Sam John commented on HBASE-14940:


Yes, only at load time of UnsafeAccess do we make the unsafe ref (if sun.Unsafe 
is available) and then check for unaligned capability. This is the same way 
java nio does it.



[jira] [Updated] (HBASE-15031) Fix merge of MVCC and SequenceID performance regression in branch-1.0

2015-12-24 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15031:
--
Attachment: 15031.v6.branch-1.0.patch

Retry patch. Last run failed like this:

Printing hanging tests
Hanging test : org.apache.hadoop.hbase.rest.TestDeleteRow
Printing Failing tests
Failing test : org.apache.hadoop.hbase.client.TestSnapshotCloneIndependence

Running TestDeleteRow locally it seems fine.

The other fails on occasion.

I messed with test-patch.sh. Lets see how next run does.



[jira] [Commented] (HBASE-14684) Try to remove all MiniMapReduceCluster in unit tests

2015-12-24 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071317#comment-15071317
 ] 

stack commented on HBASE-14684:
---

[~chenheng] Sweet.

> Try to remove all MiniMapReduceCluster in unit tests
> 
>
> Key: HBASE-14684
> URL: https://issues.apache.org/jira/browse/HBASE-14684
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Heng Chen
>Assignee: Heng Chen
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14684.branch-1.txt, 14684.branch-1.txt, 
> 14684.branch-1.txt, HBASE-14684-branch-1.2.patch, 
> HBASE-14684-branch-1.2_v1.patch, HBASE-14684-branch-1.2_v1.patch, 
> HBASE-14684-branch-1.patch, HBASE-14684-branch-1.patch, 
> HBASE-14684-branch-1.patch, HBASE-14684-branch-1_v1.patch, 
> HBASE-14684-branch-1_v2.patch, HBASE-14684-branch-1_v3.patch, 
> HBASE-14684.patch, HBASE-14684_v1.patch
>
>
> As discussion in dev list,  we will try to do MR job without 
> MiniMapReduceCluster.
> Testcases will run faster and more reliable.





[jira] [Commented] (HBASE-14938) Limit the number of znodes for ZK in bulk loaded hfile replication

2015-12-24 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071321#comment-15071321
 ] 

Anoop Sam John commented on HBASE-14938:


bq.int totalNoOfRequests = totalNoOfFiles / maxZnodesPerRequest;
Can we avoid having 2 loop code.. When totalNoOfFiles 

> Limit the number of znodes for ZK in bulk loaded hfile replication
> --
>
> Key: HBASE-14938
> URL: https://issues.apache.org/jira/browse/HBASE-14938
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14938(1).patch, HBASE-14938-v1.patch, 
> HBASE-14938.patch
>
>
> In ZK the maximum allowable size of the data array is 1 MB. Until we have 
> fixed HBASE-10295 we need to handle this.
> Approach to this problem will be discussed in the comments section.
> Note: We have done internal testing with more than 3k nodes in ZK yet to be 
> replicated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15040) New TestRegionIncrement test added by HBASE-15031 hangs default increment path if 100 concurrent threads (passes if 10)

2015-12-24 Thread stack (JIRA)
stack created HBASE-15040:
-

 Summary: New TestRegionIncrement test added by HBASE-15031 hangs 
default increment path if 100 concurrent threads (passes if 10)
 Key: HBASE-15040
 URL: https://issues.apache.org/jira/browse/HBASE-15040
 Project: HBase
  Issue Type: Bug
  Components: Increment
Reporter: stack


Check out the current Increment path. Looks like it can hang waiting on mvcc. That's 
what the TestRegionIncrement test added by HBASE-15031 seems to indicate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-24 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-15035:
---
Attachment: HBASE-15035-v4.patch

Going back to the simpler version, closer to v2, which hardcodes setting 
includeTags to true.

[~ram_krish], the HFileReaderImpl's HFileContext doesn't set includeTags to 
true, which results in the test failures.

> bulkloading hfiles with tags that require splits do not preserve tags
> -
>
> Key: HBASE-15035
> URL: https://issues.apache.org/jira/browse/HBASE-15035
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Attachments: HBASE-15035-v2.patch, HBASE-15035-v3.patch, 
> HBASE-15035-v4.patch, HBASE-15035.patch
>
>
> When an hfile is created with cell tags present and it is bulk loaded into 
> hbase the tags will be present when loaded into a single region.  If the bulk 
> load hfile spans multiple regions, bulk load automatically splits the 
> original hfile into a set of split hfiles corresponding to each of the 
> regions that the original covers.  
> Since 0.98, tags are not copied into the newly created split hfiles (the 
> default for "includeTags" of the HFileContextBuilder [1] is uninitialized, 
> which defaults to false). This means acls, ttls, mob pointers, and other 
> tag-stored values will not be bulk loaded in.
> [1]  
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java#L40
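A minimal self-contained model of the pitfall described above (these are stand-in classes, not the real HBase HFileContext/HFileContextBuilder): an uninitialized boolean builder field defaults to false, so a split writer that builds a fresh context without carrying the flag forward silently drops tags.

```java
// Stand-in for HFileContext: just the flag we care about.
final class FileContext {
    final boolean includesTags;
    FileContext(boolean includesTags) { this.includesTags = includesTags; }
}

// Stand-in for HFileContextBuilder: the uninitialized boolean defaults to false.
final class FileContextBuilder {
    private boolean includesTags; // never set -> false
    FileContextBuilder withIncludesTags(boolean b) { includesTags = b; return this; }
    FileContext build() { return new FileContext(includesTags); }
}

final class SplitCopier {
    // Buggy path: a fresh builder is used for the split half-file, so the
    // tags flag is silently lost regardless of the source file's context.
    static FileContext buggySplitContext(FileContext source) {
        return new FileContextBuilder().build();
    }

    // Fixed path in the spirit of the patch: set includeTags explicitly
    // (the v4 patch hardcodes it to true).
    static FileContext fixedSplitContext(FileContext source) {
        return new FileContextBuilder().withIncludesTags(true).build();
    }
}
```

The buggy path returns a context with includesTags false even when the source had tags, which is exactly the bulk-load split behavior this issue fixes.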



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14822) Renewing leases of scanners doesn't work

2015-12-24 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071261#comment-15071261
 ] 

Lars Hofhansl commented on HBASE-14822:
---

Pushed addendum to all branches, I hope this is the last of this :)
(Sorry for the churn on this one)

> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-addendum.txt, 14822-0.98-v2.txt, 
> 14822-0.98-v3.txt, 14822-0.98.txt, 14822-master-addendum.txt, 
> 14822-v3-0.98.txt, 14822-v4-0.98.txt, 14822-v4.txt, 14822-v5-0.98.txt, 
> 14822-v5-1.3.txt, 14822-v5.txt, 14822.txt, HBASE-14822_98_nextseq.diff
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14717) enable_table_replication command should only create specified table for a peer cluster

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071290#comment-15071290
 ] 

Hudson commented on HBASE-14717:


FAILURE: Integrated in HBase-Trunk_matrix #586 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/586/])
HBASE-14717 enable_table_replication command should only create (tedyu: rev 
a1a19d94059dc3750b477ca03f89a77d53224655)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdminWithClusters.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java


> enable_table_replication command should only create specified table for a 
> peer cluster
> --
>
> Key: HBASE-14717
> URL: https://issues.apache.org/jira/browse/HBASE-14717
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.0.2
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14717(1).patch, HBASE-14717(2).patch, 
> HBASE-14717(3).patch, HBASE-14717.patch
>
>
> For a peer, only the user-specified tables should be created, but the 
> enable_table_replication command is not honouring that.
> eg:
> like peer1 : t1:cf1, t2
> create 't3', 'd'
> enable_table_replication 't3' > should not create t3 in peer1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14822) Renewing leases of scanners doesn't work

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071291#comment-15071291
 ] 

Hudson commented on HBASE-14822:


FAILURE: Integrated in HBase-Trunk_matrix #586 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/586/])
HBASE-14822; addendum - handle callSeq. (larsh: rev 
dfada43e90a0767518501f6878bf9896bed912ce)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestLeaseRenewal.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java


> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-addendum.txt, 14822-0.98-v2.txt, 
> 14822-0.98-v3.txt, 14822-0.98.txt, 14822-master-addendum.txt, 
> 14822-v3-0.98.txt, 14822-v4-0.98.txt, 14822-v4.txt, 14822-v5-0.98.txt, 
> 14822-v5-1.3.txt, 14822-v5.txt, 14822.txt, HBASE-14822_98_nextseq.diff
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13477) Create metrics on failed requests

2015-12-24 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071249#comment-15071249
 ] 

Lars Hofhansl commented on HBASE-13477:
---

So we decided not to have this in 1.0.x? (just ran into this in a test on HBASE-14822)

> Create metrics on failed requests
> -
>
> Key: HBASE-13477
> URL: https://issues.apache.org/jira/browse/HBASE-13477
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 2.0.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.1.0, 0.98.13
>
> Attachments: HBASE-13477-0.98.patch, HBASE-13477-v1.patch, 
> HBASE-13477-v2.patch, HBASE-13477-v3.patch, HBASE-13477.patch
>
>
> Add a metric on how many requests failed/errored out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14940) Make our unsafe based ops more safe

2015-12-24 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071256#comment-15071256
 ] 

Lars Hofhansl commented on HBASE-14940:
---

Looks like this breaks compilation in 0.98 now:
{code}
[ERROR] 
/home/lars/dev/hbase-0.98/hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAccess.java:[67,38]
 error: incompatible types: Object cannot be converted to boolean
{code}

> Make our unsafe based ops more safe
> ---
>
> Key: HBASE-14940
> URL: https://issues.apache.org/jira/browse/HBASE-14940
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14940.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_v2.patch
>
>
> Thanks for the nice findings, [~ikeda].
> This jira solves 3 issues with Unsafe operations and ByteBufferUtils:
> 1. We can do sun Unsafe based reads and writes iff the unsafe package is 
> available and the underlying platform has unaligned-access capability, but 
> we were missing the second check.
> 2. Java NIO does a chunk-based copy when doing Unsafe copyMemory. The max 
> chunk size is 1 MB. This is done because "A limit is imposed to allow for 
> safepoint polling during a large copy", as mentioned in the comments in 
> Bits.java. We are going to do it the same way.
> 3. In ByteBufferUtils, when Unsafe is not available and ByteBuffers are off 
> heap, we were doing byte-by-byte operations (read/copy). We can avoid this 
> and do it a better way.
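The 1 MB chunking in point 2 can be sketched as follows, using `System.arraycopy` as a stand-in for `Unsafe.copyMemory` (the constant name mirrors the one in java.nio.Bits; this is an illustration, not the patch itself):

```java
final class ChunkedCopy {
    // Max bytes copied per call, so the JVM can reach a safepoint between
    // chunks during a large copy (same rationale as in java.nio.Bits).
    static final int UNSAFE_COPY_THRESHOLD = 1024 * 1024; // 1 MB

    static void copy(byte[] src, int srcOff, byte[] dst, int dstOff, int len) {
        while (len > 0) {
            int size = Math.min(len, UNSAFE_COPY_THRESHOLD);
            // Stand-in for Unsafe.copyMemory in the real implementation.
            System.arraycopy(src, srcOff, dst, dstOff, size);
            srcOff += size;
            dstOff += size;
            len -= size;
        }
    }
}
```

Copying 3 MB therefore takes three calls instead of one unbounded copy.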



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14684) Try to remove all MiniMapReduceCluster in unit tests

2015-12-24 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen updated HBASE-14684:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Try to remove all MiniMapReduceCluster in unit tests
> 
>
> Key: HBASE-14684
> URL: https://issues.apache.org/jira/browse/HBASE-14684
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Heng Chen
>Assignee: Heng Chen
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14684.branch-1.txt, 14684.branch-1.txt, 
> 14684.branch-1.txt, HBASE-14684-branch-1.2.patch, 
> HBASE-14684-branch-1.2_v1.patch, HBASE-14684-branch-1.2_v1.patch, 
> HBASE-14684-branch-1.patch, HBASE-14684-branch-1.patch, 
> HBASE-14684-branch-1.patch, HBASE-14684-branch-1_v1.patch, 
> HBASE-14684-branch-1_v2.patch, HBASE-14684-branch-1_v3.patch, 
> HBASE-14684.patch, HBASE-14684_v1.patch
>
>
> As discussed on the dev list, we will try to run MR jobs without 
> MiniMapReduceCluster.
> Testcases will run faster and more reliably.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14684) Try to remove all MiniMapReduceCluster in unit tests

2015-12-24 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071271#comment-15071271
 ] 

Heng Chen commented on HBASE-14684:
---

Looks good.  Job's done.  Let me resolve this issue.

> Try to remove all MiniMapReduceCluster in unit tests
> 
>
> Key: HBASE-14684
> URL: https://issues.apache.org/jira/browse/HBASE-14684
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Heng Chen
>Assignee: Heng Chen
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14684.branch-1.txt, 14684.branch-1.txt, 
> 14684.branch-1.txt, HBASE-14684-branch-1.2.patch, 
> HBASE-14684-branch-1.2_v1.patch, HBASE-14684-branch-1.2_v1.patch, 
> HBASE-14684-branch-1.patch, HBASE-14684-branch-1.patch, 
> HBASE-14684-branch-1.patch, HBASE-14684-branch-1_v1.patch, 
> HBASE-14684-branch-1_v2.patch, HBASE-14684-branch-1_v3.patch, 
> HBASE-14684.patch, HBASE-14684_v1.patch
>
>
> As discussed on the dev list, we will try to run MR jobs without 
> MiniMapReduceCluster.
> Testcases will run faster and more reliably.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14938) Limit the number of znodes for ZK in bulk loaded hfile replication

2015-12-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071267#comment-15071267
 ] 

Hadoop QA commented on HBASE-14938:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12779471/HBASE-14938-v1.patch
  against master branch at commit e15c48ed2cf025dd3b0790c55cdc4239cc0fc161.
  ATTACHMENT ID: 12779471

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17022//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17022//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17022//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17022//console

This message is automatically generated.

> Limit the number of znodes for ZK in bulk loaded hfile replication
> --
>
> Key: HBASE-14938
> URL: https://issues.apache.org/jira/browse/HBASE-14938
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14938(1).patch, HBASE-14938-v1.patch, 
> HBASE-14938.patch
>
>
> In ZK the maximum allowable size of the data array is 1 MB. Until we have 
> fixed HBASE-10295 we need to handle this.
> Approach to this problem will be discussed in the comments section.
> Note: We have done internal testing with more than 3k nodes in ZK yet to be 
> replicated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14822) Renewing leases of scanners doesn't work

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071301#comment-15071301
 ] 

Hudson commented on HBASE-14822:


SUCCESS: Integrated in HBase-1.1-JDK7 #1629 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1629/])
HBASE-14822; addendum - handle callSeq. (larsh: rev 
8ef49b3147fc002a6aebc0d6d904f00743d02531)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestLeaseRenewal.java


> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-addendum.txt, 14822-0.98-v2.txt, 
> 14822-0.98-v3.txt, 14822-0.98.txt, 14822-master-addendum.txt, 
> 14822-v3-0.98.txt, 14822-v4-0.98.txt, 14822-v4.txt, 14822-v5-0.98.txt, 
> 14822-v5-1.3.txt, 14822-v5.txt, 14822.txt, HBASE-14822_98_nextseq.diff
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14940) Make our unsafe based ops more safe

2015-12-24 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-14940:
---
Attachment: HBASE-14940_addendum_0.98.patch

As per Lars suggestion.

> Make our unsafe based ops more safe
> ---
>
> Key: HBASE-14940
> URL: https://issues.apache.org/jira/browse/HBASE-14940
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14940.patch, HBASE-14940_addendum_0.98.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch, HBASE-14940_v2.patch
>
>
> Thanks for the nice findings, [~ikeda].
> This jira solves 3 issues with Unsafe operations and ByteBufferUtils:
> 1. We can do sun Unsafe based reads and writes iff the unsafe package is 
> available and the underlying platform has unaligned-access capability, but 
> we were missing the second check.
> 2. Java NIO does a chunk-based copy when doing Unsafe copyMemory. The max 
> chunk size is 1 MB. This is done because "A limit is imposed to allow for 
> safepoint polling during a large copy", as mentioned in the comments in 
> Bits.java. We are going to do it the same way.
> 3. In ByteBufferUtils, when Unsafe is not available and ByteBuffers are off 
> heap, we were doing byte-by-byte operations (read/copy). We can avoid this 
> and do it a better way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HBASE-14940) Make our unsafe based ops more safe

2015-12-24 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John reopened HBASE-14940:


Sorry for the trouble.. Reopening the issue for the 0.98 addendum.

> Make our unsafe based ops more safe
> ---
>
> Key: HBASE-14940
> URL: https://issues.apache.org/jira/browse/HBASE-14940
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14940.patch, HBASE-14940_addendum_0.98.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch, HBASE-14940_v2.patch
>
>
> Thanks for the nice findings, [~ikeda].
> This jira solves 3 issues with Unsafe operations and ByteBufferUtils:
> 1. We can do sun Unsafe based reads and writes iff the unsafe package is 
> available and the underlying platform has unaligned-access capability, but 
> we were missing the second check.
> 2. Java NIO does a chunk-based copy when doing Unsafe copyMemory. The max 
> chunk size is 1 MB. This is done because "A limit is imposed to allow for 
> safepoint polling during a large copy", as mentioned in the comments in 
> Bits.java. We are going to do it the same way.
> 3. In ByteBufferUtils, when Unsafe is not available and ByteBuffers are off 
> heap, we were doing byte-by-byte operations (read/copy). We can avoid this 
> and do it a better way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14822) Renewing leases of scanners doesn't work

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071323#comment-15071323
 ] 

Hudson commented on HBASE-14822:


FAILURE: Integrated in HBase-1.2 #474 (See 
[https://builds.apache.org/job/HBase-1.2/474/])
HBASE-14822; addendum - handle callSeq. (larsh: rev 
ae8b3e06121873f958f13d7b95b2212087d6b55c)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestLeaseRenewal.java


> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-addendum.txt, 14822-0.98-v2.txt, 
> 14822-0.98-v3.txt, 14822-0.98.txt, 14822-master-addendum.txt, 
> 14822-v3-0.98.txt, 14822-v4-0.98.txt, 14822-v4.txt, 14822-v5-0.98.txt, 
> 14822-v5-1.3.txt, 14822-v5.txt, 14822.txt, HBASE-14822_98_nextseq.diff
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15039) HMaster and RegionServers should try to refresh token keys from zk when face InvalidToken.

2015-12-24 Thread Yong Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yong Zhang updated HBASE-15039:
---
Attachment: HBASE-15039.001.patch

First patch, please help review.

> HMaster and RegionServers should try to refresh token keys from zk when face 
> InvalidToken.
> --
>
> Key: HBASE-15039
> URL: https://issues.apache.org/jira/browse/HBASE-15039
> Project: HBase
>  Issue Type: Bug
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Attachments: HBASE-15039.001.patch
>
>
> One of the HMaster and RegionServers is the token key master and the others 
> are key slaves; the key master writes keys to ZooKeeper and the key slaves 
> read them. If there is any disconnection between a key slave and ZooKeeper, 
> that HMaster or RegionServer may miss new tokens, and clients that use token 
> authentication will get an InvalidToken exception.
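The proposed recovery path can be sketched with a small self-contained model — on an InvalidToken-style failure, re-read the key set from ZooKeeper once and retry. `KeySlave`, `refreshKeysFromZk`, and `validate` are illustrative names, not the actual classes in the patch:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model: a key slave caches token keys read from ZK; when a
// token arrives whose key id is unknown (which would otherwise raise
// InvalidToken), it refreshes its cache from ZK once before giving up.
final class KeySlave {
    private final Map<Integer, String> cachedKeys = new HashMap<>();
    private final Map<Integer, String> zk; // stands in for the ZK key store

    KeySlave(Map<Integer, String> zk) {
        this.zk = zk;
    }

    private void refreshKeysFromZk() {
        cachedKeys.putAll(zk);
    }

    // Returns the secret for the token's key id, refreshing from ZK on a miss.
    String validate(int keyId) {
        String secret = cachedKeys.get(keyId);
        if (secret == null) {
            refreshKeysFromZk();        // proposed fix: retry after a refresh
            secret = cachedKeys.get(keyId);
        }
        if (secret == null) {
            throw new IllegalStateException("InvalidToken: unknown key " + keyId);
        }
        return secret;
    }
}
```

A slave that missed a key while disconnected would previously fail the client outright; with the refresh-and-retry step it recovers as long as ZooKeeper is reachable again.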



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15039) HMaster and RegionServers should try to refresh token keys from zk when face InvalidToken.

2015-12-24 Thread Yong Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yong Zhang updated HBASE-15039:
---
Status: Patch Available  (was: Open)

> HMaster and RegionServers should try to refresh token keys from zk when face 
> InvalidToken.
> --
>
> Key: HBASE-15039
> URL: https://issues.apache.org/jira/browse/HBASE-15039
> Project: HBase
>  Issue Type: Bug
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Attachments: HBASE-15039.001.patch
>
>
> One of the HMaster and RegionServers is the token key master and the others 
> are key slaves; the key master writes keys to ZooKeeper and the key slaves 
> read them. If there is any disconnection between a key slave and ZooKeeper, 
> that HMaster or RegionServer may miss new tokens, and clients that use token 
> authentication will get an InvalidToken exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071298#comment-15071298
 ] 

Hadoop QA commented on HBASE-15035:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12779480/HBASE-15035-v4.patch
  against master branch at commit a1a19d94059dc3750b477ca03f89a77d53224655.
  ATTACHMENT ID: 12779480

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17024//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17024//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17024//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17024//console

This message is automatically generated.

> bulkloading hfiles with tags that require splits do not preserve tags
> -
>
> Key: HBASE-15035
> URL: https://issues.apache.org/jira/browse/HBASE-15035
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Attachments: HBASE-15035-v2.patch, HBASE-15035-v3.patch, 
> HBASE-15035-v4.patch, HBASE-15035.patch
>
>
> When an hfile is created with cell tags present and it is bulk loaded into 
> hbase the tags will be present when loaded into a single region.  If the bulk 
> load hfile spans multiple regions, bulk load automatically splits the 
> original hfile into a set of split hfiles corresponding to each of the 
> regions that the original covers.  
> Since 0.98, tags are not copied into the newly created split hfiles (the 
> default for "includeTags" of the HFileContextBuilder [1] is uninitialized, 
> which defaults to false). This means acls, ttls, mob pointers, and other 
> tag-stored values will not be bulk loaded in.
> [1]  
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java#L40



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14822) Renewing leases of scanners doesn't work

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071327#comment-15071327
 ] 

Hudson commented on HBASE-14822:


SUCCESS: Integrated in HBase-1.3 #469 (See 
[https://builds.apache.org/job/HBase-1.3/469/])
HBASE-14822; addendum - handle callSeq. (larsh: rev 
31f8d71ffe2feec14fbf74c277439740216f52b4)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestLeaseRenewal.java


> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-addendum.txt, 14822-0.98-v2.txt, 
> 14822-0.98-v3.txt, 14822-0.98.txt, 14822-master-addendum.txt, 
> 14822-v3-0.98.txt, 14822-v4-0.98.txt, 14822-v4.txt, 14822-v5-0.98.txt, 
> 14822-v5-1.3.txt, 14822-v5.txt, 14822.txt, HBASE-14822_98_nextseq.diff
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15031) Fix merge of MVCC and SequenceID performance regression in branch-1.0

2015-12-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071343#comment-15071343
 ] 

Hadoop QA commented on HBASE-15031:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12779486/15031.v6.branch-1.0.patch
  against branch-1.0 branch at commit dfada43e90a0767518501f6878bf9896bed912ce.
  ATTACHMENT ID: 12779486

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 29 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17025//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17025//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17025//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17025//console

This message is automatically generated.

> Fix merge of MVCC and SequenceID performance regression in branch-1.0
> -
>
> Key: HBASE-15031
> URL: https://issues.apache.org/jira/browse/HBASE-15031
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Affects Versions: 1.0.3
>Reporter: stack
>Assignee: stack
> Attachments: 14460.v0.branch-1.0.patch, 15031.v2.branch-1.0.patch, 
> 15031.v3.branch-1.0.patch, 15031.v4.branch-1.0.patch, 
> 15031.v5.branch-1.0.patch, 15031.v6.branch-1.0.patch, 
> 15031.v6.branch-1.0.patch
>
>
> Subtask with fix for branch-1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14717) enable_table_replication command should only create specified table for a peer cluster

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071275#comment-15071275
 ] 

Hudson commented on HBASE-14717:


SUCCESS: Integrated in HBase-1.3 #468 (See 
[https://builds.apache.org/job/HBase-1.3/468/])
HBASE-14717 enable_table_replication command should only create (tedyu: rev 
afaa7f843ab02600062f86ae5aca2bca50928e00)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdminWithClusters.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java


> enable_table_replication command should only create specified table for a 
> peer cluster
> --
>
> Key: HBASE-14717
> URL: https://issues.apache.org/jira/browse/HBASE-14717
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.0.2
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14717(1).patch, HBASE-14717(2).patch, 
> HBASE-14717(3).patch, HBASE-14717.patch
>
>
> For a peer, only user-specified tables should be created, but the 
> enable_table_replication command is not honouring that.
> eg:
> like peer1 : t1:cf1, t2
> create 't3', 'd'
> enable_table_replication 't3' > should not create t3 in peer1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15031) Fix merge of MVCC and SequenceID performance regression in branch-1.0

2015-12-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15071274#comment-15071274
 ] 

Hadoop QA commented on HBASE-15031:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
http://issues.apache.org/jira/secure/attachment/12779475/15031.v6.branch-1.0.patch
  against branch-1.0 branch at commit a1a19d94059dc3750b477ca03f89a77d53224655.
  ATTACHMENT ID: 12779475

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 29 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:red}-1 core tests{color}.  The patch failed these unit tests:

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17023//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17023//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17023//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17023//console

This message is automatically generated.

> Fix merge of MVCC and SequenceID performance regression in branch-1.0
> -
>
> Key: HBASE-15031
> URL: https://issues.apache.org/jira/browse/HBASE-15031
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Affects Versions: 1.0.3
>Reporter: stack
>Assignee: stack
> Attachments: 14460.v0.branch-1.0.patch, 15031.v2.branch-1.0.patch, 
> 15031.v3.branch-1.0.patch, 15031.v4.branch-1.0.patch, 
> 15031.v5.branch-1.0.patch, 15031.v6.branch-1.0.patch
>
>
> Subtask with fix for branch-1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14940) Make our unsafe based ops more safe

2015-12-24 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15071311#comment-15071311
 ] 

stack commented on HBASE-14940:
---

Sound good [~lhofhansl] if it works sir

> Make our unsafe based ops more safe
> ---
>
> Key: HBASE-14940
> URL: https://issues.apache.org/jira/browse/HBASE-14940
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14940.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_v2.patch
>
>
> Thanks for the nice findings [~ikeda]
> This jira solves 3 issues with Unsafe operations and ByteBufferUtils
> 1. We can do sun unsafe based reads and writes iff the unsafe package is 
> available and the underlying platform has unaligned-access capability, but 
> we were missing the second check.
> 2. Java NIO does a chunk based copy when doing Unsafe copyMemory, with a 
> max chunk size of 1 MB; as the comments in Bits.java explain, "A limit is 
> imposed to allow for safepoint polling during a large copy". We now do the 
> same.
> 3. In ByteBufferUtils, when Unsafe is not available and ByteBuffers are off 
> heap, we were doing byte-by-byte operations (read/copy). We can avoid this 
> and do better.
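The chunk-limited copy described in point 2 can be sketched as follows. This is a hypothetical stand-in, not the actual ByteBufferUtils code: it uses System.arraycopy in place of sun.misc.Unsafe.copyMemory so the sketch stays runnable, but it shows the same 1 MB chunking that java.nio.Bits uses to allow safepoint polling mid-copy.

```java
// Sketch of a chunk-limited memory copy, mirroring java.nio.Bits:
// large copies are broken into bounded chunks so the JVM can reach a
// safepoint between chunks instead of stalling for the whole copy.
public class ChunkedCopy {
    // Same 1 MB limit that Bits.java imposes on Unsafe copyMemory.
    static final long UNSAFE_COPY_THRESHOLD = 1024L * 1024L;

    static void copy(byte[] src, int srcPos, byte[] dst, int dstPos, long length) {
        while (length > 0) {
            long size = Math.min(length, UNSAFE_COPY_THRESHOLD);
            // The real code would call Unsafe.copyMemory here; arraycopy
            // keeps this sketch self-contained.
            System.arraycopy(src, srcPos, dst, dstPos, (int) size);
            length -= size;
            srcPos += size;
            dstPos += size;
        }
    }

    public static void main(String[] args) {
        byte[] src = new byte[3 * 1024 * 1024 + 17]; // forces several chunks
        for (int i = 0; i < src.length; i++) src[i] = (byte) i;
        byte[] dst = new byte[src.length];
        copy(src, 0, dst, 0, src.length);
        if (!java.util.Arrays.equals(src, dst)) throw new AssertionError("copy mismatch");
        System.out.println("ok");
    }
}
```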



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-24 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15071310#comment-15071310
 ] 

Anoop Sam John commented on HBASE-15035:


+1

> bulkloading hfiles with tags that require splits do not preserve tags
> -
>
> Key: HBASE-15035
> URL: https://issues.apache.org/jira/browse/HBASE-15035
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Attachments: HBASE-15035-v2.patch, HBASE-15035-v3.patch, 
> HBASE-15035-v4.patch, HBASE-15035.patch
>
>
> When an hfile is created with cell tags present and it is bulk loaded into 
> hbase the tags will be present when loaded into a single region.  If the bulk 
> load hfile spans multiple regions, bulk load automatically splits the 
> original hfile into a set of split hfiles corresponding to each of the 
> regions that the original covers.  
> Since 0.98, tags are not copied into the newly created split hfiles (the 
> "includeTags" field of the HFileContextBuilder [1] is left uninitialized, 
> which defaults to false). This means ACLs, TTLs, MOB pointers and other 
> tag-stored values will not be bulk loaded.
> [1]  
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java#L40
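The failure mode described above, a builder boolean that silently defaults to false unless the caller propagates it, can be illustrated with a minimal stand-in. The classes below are hypothetical simplifications of HFileContext/HFileContextBuilder, not the real hbase-common code:

```java
// Minimal stand-in showing why an uninitialized boolean builder field
// silently drops tags: unless the split path copies includesTags from
// the source file's context, the split writer is built with tags off.
public class BuilderDefaultPitfall {
    static class FileContext {
        final boolean includesTags;
        FileContext(boolean includesTags) { this.includesTags = includesTags; }
    }
    static class FileContextBuilder {
        private boolean includesTags = false; // uninitialized -> false: the bug's root cause
        FileContextBuilder withIncludesTags(boolean b) { this.includesTags = b; return this; }
        FileContext build() { return new FileContext(includesTags); }
    }

    public static void main(String[] args) {
        FileContext original = new FileContext(true); // source hfile carries tags

        // Buggy split path: builds a fresh context, never propagates the flag.
        FileContext buggy = new FileContextBuilder().build();
        // Fixed split path: copies the flag from the file being split.
        FileContext fixed = new FileContextBuilder()
            .withIncludesTags(original.includesTags).build();

        System.out.println("buggy=" + buggy.includesTags + " fixed=" + fixed.includesTags);
    }
}
```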



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15032) hbase shell scan filter string assumes UTF-8 encoding

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15071186#comment-15071186
 ] 

Hudson commented on HBASE-15032:


SUCCESS: Integrated in HBase-1.3-IT #404 (See 
[https://builds.apache.org/job/HBase-1.3-IT/404/])
HBASE-15032 hbase shell scan filter string assumes UTF-8 encoding (tedyu: rev 
a6eea24f711106f1f162453df54aebf9ebb6c6dc)
* hbase-shell/src/test/ruby/hbase/table_test.rb
* hbase-shell/src/main/ruby/hbase/table.rb


> hbase shell scan filter string assumes UTF-8 encoding
> -
>
> Key: HBASE-15032
> URL: https://issues.apache.org/jira/browse/HBASE-15032
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15032-v001.patch, HBASE-15032-v002.patch, 
> HBASE-15032-v002.patch, HBASE-15032-v003.patch
>
>
> Currently the hbase shell scan filter string is assumed to be UTF-8 encoded, 
> which makes the following scan not work.
> hbase(main):011:0> scan 't1'
> ROW                  COLUMN+CELL
>  r4                  column=cf1:q1, timestamp=1450812398741, value=\x82
> hbase(main):003:0> scan 't1', {FILTER => "SingleColumnValueFilter ('cf1', 
> 'q1', >=, 'binary:\x80', true, true)"}
> ROW                  COLUMN+CELL
> 0 row(s) in 0.0130 seconds
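The encoding problem can be reproduced outside the shell. This plain-JDK sketch (not the shell's actual JRuby code path) shows that decoding the byte 0x80 as UTF-8 mangles it into the U+FFFD replacement character, while a single-byte charset round-trips it intact:

```java
import java.nio.charset.StandardCharsets;

public class FilterEncoding {
    public static void main(String[] args) {
        byte[] binary = { (byte) 0x80 }; // the 'binary:\x80' comparator value

        // 0x80 alone is not valid UTF-8, so decoding substitutes U+FFFD;
        // the filter then compares against a 3-byte sequence, not 0x80.
        byte[] viaUtf8 = new String(binary, StandardCharsets.UTF_8)
            .getBytes(StandardCharsets.UTF_8);
        // ISO-8859-1 maps bytes 1:1 onto code points, so the value survives.
        byte[] viaLatin1 = new String(binary, StandardCharsets.ISO_8859_1)
            .getBytes(StandardCharsets.ISO_8859_1);

        System.out.println("utf8 roundtrip length=" + viaUtf8.length
            + " latin1 roundtrip byte=" + (viaLatin1[0] & 0xFF));
    }
}
```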



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14938) Limit the number of znodes for ZK in bulk loaded hfile replication

2015-12-24 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15071187#comment-15071187
 ] 

Ashish Singhi commented on HBASE-14938:
---

Thanks for the review, Ted.
v1 addresses your comment.
Please review.

> Limit the number of znodes for ZK in bulk loaded hfile replication
> --
>
> Key: HBASE-14938
> URL: https://issues.apache.org/jira/browse/HBASE-14938
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14938(1).patch, HBASE-14938-v1.patch, 
> HBASE-14938.patch
>
>
> In ZK the maximum allowable size of the data array is 1 MB. Until we have 
> fixed HBASE-10295 we need to handle this.
> Approach to this problem will be discussed in the comments section.
> Note: We have done internal testing with more than 3k nodes in ZK yet to be 
> replicated.
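One way to stay under ZooKeeper's 1 MB data-size limit is to split a large payload across several znodes. This is only a hypothetical sketch of that idea (the approach actually taken is discussed in the JIRA comments), with the chunk size held a little under the default jute.maxbuffer limit:

```java
import java.util.ArrayList;
import java.util.List;

public class ZnodeChunker {
    // Stay safely under ZooKeeper's default 1 MB jute.maxbuffer limit.
    static final int MAX_ZNODE_BYTES = 1024 * 1024 - 1024;

    static List<byte[]> split(byte[] payload) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < payload.length; off += MAX_ZNODE_BYTES) {
            int len = Math.min(MAX_ZNODE_BYTES, payload.length - off);
            byte[] chunk = new byte[len];
            System.arraycopy(payload, off, chunk, 0, len);
            chunks.add(chunk); // each chunk would go into its own sequential znode
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] payload = new byte[3 * 1024 * 1024]; // e.g. a large replication manifest
        List<byte[]> chunks = split(payload);
        System.out.println("chunks=" + chunks.size());
    }
}
```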



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14938) Limit the number of znodes for ZK in bulk loaded hfile replication

2015-12-24 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-14938:
--
Attachment: HBASE-14938-v1.patch

> Limit the number of znodes for ZK in bulk loaded hfile replication
> --
>
> Key: HBASE-14938
> URL: https://issues.apache.org/jira/browse/HBASE-14938
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14938(1).patch, HBASE-14938-v1.patch, 
> HBASE-14938.patch
>
>
> In ZK the maximum allowable size of the data array is 1 MB. Until we have 
> fixed HBASE-10295 we need to handle this.
> Approach to this problem will be discussed in the comments section.
> Note: We have done internal testing with more than 3k nodes in ZK yet to be 
> replicated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14717) enable_table_replication command should only create specified table for a peer cluster

2015-12-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14717:
---
Summary: enable_table_replication command should only create specified 
table for a peer cluster  (was: enable_table_replication command should create 
only specified table for a peer cluster)

> enable_table_replication command should only create specified table for a 
> peer cluster
> --
>
> Key: HBASE-14717
> URL: https://issues.apache.org/jira/browse/HBASE-14717
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.0.2
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Attachments: HBASE-14717(1).patch, HBASE-14717(2).patch, 
> HBASE-14717(3).patch, HBASE-14717.patch
>
>
> For a peer, only user-specified tables should be created, but the 
> enable_table_replication command is not honouring that.
> eg:
> like peer1 : t1:cf1, t2
> create 't3', 'd'
> enable_table_replication 't3' > should not create t3 in peer1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14717) enable_table_replication command should only create specified table for a peer cluster

2015-12-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14717:
---
 Hadoop Flags: Reviewed
Fix Version/s: 1.3.0
               1.2.0
               2.0.0

> enable_table_replication command should only create specified table for a 
> peer cluster
> --
>
> Key: HBASE-14717
> URL: https://issues.apache.org/jira/browse/HBASE-14717
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.0.2
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14717(1).patch, HBASE-14717(2).patch, 
> HBASE-14717(3).patch, HBASE-14717.patch
>
>
> For a peer, only user-specified tables should be created, but the 
> enable_table_replication command is not honouring that.
> eg:
> like peer1 : t1:cf1, t2
> create 't3', 'd'
> enable_table_replication 't3' > should not create t3 in peer1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14717) enable_table_replication command should only create specified table for a peer cluster

2015-12-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14717:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the patch, Ashish

> enable_table_replication command should only create specified table for a 
> peer cluster
> --
>
> Key: HBASE-14717
> URL: https://issues.apache.org/jira/browse/HBASE-14717
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.0.2
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14717(1).patch, HBASE-14717(2).patch, 
> HBASE-14717(3).patch, HBASE-14717.patch
>
>
> For a peer, only user-specified tables should be created, but the 
> enable_table_replication command is not honouring that.
> eg:
> like peer1 : t1:cf1, t2
> create 't3', 'd'
> enable_table_replication 't3' > should not create t3 in peer1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15032) hbase shell scan filter string assumes UTF-8 encoding

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15071202#comment-15071202
 ] 

Hudson commented on HBASE-15032:


FAILURE: Integrated in HBase-Trunk_matrix #585 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/585/])
HBASE-15032 hbase shell scan filter string assumes UTF-8 encoding (tedyu: rev 
e15c48ed2cf025dd3b0790c55cdc4239cc0fc161)
* hbase-shell/src/test/ruby/hbase/table_test.rb
* hbase-shell/src/main/ruby/hbase/table.rb


> hbase shell scan filter string assumes UTF-8 encoding
> -
>
> Key: HBASE-15032
> URL: https://issues.apache.org/jira/browse/HBASE-15032
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15032-v001.patch, HBASE-15032-v002.patch, 
> HBASE-15032-v002.patch, HBASE-15032-v003.patch
>
>
> Currently the hbase shell scan filter string is assumed to be UTF-8 encoded, 
> which makes the following scan not work.
> hbase(main):011:0> scan 't1'
> ROW                  COLUMN+CELL
>  r4                  column=cf1:q1, timestamp=1450812398741, value=\x82
> hbase(main):003:0> scan 't1', {FILTER => "SingleColumnValueFilter ('cf1', 
> 'q1', >=, 'binary:\x80', true, true)"}
> ROW                  COLUMN+CELL
> 0 row(s) in 0.0130 seconds



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14938) Limit the number of znodes for ZK in bulk loaded hfile replication

2015-12-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15071199#comment-15071199
 ] 

Ted Yu commented on HBASE-14938:


Latest patch looks good, pending QA run result.

> Limit the number of znodes for ZK in bulk loaded hfile replication
> --
>
> Key: HBASE-14938
> URL: https://issues.apache.org/jira/browse/HBASE-14938
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14938(1).patch, HBASE-14938-v1.patch, 
> HBASE-14938.patch
>
>
> In ZK the maximum allowable size of the data array is 1 MB. Until we have 
> fixed HBASE-10295 we need to handle this.
> Approach to this problem will be discussed in the comments section.
> Note: We have done internal testing with more than 3k nodes in ZK yet to be 
> replicated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14822) Renewing leases of scanners doesn't work

2015-12-24 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-14822:
--
Attachment: 14822-master-addendum.txt

Same for master.

> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-addendum.txt, 14822-0.98-v2.txt, 
> 14822-0.98-v3.txt, 14822-0.98.txt, 14822-master-addendum.txt, 
> 14822-v3-0.98.txt, 14822-v4-0.98.txt, 14822-v4.txt, 14822-v5-0.98.txt, 
> 14822-v5-1.3.txt, 14822-v5.txt, 14822.txt, HBASE-14822_98_nextseq.diff
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14717) enable_table_replication command should only create specified table for a peer cluster

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15071210#comment-15071210
 ] 

Hudson commented on HBASE-14717:


FAILURE: Integrated in HBase-1.2-IT #365 (See 
[https://builds.apache.org/job/HBase-1.2-IT/365/])
HBASE-14717 enable_table_replication command should only create (tedyu: rev 
a7889b5f4875895a2402b119dd1e763f90e1b7e1)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdminWithClusters.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java


> enable_table_replication command should only create specified table for a 
> peer cluster
> --
>
> Key: HBASE-14717
> URL: https://issues.apache.org/jira/browse/HBASE-14717
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.0.2
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14717(1).patch, HBASE-14717(2).patch, 
> HBASE-14717(3).patch, HBASE-14717.patch
>
>
> For a peer, only user-specified tables should be created, but the 
> enable_table_replication command is not honouring that.
> eg:
> like peer1 : t1:cf1, t2
> create 't3', 'd'
> enable_table_replication 't3' > should not create t3 in peer1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15031) Fix merge of MVCC and SequenceID performance regression in branch-1.0

2015-12-24 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15031:
--
Attachment: 15031.v6.branch-1.0.patch

I had TestRegionIncrement at 10 threads and had bumped it to 100 for messing... 
and this gets the test stuck in mvcc on the slow increment path.

> Fix merge of MVCC and SequenceID performance regression in branch-1.0
> -
>
> Key: HBASE-15031
> URL: https://issues.apache.org/jira/browse/HBASE-15031
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Affects Versions: 1.0.3
>Reporter: stack
>Assignee: stack
> Attachments: 14460.v0.branch-1.0.patch, 15031.v2.branch-1.0.patch, 
> 15031.v3.branch-1.0.patch, 15031.v4.branch-1.0.patch, 
> 15031.v5.branch-1.0.patch, 15031.v6.branch-1.0.patch
>
>
> Subtask with fix for branch-1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15071218#comment-15071218
 ] 

Hadoop QA commented on HBASE-15035:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12779464/HBASE-15035-v3.patch
  against master branch at commit e15c48ed2cf025dd3b0790c55cdc4239cc0fc161.
  ATTACHMENT ID: 12779464

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 12 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
new checkstyle errors. Check build console for list of new errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:red}-1 core tests{color}.  The patch failed these unit tests:
  org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint
  org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles
  org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17020//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17020//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17020//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17020//console

This message is automatically generated.

> bulkloading hfiles with tags that require splits do not preserve tags
> -
>
> Key: HBASE-15035
> URL: https://issues.apache.org/jira/browse/HBASE-15035
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Attachments: HBASE-15035-v2.patch, HBASE-15035-v3.patch, 
> HBASE-15035.patch
>
>
> When an hfile is created with cell tags present and it is bulk loaded into 
> hbase the tags will be present when loaded into a single region.  If the bulk 
> load hfile spans multiple regions, bulk load automatically splits the 
> original hfile into a set of split hfiles corresponding to each of the 
> regions that the original covers.  
> Since 0.98, tags are not copied into the newly created split hfiles (the 
> "includeTags" field of the HFileContextBuilder [1] is left uninitialized, 
> which defaults to false). This means ACLs, TTLs, MOB pointers and other 
> tag-stored values will not be bulk loaded.
> [1]  
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java#L40



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implementations

2015-12-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15071233#comment-15071233
 ] 

Hadoop QA commented on HBASE-15018:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12779465/HBASE-15018-v2.patch
  against master branch at commit e15c48ed2cf025dd3b0790c55cdc4239cc0fc161.
  ATTACHMENT ID: 12779465

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17021//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17021//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17021//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17021//console

This message is automatically generated.

> Inconsistent way of handling TimeoutException in the rpc client implementations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018-v1(1).patch, HBASE-15018-v1.patch, 
> HBASE-15018-v2.patch, HBASE-15018.patch, HBASE-15018.patch
>
>
> If there is any rpc timeout when using RpcClientImpl then we wrap the 
> exception in IOE and throw it,
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: 

[jira] [Updated] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-24 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-15035:
---
Attachment: HBASE-15035-v3.patch

v3: Fixed unit test and updated based on ram's comments.

> bulkloading hfiles with tags that require splits do not preserve tags
> -
>
> Key: HBASE-15035
> URL: https://issues.apache.org/jira/browse/HBASE-15035
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Attachments: HBASE-15035-v2.patch, HBASE-15035-v3.patch, 
> HBASE-15035.patch
>
>
> When an hfile is created with cell tags present and it is bulk loaded into 
> hbase the tags will be present when loaded into a single region.  If the bulk 
> load hfile spans multiple regions, bulk load automatically splits the 
> original hfile into a set of split hfiles corresponding to each of the 
> regions that the original covers.  
> Since 0.98, tags are not copied into the newly created split hfiles (the 
> "includeTags" field of the HFileContextBuilder [1] is left uninitialized, 
> which defaults to false). This means ACLs, TTLs, MOB pointers and other 
> tag-stored values will not be bulk loaded.
> [1]  
> https://github.com/apache/hbase/blob/master/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java#L40



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implementations

2015-12-24 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15071144#comment-15071144
 ] 

Ashish Singhi commented on HBASE-15018:
---

bq. Is the checkstyle your issue? Let me know.
Yes, due to an unused import. TYSM, Stack.

> Inconsistent way of handling TimeoutException in the rpc client implementations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018-v1(1).patch, HBASE-15018-v1.patch, 
> HBASE-15018-v2.patch, HBASE-15018.patch, HBASE-15018.patch
>
>
> If there is any rpc timeout when using RpcClientImpl then we wrap the 
> exception in IOE and throw it,
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
> ... 10 more
> {noformat}
> But that isn't the case with AsyncRpcClient; we don't wrap it, and throw the 
> CallTimeoutException as-is.
> {noformat}
> 2015-12-21 14:27:33,093 WARN  
> [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1]
>  regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because 
> of a local or network error: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, 
> method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I think we should have same behavior across both the implementations.
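One way to make the two implementations consistent, in the direction the report suggests, is to funnel every failure through a single wrapping helper. A hedged sketch with illustrative names — this is not the actual HBase fix:

```java
import java.io.IOException;

// Illustrative helper: always surface an IOException whose cause is the
// original failure, so callers see one shape regardless of which rpc
// client implementation was in use.
final class CallFailureWrapper {
    static IOException wrap(String address, Exception error) {
        return new IOException(
            "Call to " + address + " failed on local exception: " + error, error);
    }
}
```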

[jira] [Commented] (HBASE-15032) hbase shell scan filter string assumes UTF-8 encoding

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071148#comment-15071148
 ] 

Hudson commented on HBASE-15032:


SUCCESS: Integrated in HBase-1.2 #472 (See 
[https://builds.apache.org/job/HBase-1.2/472/])
HBASE-15032 hbase shell scan filter string assumes UTF-8 encoding (tedyu: rev 
ae1fc1d5cdaab06ec921bf3032ee840a0b66)
* hbase-shell/src/test/ruby/hbase/table_test.rb
* hbase-shell/src/main/ruby/hbase/table.rb


> hbase shell scan filter string assumes UTF-8 encoding
> -
>
> Key: HBASE-15032
> URL: https://issues.apache.org/jira/browse/HBASE-15032
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15032-v001.patch, HBASE-15032-v002.patch, 
> HBASE-15032-v002.patch, HBASE-15032-v003.patch
>
>
> Currently the hbase shell scan filter string is assumed to be UTF-8 encoded, 
> which makes the following scan not work.
> hbase(main):011:0> scan 't1'
> ROW                          COLUMN+CELL
>  r4                          column=cf1:q1, timestamp=1450812398741, value=\x82
> hbase(main):003:0> scan 't1', {FILTER => "SingleColumnValueFilter ('cf1', 
> 'q1', >=, 'binary:\x80', true, true)"}
> ROW                          COLUMN+CELL
> 0 row(s) in 0.0130 seconds
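The encoding mismatch is easy to demonstrate outside the shell: the value a user means by `\x80` is a single raw byte, but round-tripping the corresponding code point through UTF-8 produces two bytes, so a binary comparator never matches. A small self-contained demonstration (not the shell's actual code path):

```java
import java.nio.charset.StandardCharsets;

class EncodingDemo {
    public static void main(String[] args) {
        String s = "\u0080"; // the code point a user means by '\x80'
        // UTF-8 encodes U+0080 as two bytes (0xC2 0x80)...
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        // ...while ISO-8859-1 keeps it as the single raw byte 0x80.
        byte[] latin1 = s.getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(utf8.length + " " + latin1.length); // prints "2 1"
    }
}
```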





[jira] [Commented] (HBASE-14822) Renewing leases of scanners doesn't work

2015-12-24 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071147#comment-15071147
 ] 

Samarth Jain commented on HBASE-14822:
--

[~lhofhansl] - to be clear, I did test the patch. Phoenix queries which would 
have failed with lease timeout exceptions are now passing. So functionally your 
patch works. However, the patch ended up causing an inadvertent performance 
regression. Calling renew lease ends up increasing the nextCallSeq too. The 
subsequent OutOfOrderScannerNextException thrown is handled silently (once) by 
the ClientScanner#loadCache code which ends up setting the callable object to 
null.

{code}
if (e instanceof OutOfOrderScannerNextException) {
  if (retryAfterOutOfOrderException) {
retryAfterOutOfOrderException = false;
  } else {
// TODO: Why wrap this in a DNRIOE when it already is a DNRIOE?
throw new DoNotRetryIOException("Failed after retry of " +
  "OutOfOrderScannerNextException: was there a rpc timeout?", e);
  }
}
// Clear region.
this.currentRegion = null;
{code}

I am calling renewLease and scanner.next() using the same ClientScanner in 
different threads. However, I have proper synchronization in place that makes 
sure I am not calling both at the same time. It doesn't seem like a concurrency 
issue as I can reproduce this behavior consistently.
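The regression described above can be modeled with a toy client/server sequence counter: if lease renewal bumps the client's nextCallSeq while no next() reaches the server's scanner, the counters diverge and the next real call looks out of order. This is a hypothetical model, not the ClientScanner code:

```java
// Toy model of the client/server call-sequence handshake; names illustrative.
class ToyScanner {
    long clientSeq = 0;
    long serverSeq = 0;

    // A real next(): both sides observe the call, so the counters stay in step.
    void next() {
        if (clientSeq != serverSeq) throw new IllegalStateException("out of order");
        clientSeq++;
        serverSeq++;
    }

    // Buggy renewal: bumps the client counter for a call the server's scanner
    // does not count, which makes the *following* next() appear out of order.
    void buggyRenewLease() { clientSeq++; }

    // Fixed renewal: leaves the sequence untouched.
    void fixedRenewLease() { /* no seq change */ }
}
```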

> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-v2.txt, 14822-0.98-v3.txt, 14822-0.98.txt, 
> 14822-v3-0.98.txt, 14822-v4-0.98.txt, 14822-v4.txt, 14822-v5-0.98.txt, 
> 14822-v5-1.3.txt, 14822-v5.txt, 14822.txt, HBASE-14822_98_nextseq.diff
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14717) enable_table_replication command should create only specified table for a peer cluster

2015-12-24 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-14717:
--
Summary: enable_table_replication command should create only specified 
table for a peer cluster  (was: Enable_table_replication should not create 
table in peer cluster if specified few tables added in peer)

> enable_table_replication command should create only specified table for a 
> peer cluster
> --
>
> Key: HBASE-14717
> URL: https://issues.apache.org/jira/browse/HBASE-14717
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.0.2
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Attachments: HBASE-14717(1).patch, HBASE-14717(2).patch, 
> HBASE-14717(3).patch, HBASE-14717.patch
>
>
> For a peer, only user-specified tables should be created, but the 
> enable_table_replication command is not honouring that.
> eg:
> like peer1 : t1:cf1, t2
> create 't3', 'd'
> enable_table_replication 't3' > should not create t3 in peer1





[jira] [Commented] (HBASE-14717) enable_table_replication command should create only specified table for a peer cluster

2015-12-24 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071155#comment-15071155
 ] 

Ashish Singhi commented on HBASE-14717:
---

Thanks for the review, Ted.

> enable_table_replication command should create only specified table for a 
> peer cluster
> --
>
> Key: HBASE-14717
> URL: https://issues.apache.org/jira/browse/HBASE-14717
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.0.2
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Attachments: HBASE-14717(1).patch, HBASE-14717(2).patch, 
> HBASE-14717(3).patch, HBASE-14717.patch
>
>





[jira] [Commented] (HBASE-14822) Renewing leases of scanners doesn't work

2015-12-24 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071164#comment-15071164
 ] 

Lars Hofhansl commented on HBASE-14822:
---

Thanks [~samarthjain], that's what I just found. Looking into how to isolate 
this in a test.


> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-v2.txt, 14822-0.98-v3.txt, 14822-0.98.txt, 
> 14822-v3-0.98.txt, 14822-v4-0.98.txt, 14822-v4.txt, 14822-v5-0.98.txt, 
> 14822-v5-1.3.txt, 14822-v5.txt, 14822.txt, HBASE-14822_98_nextseq.diff
>
>






[jira] [Commented] (HBASE-15034) IntegrationTestDDLMasterFailover does not clean created namespaces

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071177#comment-15071177
 ] 

Hudson commented on HBASE-15034:


SUCCESS: Integrated in HBase-1.3 #467 (See 
[https://builds.apache.org/job/HBase-1.3/467/])
HBASE-15034 IntegrationTestDDLMasterFailover does not clean created 
(matteo.bertozzi: rev b59f0240e5a3aeb434d72ffe5d0575810d23dcf3)
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestDDLMasterFailover.java


> IntegrationTestDDLMasterFailover does not clean created namespaces 
> ---
>
> Key: HBASE-15034
> URL: https://issues.apache.org/jira/browse/HBASE-15034
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-15034-v1.patch, HBASE-15034-v1.patch, 
> HBASE-15035.patch
>
>
> I was running this test recently and noticed that after every run there are 
> new namespaces created by the test and not cleaned up when the test finishes. 
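A common guard against the leak described above is to record every namespace the test creates and delete it in a teardown step, so cleanup happens even when the test body fails midway. A generic sketch — the helper names are hypothetical, not the IntegrationTestDDLMasterFailover API:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical harness: track created namespaces so teardown can remove them
// even when the test body fails midway.
class NamespaceTracker {
    private final Set<String> created = new HashSet<>();
    final Set<String> cluster = new HashSet<>(); // stands in for real cluster state

    void createNamespace(String name) {
        cluster.add(name);
        created.add(name); // remember it for cleanup
    }

    // Call from an @After/finally block: removes only what this test created.
    void cleanup() {
        for (String ns : created) cluster.remove(ns);
        created.clear();
    }
}
```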





[jira] [Commented] (HBASE-15032) hbase shell scan filter string assumes UTF-8 encoding

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071178#comment-15071178
 ] 

Hudson commented on HBASE-15032:


SUCCESS: Integrated in HBase-1.3 #467 (See 
[https://builds.apache.org/job/HBase-1.3/467/])
HBASE-15032 hbase shell scan filter string assumes UTF-8 encoding (tedyu: rev 
a6eea24f711106f1f162453df54aebf9ebb6c6dc)
* hbase-shell/src/test/ruby/hbase/table_test.rb
* hbase-shell/src/main/ruby/hbase/table.rb


> hbase shell scan filter string assumes UTF-8 encoding
> -
>
> Key: HBASE-15032
> URL: https://issues.apache.org/jira/browse/HBASE-15032
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15032-v001.patch, HBASE-15032-v002.patch, 
> HBASE-15032-v002.patch, HBASE-15032-v003.patch
>
>





[jira] [Updated] (HBASE-14822) Renewing leases of scanners doesn't work

2015-12-24 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-14822:
--
Attachment: 14822-0.98-addendum.txt

Addendum that fixes this for 0.98.
Uses RS metrics in the test to determine whether OOO exceptions occurred.
Verified that without the addendum the OOO exceptions occurred, and they don't 
after.

Planning to commit this to all branches. [~busbey], where are you with 1.2?

> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-addendum.txt, 14822-0.98-v2.txt, 
> 14822-0.98-v3.txt, 14822-0.98.txt, 14822-v3-0.98.txt, 14822-v4-0.98.txt, 
> 14822-v4.txt, 14822-v5-0.98.txt, 14822-v5-1.3.txt, 14822-v5.txt, 14822.txt, 
> HBASE-14822_98_nextseq.diff
>
>






[jira] [Commented] (HBASE-14822) Renewing leases of scanners doesn't work

2015-12-24 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071126#comment-15071126
 ] 

Lars Hofhansl commented on HBASE-14822:
---

Hmm... I thought you had tested this patch? Seems I cannot win on this one :(
You are using the same ClientScanner object for renewal and scanning I assume?

From different threads? There might be a concurrency issue. Does this always 
happen, or just sometimes?


> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-v2.txt, 14822-0.98-v3.txt, 14822-0.98.txt, 
> 14822-v3-0.98.txt, 14822-v4-0.98.txt, 14822-v4.txt, 14822-v5-0.98.txt, 
> 14822-v5-1.3.txt, 14822-v5.txt, 14822.txt, HBASE-14822_98_nextseq.diff
>
>






[jira] [Commented] (HBASE-14940) Make our unsafe based ops more safe

2015-12-24 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071128#comment-15071128
 ] 

stack commented on HBASE-14940:
---

It looks like the code below is only called on instantiation of UnsafeAccess... 

{code}
   if (theUnsafe != null) {
     BYTE_ARRAY_BASE_OFFSET = theUnsafe.arrayBaseOffset(byte[].class);
     try {
       // Using java.nio.Bits#unaligned() to check for unaligned-access capability
       Class clazz = Class.forName("java.nio.Bits");
       Method m = clazz.getDeclaredMethod("unaligned");
       m.setAccessible(true);
       unaligned = (boolean) m.invoke(null);
     } catch (Exception e) {
       unaligned = false;
     }
{code}

i.e. we do the reflection once only?

Patch seems good otherwise.
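The answer to the question above is that a static initializer runs once per class load, so caching the probe result in a static final field makes the reflection a one-time cost. A self-contained sketch of that pattern — the probe mirrors the quoted snippet, but this is not the UnsafeAccess source:

```java
import java.lang.reflect.Method;

// One-time reflective probe, cached in a static final at class-load time.
final class UnalignedProbe {
    static final boolean UNALIGNED = detect();

    private static boolean detect() {
        try {
            // java.nio.Bits#unaligned() reports unaligned-access capability.
            Class<?> clazz = Class.forName("java.nio.Bits");
            Method m = clazz.getDeclaredMethod("unaligned");
            m.setAccessible(true);
            return (boolean) m.invoke(null);
        } catch (Throwable t) {
            return false; // assume the safe answer if the probe is not permitted
        }
    }
}
```

Every later read of `UnalignedProbe.UNALIGNED` is a plain field access; the reflective call never runs again.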

> Make our unsafe based ops more safe
> ---
>
> Key: HBASE-14940
> URL: https://issues.apache.org/jira/browse/HBASE-14940
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14940.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_v2.patch
>
>
> Thanks for the nice findings [~ikeda]
> This jira solves 3 issues with Unsafe operations and ByteBufferUtils
> 1. We can do sun unsafe based reads and writes iff the unsafe package is 
> available and the underlying platform has unaligned-access capability, but 
> we were missing the second check.
> 2. Java NIO does a chunk based copy while doing Unsafe copyMemory. The 
> max chunk size is 1 MB. This is done because "A limit is imposed to allow for 
> safepoint polling during a large copy", as mentioned in comments in Bits.java. 
> We are going to do it the same way.
> 3. In ByteBufferUtils, when Unsafe is not available and ByteBuffers are off 
> heap, we were doing byte-by-byte operations (read/copy). We can avoid this 
> and do better.





[jira] [Commented] (HBASE-14822) Renewing leases of scanners doesn't work

2015-12-24 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071131#comment-15071131
 ] 

Lars Hofhansl commented on HBASE-14822:
---

Hmm... I see that indeed each call increases the callSeq on the client, but not 
the server.
How on earth is the test passing? It mixes renew call with actual scanner calls 
to test exactly that behavior.


> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-v2.txt, 14822-0.98-v3.txt, 14822-0.98.txt, 
> 14822-v3-0.98.txt, 14822-v4-0.98.txt, 14822-v4.txt, 14822-v5-0.98.txt, 
> 14822-v5-1.3.txt, 14822-v5.txt, 14822.txt, HBASE-14822_98_nextseq.diff
>
>






[jira] [Commented] (HBASE-15035) bulkloading hfiles with tags that require splits do not preserve tags

2015-12-24 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071133#comment-15071133
 ] 

Jonathan Hsieh commented on HBASE-15035:


[~ram_krish], instead of doing a one-off, I updated the constructor and made it 
more obvious which attributes are copied from the hfile and which from the 
family descriptor/conf.

> bulkloading hfiles with tags that require splits do not preserve tags
> -
>
> Key: HBASE-15035
> URL: https://issues.apache.org/jira/browse/HBASE-15035
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.0, 1.0.0, 2.0.0, 1.1.0, 1.2.0, 1.3.0
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>Priority: Blocker
> Attachments: HBASE-15035-v2.patch, HBASE-15035-v3.patch, 
> HBASE-15035.patch
>
>





[jira] [Commented] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implemenations

2015-12-24 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071136#comment-15071136
 ] 

stack commented on HBASE-15018:
---

bq. Sorry for the trouble, stack. Hadoop QA did not report anything and so I 
did not check for the report, my bad.

It's no trouble. Our build is messy but hopefully stabilizing. For a while 
there last week or so we were getting false positives. Not your fault. Thanks 
for the retry.

Is the checkstyle your issue? Let me know.


> Inconsistent way of handling TimeoutException in the rpc client implemenations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018-v1(1).patch, HBASE-15018-v1.patch, 
> HBASE-15018.patch, HBASE-15018.patch
>
>

[jira] [Commented] (HBASE-14940) Make our unsafe based ops more safe

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071135#comment-15071135
 ] 

Hudson commented on HBASE-14940:


FAILURE: Integrated in HBase-1.0 #1127 (See 
[https://builds.apache.org/job/HBase-1.0/1127/])
HBASE-14940 Make our unsafe based ops more safe. (anoopsamjohn: rev 
86d4e2084d354f12d21f993464f88ec286a8a594)
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FuzzyRowFilter.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAccess.java


> Make our unsafe based ops more safe
> ---
>
> Key: HBASE-14940
> URL: https://issues.apache.org/jira/browse/HBASE-14940
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14940.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_v2.patch
>
>





[jira] [Commented] (HBASE-15031) Fix merge of MVCC and SequenceID performance regression in branch-1.0

2015-12-24 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071137#comment-15071137
 ] 

stack commented on HBASE-15031:
---

Says overall a +1 but it failed core tests. Let me fix that.

Looking at what failed, it says:

Printing hanging tests
Hanging test : org.apache.hadoop.hbase.regionserver.TestRegionIncrement

My new test. Let me look.

> Fix merge of MVCC and SequenceID performance regression in branch-1.0
> -
>
> Key: HBASE-15031
> URL: https://issues.apache.org/jira/browse/HBASE-15031
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Affects Versions: 1.0.3
>Reporter: stack
>Assignee: stack
> Attachments: 14460.v0.branch-1.0.patch, 15031.v2.branch-1.0.patch, 
> 15031.v3.branch-1.0.patch, 15031.v4.branch-1.0.patch, 
> 15031.v5.branch-1.0.patch
>
>
> Subtask with fix for branch-1.0.





[jira] [Updated] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implemenations

2015-12-24 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-15018:
--
Attachment: HBASE-15018-v2.patch

> Inconsistent way of handling TimeoutException in the rpc client implemenations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018-v1(1).patch, HBASE-15018-v1.patch, 
> HBASE-15018-v2.patch, HBASE-15018.patch, HBASE-15018.patch
>
>
> If there is any rpc timeout when using RpcClientImpl then we wrap the 
> exception in IOE and throw it,
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
> ... 10 more
> {noformat}
> But that isn't case with AsyncRpcClient, we don't wrap and throw 
> CallTimeoutException as it is.
> {noformat}
> 2015-12-21 14:27:33,093 WARN  
> [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1]
>  regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because 
> of a local or network error: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, 
> method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I think we should have the same behavior across both implementations.
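As a sketch of what the consistent behavior could look like (illustrative only; the class and message format mirror the first log above, not the actual HBase code):

```java
import java.io.IOException;
import java.net.InetSocketAddress;

// Illustrative sketch, not the actual HBase code: both rpc client
// implementations wrapping a timeout the same way before rethrowing, so
// callers always see one consistent exception type with the remote address.
public class WrapTimeoutSketch {
    public static IOException wrapException(InetSocketAddress addr, Exception e) {
        // Mirror the RpcClientImpl-style message seen in the first log above.
        return new IOException("Call to " + addr + " failed on local exception: " + e, e);
    }

    public static void main(String[] args) {
        IOException wrapped = wrapException(
            InetSocketAddress.createUnresolved("host-XX", 16040),
            new IOException("CallTimeoutException: Call id=510 expired"));
        System.out.println(wrapped.getMessage().startsWith("Call to host-XX")); // prints true
    }
}
```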



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (HBASE-14822) Renewing leases of scanners doesn't work

2015-12-24 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071162#comment-15071162
 ] 

Lars Hofhansl commented on HBASE-14822:
---

The test passes because the scan is retried. Meh.
I have an addendum that also increases the sequence number for renewLease.
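A toy illustration of the sequence-number issue (field and method names are invented; the real client keeps something like a nextCallSeq that must stay in sync with the server's expected sequence):

```java
// Hypothetical sketch of the client-side scanner sequence bookkeeping the
// addendum touches: if renewLease() does not bump the sequence while the
// server counts the RPC, the next real next() call is rejected and retried.
public class ScannerSeqSketch {
    private long nextCallSeq = 0; // must match the server's expected sequence

    public long next() {
        return nextCallSeq++; // each data fetch consumes one sequence number
    }

    public long renewLease() {
        // The fix: a lease renewal is also an RPC the server counts, so it
        // must consume a sequence number as well.
        return nextCallSeq++;
    }

    public long current() {
        return nextCallSeq;
    }

    public static void main(String[] args) {
        ScannerSeqSketch s = new ScannerSeqSketch();
        s.next();       // seq 0 used
        s.renewLease(); // seq 1 used
        System.out.println(s.current()); // prints 2
    }
}
```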


> Renewing leases of scanners doesn't work
> 
>
> Key: HBASE-14822
> URL: https://issues.apache.org/jira/browse/HBASE-14822
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.14
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 1.3.0, 1.2.1, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: 14822-0.98-v2.txt, 14822-0.98-v3.txt, 14822-0.98.txt, 
> 14822-v3-0.98.txt, 14822-v4-0.98.txt, 14822-v4.txt, 14822-v5-0.98.txt, 
> 14822-v5-1.3.txt, 14822-v5.txt, 14822.txt, HBASE-14822_98_nextseq.diff
>
>






[jira] [Commented] (HBASE-15016) StoreServices facility in Region

2015-12-24 Thread Eshcar Hillel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070733#comment-15070733
 ] 

Eshcar Hillel commented on HBASE-15016:
---

The patch is attached.
Recap: Future memstore optimizations such as memstore compaction, compression, 
and off-heaping require some interface with services at the region level. For 
this purpose we introduce the StoreServices class. In addition to being the 
interface through which memstores access services, it also maintains additional 
data that is updated by the memstores and can be queried by the region.
This patch also refines and extends region-to-store communication. Since this 
is the normal flow of data, there is no need to create a new interface, and the 
API of Store is extended (with the 2 methods described in the previous comment).
Finally, this patch refines the region method which decides whether to invoke a 
flush. The decision is captured in the new invokeFlushIfNeeded() method and is 
based also on data stored in the StoreServices objects.
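A minimal, hypothetical sketch of the kind of region-level services object described above (method names are invented for illustration; this is not the actual HBase API):

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch only. A region-level StoreServices object maintains a
// counter that memstores update (possibly downward, after in-memory
// compaction or compression) and that the region queries when deciding
// whether to flush.
public class StoreServicesSketch {
    private final AtomicLong memstoreDataSize = new AtomicLong();

    // Called by a memstore; delta may be negative between two flushes.
    public long adjustMemstoreSize(long delta) {
        return memstoreDataSize.addAndGet(delta);
    }

    // Queried by the region's flush-decision logic (the role played by the
    // new invokeFlushIfNeeded() method described in the comment above).
    public boolean flushNeeded(long flushThreshold) {
        return memstoreDataSize.get() >= flushThreshold;
    }

    public static void main(String[] args) {
        StoreServicesSketch services = new StoreServicesSketch();
        services.adjustMemstoreSize(128);
        services.adjustMemstoreSize(-64); // size decreased between flushes
        System.out.println(services.flushNeeded(100)); // prints false
    }
}
```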

> StoreServices facility in Region
> 
>
> Key: HBASE-15016
> URL: https://issues.apache.org/jira/browse/HBASE-15016
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-15016-V01.patch
>
>
> The default implementation of a memstore ensures that between two flushes the 
> memstore size increases monotonically. Supporting new memstores that store 
> data in different formats (specifically, compressed), or that allow 
> eliminating data redundancies in memory (e.g., via compaction), means that the 
> size of the data stored in memory can decrease even between two flushes. This 
> requires memstores to have access to facilities that manipulate region 
> counters and synchronization.
> This subtask introduces a new region interface -- StoreServices, through 
> which store components can access these facilities.





[jira] [Updated] (HBASE-15016) StoreServices facility in Region

2015-12-24 Thread Eshcar Hillel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eshcar Hillel updated HBASE-15016:
--
Status: Patch Available  (was: Open)

> StoreServices facility in Region
> 
>
> Key: HBASE-15016
> URL: https://issues.apache.org/jira/browse/HBASE-15016
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-15016-V01.patch
>
>
> The default implementation of a memstore ensures that between two flushes the 
> memstore size increases monotonically. Supporting new memstores that store 
> data in different formats (specifically, compressed), or that allow 
> eliminating data redundancies in memory (e.g., via compaction), means that the 
> size of the data stored in memory can decrease even between two flushes. This 
> requires memstores to have access to facilities that manipulate region 
> counters and synchronization.
> This subtask introduces a new region interface -- StoreServices, through 
> which store components can access these facilities.





[jira] [Updated] (HBASE-15016) StoreServices facility in Region

2015-12-24 Thread Eshcar Hillel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eshcar Hillel updated HBASE-15016:
--
Attachment: HBASE-15016-V01.patch

> StoreServices facility in Region
> 
>
> Key: HBASE-15016
> URL: https://issues.apache.org/jira/browse/HBASE-15016
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-15016-V01.patch
>
>
> The default implementation of a memstore ensures that between two flushes the 
> memstore size increases monotonically. Supporting new memstores that store 
> data in different formats (specifically, compressed), or that allow 
> eliminating data redundancies in memory (e.g., via compaction), means that the 
> size of the data stored in memory can decrease even between two flushes. This 
> requires memstores to have access to facilities that manipulate region 
> counters and synchronization.
> This subtask introduces a new region interface -- StoreServices, through 
> which store components can access these facilities.





[jira] [Commented] (HBASE-14684) Try to remove all MiniMapReduceCluster in unit tests

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070738#comment-15070738
 ] 

Hudson commented on HBASE-14684:


FAILURE: Integrated in HBase-1.2-IT #363 (See 
[https://builds.apache.org/job/HBase-1.2-IT/363/])
HBASE-14684 Try to remove all MiniMapReduceCluster in unit tests (chenheng: rev 
27988208a87aa17ed6bfad2eb476eba07e010a9b)
* hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableSnapshotInputFormat.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultithreadedTableMapper.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestRowCounter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithTTLs.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTimeRangeMapRed.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestCopyTable.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatTestBase.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduceBase.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScanBase.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSyncTable.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHRegionPartitioner.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithOperationAttributes.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestCellCounter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsv.java
* 
hbase-common/src/test/java/org/apache/hadoop/hbase/HBaseCommonTestingUtility.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormat.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestWALPlayer.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithVisibilityLabels.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatTestBase.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHashTable.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultiTableInputFormat.java


> Try to remove all MiniMapReduceCluster in unit tests
> 
>
> Key: HBASE-14684
> URL: https://issues.apache.org/jira/browse/HBASE-14684
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Heng Chen
>Assignee: Heng Chen
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14684.branch-1.txt, 14684.branch-1.txt, 
> 14684.branch-1.txt, HBASE-14684-branch-1.2.patch, 
> HBASE-14684-branch-1.2_v1.patch, HBASE-14684-branch-1.2_v1.patch, 
> HBASE-14684-branch-1.patch, HBASE-14684-branch-1.patch, 
> HBASE-14684-branch-1.patch, HBASE-14684-branch-1_v1.patch, 
> HBASE-14684-branch-1_v2.patch, HBASE-14684-branch-1_v3.patch, 
> HBASE-14684.patch, HBASE-14684_v1.patch
>
>
> As discussed on the dev list, we will try to run MR jobs without 
> MiniMapReduceCluster.
> Test cases will run faster and be more reliable.





[jira] [Updated] (HBASE-15032) hbase shell scan filter string assumes UTF-8 encoding

2015-12-24 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-15032:
-
Attachment: HBASE-15032-v003.patch

Attaching the v3 patch, which adds a shell unit test case.

With the patch from HBASE-15023, the following unit test passed:

mvn clean test -Dtest=TestShell -Dshell.test=/TableComplexMethodsTest/

> hbase shell scan filter string assumes UTF-8 encoding
> -
>
> Key: HBASE-15032
> URL: https://issues.apache.org/jira/browse/HBASE-15032
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-15032-v001.patch, HBASE-15032-v002.patch, 
> HBASE-15032-v002.patch, HBASE-15032-v003.patch
>
>
> Currently the hbase shell scan filter string is assumed to be UTF-8 encoded, 
> which makes the following scan not work.
> hbase(main):011:0> scan 't1'
> ROW COLUMN+CELL   
>   
>   
>  r4 column=cf1:q1, 
> timestamp=1450812398741, value=\x82 
> hbase(main):003:0> scan 't1', {FILTER => "SingleColumnValueFilter ('cf1', 
> 'q1', >=, 'binary:\x80', true, true)"}
> ROW COLUMN+CELL   
>   
>   
> 0 row(s) in 0.0130 seconds





[jira] [Commented] (HBASE-14684) Try to remove all MiniMapReduceCluster in unit tests

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070817#comment-15070817
 ] 

Hudson commented on HBASE-14684:


SUCCESS: Integrated in HBase-1.2 #471 (See 
[https://builds.apache.org/job/HBase-1.2/471/])
HBASE-14684 Try to remove all MiniMapReduceCluster in unit tests (chenheng: rev 
27988208a87aa17ed6bfad2eb476eba07e010a9b)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestRowCounter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHRegionPartitioner.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHashTable.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatTestBase.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestCopyTable.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSyncTable.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithTTLs.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithVisibilityLabels.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableSnapshotInputFormat.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultithreadedTableMapper.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestCellCounter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTimeRangeMapRed.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatTestBase.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java
* 
hbase-common/src/test/java/org/apache/hadoop/hbase/HBaseCommonTestingUtility.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsv.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormat.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduceBase.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestWALPlayer.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultiTableInputFormat.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithOperationAttributes.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScanBase.java


> Try to remove all MiniMapReduceCluster in unit tests
> 
>
> Key: HBASE-14684
> URL: https://issues.apache.org/jira/browse/HBASE-14684
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Heng Chen
>Assignee: Heng Chen
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14684.branch-1.txt, 14684.branch-1.txt, 
> 14684.branch-1.txt, HBASE-14684-branch-1.2.patch, 
> HBASE-14684-branch-1.2_v1.patch, HBASE-14684-branch-1.2_v1.patch, 
> HBASE-14684-branch-1.patch, HBASE-14684-branch-1.patch, 
> HBASE-14684-branch-1.patch, HBASE-14684-branch-1_v1.patch, 
> HBASE-14684-branch-1_v2.patch, HBASE-14684-branch-1_v3.patch, 
> HBASE-14684.patch, HBASE-14684_v1.patch
>
>
> As discussed on the dev list, we will try to run MR jobs without 
> MiniMapReduceCluster.
> Test cases will run faster and be more reliable.





[jira] [Commented] (HBASE-14940) Make our unsafe based ops more safe

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070747#comment-15070747
 ] 

Hudson commented on HBASE-14940:


FAILURE: Integrated in HBase-Trunk_matrix #583 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/583/])
HBASE-14940 Make our unsafe based ops more safe. (anoopsamjohn: rev 
6fc2596ab37614fe35ccfebda0564e4721bd4b95)
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/nio/SingleByteBuff.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FuzzyRowFilter.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAccess.java


> Make our unsafe based ops more safe
> ---
>
> Key: HBASE-14940
> URL: https://issues.apache.org/jira/browse/HBASE-14940
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14940.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_v2.patch
>
>
> Thanks for the nice findings, [~ikeda].
> This jira solves 3 issues with Unsafe operations and ByteBufferUtils:
> 1. We can do sun Unsafe based reads and writes iff the unsafe package is 
> available and the underlying platform has unaligned-access capability. But 
> we were missing the second check.
> 2. Java NIO does a chunk-based copy when doing Unsafe copyMemory. The max 
> chunk size is 1 MB. This is done because "A limit is imposed to allow for 
> safepoint polling during a large copy", as mentioned in the comments in 
> Bits.java. We will do it the same way.
> 3. In ByteBufferUtils, when Unsafe is not available and the ByteBuffers are 
> off heap, we were doing byte-by-byte operations (read/copy). We can avoid 
> this and do better.
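The chunked copy in point 2 can be sketched as follows (illustrative only; System.arraycopy stands in for Unsafe.copyMemory, and the 1 MB threshold mirrors the java.nio.Bits comment quoted above):

```java
// Sketch of a chunked copy: cap each copy at 1 MB so the JVM can reach a
// safepoint between chunks during a large copy.
public class ChunkedCopySketch {
    static final long UNSAFE_COPY_THRESHOLD = 1024L * 1024L; // 1 MB chunk cap

    public static void copy(byte[] src, int srcOff, byte[] dst, int dstOff, long length) {
        while (length > 0) {
            long size = Math.min(length, UNSAFE_COPY_THRESHOLD);
            // Stand-in for Unsafe.copyMemory on one bounded chunk.
            System.arraycopy(src, srcOff, dst, dstOff, (int) size);
            length -= size;
            srcOff += size;
            dstOff += size;
        }
    }

    public static void main(String[] args) {
        byte[] src = new byte[3 * 1024 * 1024 + 17]; // forces four chunks
        for (int i = 0; i < src.length; i++) src[i] = (byte) i;
        byte[] dst = new byte[src.length];
        copy(src, 0, dst, 0, src.length);
        System.out.println(java.util.Arrays.equals(src, dst)); // prints true
    }
}
```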





[jira] [Updated] (HBASE-15034) IntegrationTestDDLMasterFailover does not clean created namespaces

2015-12-24 Thread Samir Ahmic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samir Ahmic updated HBASE-15034:

Attachment: HBASE-15034-v1.patch

I'm sure the test failures are not related to the patch. Retrying.

> IntegrationTestDDLMasterFailover does not clean created namespaces 
> ---
>
> Key: HBASE-15034
> URL: https://issues.apache.org/jira/browse/HBASE-15034
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
>Priority: Minor
> Attachments: HBASE-15034-v1.patch, HBASE-15034-v1.patch, 
> HBASE-15035.patch
>
>
> I was running this test recently and noticed that after every run there are 
> new namespaces created by the test that are not cleaned up when the test finishes.





[jira] [Commented] (HBASE-15032) hbase shell scan filter string assumes UTF-8 encoding

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071101#comment-15071101
 ] 

Hudson commented on HBASE-15032:


FAILURE: Integrated in HBase-1.2-IT #364 (See 
[https://builds.apache.org/job/HBase-1.2-IT/364/])
HBASE-15032 hbase shell scan filter string assumes UTF-8 encoding (tedyu: rev 
ae1fc1d5cdaab06ec921bf3032ee840a0b66)
* hbase-shell/src/main/ruby/hbase/table.rb
* hbase-shell/src/test/ruby/hbase/table_test.rb


> hbase shell scan filter string assumes UTF-8 encoding
> -
>
> Key: HBASE-15032
> URL: https://issues.apache.org/jira/browse/HBASE-15032
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15032-v001.patch, HBASE-15032-v002.patch, 
> HBASE-15032-v002.patch, HBASE-15032-v003.patch
>
>
> Currently the hbase shell scan filter string is assumed to be UTF-8 encoded, 
> which makes the following scan not work.
> hbase(main):011:0> scan 't1'
> ROW COLUMN+CELL   
>   
>   
>  r4 column=cf1:q1, 
> timestamp=1450812398741, value=\x82 
> hbase(main):003:0> scan 't1', {FILTER => "SingleColumnValueFilter ('cf1', 
> 'q1', >=, 'binary:\x80', true, true)"}
> ROW COLUMN+CELL   
>   
>   
> 0 row(s) in 0.0130 seconds





[jira] [Commented] (HBASE-14938) Limit the number of znodes for ZK in bulk loaded hfile replication

2015-12-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070884#comment-15070884
 ] 

Ted Yu commented on HBASE-14938:


{code}
458 } else {
459   List listOfOps = new 
ArrayList(totalNoOfFiles);
460   for (int i = 0; i < totalNoOfFiles; i++) {
{code}
The else branch should be covered by the new controlXX method(s), right (the 
remainder part)?
Can you consolidate the if and else branches so that readability is better?

lgtm otherwise

> Limit the number of znodes for ZK in bulk loaded hfile replication
> --
>
> Key: HBASE-14938
> URL: https://issues.apache.org/jira/browse/HBASE-14938
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14938(1).patch, HBASE-14938.patch
>
>
> In ZK the maximum allowable size of the data array is 1 MB. Until we have 
> fixed HBASE-10295 we need to handle this.
> The approach to this problem will be discussed in the comments section.
> Note: We have done internal testing with more than 3k nodes in ZK yet to be 
> replicated.
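One illustrative way to bound what goes into a single ZK multi call is to partition the pending operations into fixed-size batches; the batch size below is an invented knob, not an HBase configuration:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: split a list of pending znode operations into bounded
// batches so that no single multi() carries too many ops or too much data.
public class ZkBatchSketch {
    public static <T> List<List<T>> partition(List<T> ops, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < ops.size(); i += batchSize) {
            // Copy each sublist so batches remain valid if ops is mutated.
            batches.add(new ArrayList<>(ops.subList(i, Math.min(i + batchSize, ops.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> ops = new ArrayList<>();
        for (int i = 0; i < 10; i++) ops.add(i);
        List<List<Integer>> batches = partition(ops, 4);
        System.out.println(batches.size()); // prints 3 (batches of 4, 4, 2)
    }
}
```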





[jira] [Updated] (HBASE-14919) Infrastructure refactoring

2015-12-24 Thread Eshcar Hillel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eshcar Hillel updated HBASE-14919:
--
Attachment: HBASE-14919-V04.patch

Re-submitting the patch; master seems to be stable again.

> Infrastructure refactoring
> --
>
> Key: HBASE-14919
> URL: https://issues.apache.org/jira/browse/HBASE-14919
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-14919-V01.patch, HBASE-14919-V01.patch, 
> HBASE-14919-V02.patch, HBASE-14919-V03.patch, HBASE-14919-V04.patch, 
> HBASE-14919-V04.patch
>
>
> Refactoring the MemStore hierarchy, introducing the segment (StoreSegment) as 
> a first-class citizen, and decoupling the memstore scanner from the memstore 
> implementation.





[jira] [Commented] (HBASE-15032) hbase shell scan filter string assumes UTF-8 encoding

2015-12-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070854#comment-15070854
 ] 

Ted Yu commented on HBASE-15032:


Verified that the new test passes.

> hbase shell scan filter string assumes UTF-8 encoding
> -
>
> Key: HBASE-15032
> URL: https://issues.apache.org/jira/browse/HBASE-15032
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-15032-v001.patch, HBASE-15032-v002.patch, 
> HBASE-15032-v002.patch, HBASE-15032-v003.patch
>
>
> Currently the hbase shell scan filter string is assumed to be UTF-8 encoded, 
> which makes the following scan not work.
> hbase(main):011:0> scan 't1'
> ROW COLUMN+CELL   
>   
>   
>  r4 column=cf1:q1, 
> timestamp=1450812398741, value=\x82 
> hbase(main):003:0> scan 't1', {FILTER => "SingleColumnValueFilter ('cf1', 
> 'q1', >=, 'binary:\x80', true, true)"}
> ROW COLUMN+CELL   
>   
>   
> 0 row(s) in 0.0130 seconds





[jira] [Updated] (HBASE-15032) hbase shell scan filter string assumes UTF-8 encoding

2015-12-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15032:
---
 Hadoop Flags: Reviewed
Fix Version/s: 1.3.0
   1.2.0
   2.0.0

> hbase shell scan filter string assumes UTF-8 encoding
> -
>
> Key: HBASE-15032
> URL: https://issues.apache.org/jira/browse/HBASE-15032
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15032-v001.patch, HBASE-15032-v002.patch, 
> HBASE-15032-v002.patch, HBASE-15032-v003.patch
>
>
> Currently the hbase shell scan filter string is assumed to be UTF-8 encoded, 
> which makes the following scan not work.
> hbase(main):011:0> scan 't1'
> ROW COLUMN+CELL   
>   
>   
>  r4 column=cf1:q1, 
> timestamp=1450812398741, value=\x82 
> hbase(main):003:0> scan 't1', {FILTER => "SingleColumnValueFilter ('cf1', 
> 'q1', >=, 'binary:\x80', true, true)"}
> ROW COLUMN+CELL   
>   
>   
> 0 row(s) in 0.0130 seconds





[jira] [Updated] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implemenations

2015-12-24 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-15018:
--
Attachment: HBASE-15018-v1(1).patch

Sorry for the trouble, [~stack].
Hadoop QA did not report anything, so I did not check for the report; my bad.

This time the failure was in TestAcidGuarantees (an XML parsing error), which 
should not be related to the patch.

Let me retry.

> Inconsistent way of handling TimeoutException in the rpc client implemenations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018-v1(1).patch, HBASE-15018-v1.patch, 
> HBASE-15018.patch, HBASE-15018.patch
>
>
> If there is any rpc timeout when using RpcClientImpl, then we wrap the 
> exception in an IOE and throw it:
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1213)
> ... 10 more
> {noformat}
> But that isn't the case with AsyncRpcClient; there we don't wrap, and we 
> throw CallTimeoutException as it is.
> {noformat}
> 2015-12-21 14:27:33,093 WARN  
> [RS_OPEN_REGION-host-XX:16201-0.replicationSource.host-XX%2C16201%2C1450687255593,1]
>  regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because 
> of a local or network error: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: callId=2, 
> method=ReplicateWALEntry, rpcTimeout=60, param {TODO: class 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$ReplicateWALEntryRequest}
>   at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:257)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:295)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:23707)
>   at 
> org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:73)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:387)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:370)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> 

[jira] [Updated] (HBASE-14717) Enable_table_replication should not create table in peer cluster if specified few tables added in peer

2015-12-24 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-14717:
--
Attachment: HBASE-14717(3).patch

Retry again...

> Enable_table_replication should not create table in peer cluster if specified 
> few tables added in peer
> --
>
> Key: HBASE-14717
> URL: https://issues.apache.org/jira/browse/HBASE-14717
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.0.2
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Ashish Singhi
> Attachments: HBASE-14717(1).patch, HBASE-14717(2).patch, 
> HBASE-14717(3).patch, HBASE-14717.patch
>
>
> For a peer, only the user-specified tables should be created, but the 
> enable_table_replication command is not honouring that.
> e.g.:
> like peer1 : t1:cf1, t2
> create 't3', 'd'
> enable_table_replication 't3' > should not create t3 in peer1





[jira] [Commented] (HBASE-15034) IntegrationTestDDLMasterFailover does not clean created namespaces

2015-12-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070911#comment-15070911
 ] 

Hadoop QA commented on HBASE-15034:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12779414/HBASE-15034-v1.patch
  against master branch at commit 6fc2596ab37614fe35ccfebda0564e4721bd4b95.
  ATTACHMENT ID: 12779414

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17014//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17014//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17014//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17014//console

This message is automatically generated.

> IntegrationTestDDLMasterFailover does not clean created namespaces 
> ---
>
> Key: HBASE-15034
> URL: https://issues.apache.org/jira/browse/HBASE-15034
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Samir Ahmic
>Assignee: Samir Ahmic
>Priority: Minor
> Attachments: HBASE-15034-v1.patch, HBASE-15034-v1.patch, 
> HBASE-15035.patch
>
>
> I was running this test recently and noticed that after every run there are 
> new namespaces created by the test that are not cleaned up when the test finishes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15038) ExportSnapshot should support separate configurations for source and destination clusters

2015-12-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15070820#comment-15070820
 ] 

Hadoop QA commented on HBASE-15038:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12779383/hbase-15038.patch
  against master branch at commit 6fc2596ab37614fe35ccfebda0564e4721bd4b95.
  ATTACHMENT ID: 12779383

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:red}-1 core tests{color}.  The patch failed these unit tests:
    org.apache.hadoop.hbase.regionserver.TestFailedAppendAndSync

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17011//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17011//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17011//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17011//console

This message is automatically generated.

> ExportSnapshot should support separate configurations for source and 
> destination clusters
> -
>
> Key: HBASE-15038
> URL: https://issues.apache.org/jira/browse/HBASE-15038
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce, snapshots
>Reporter: Gary Helmling
>Assignee: Gary Helmling
> Attachments: hbase-15038.patch
>
>
> Currently ExportSnapshot uses a single Configuration instance for both the 
> source and destination FileSystem instances. It should allow overriding 
> properties for each filesystem connection separately.
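
The improvement amounts to deriving two property sets from one base configuration. A minimal sketch of the intended override semantics, using plain dicts in place of Hadoop Configuration objects; the `source.`/`dest.` key prefixes here are hypothetical illustration, not part of the patch:

```python
def split_conf(base, overrides, src_prefix="source.", dst_prefix="dest."):
    """Derive independent source/destination configs from one base config.

    A key carrying a cluster prefix overrides (or adds) the unprefixed key
    for that side only; unprefixed override keys apply to both sides.
    """
    src, dst = dict(base), dict(base)
    for key, value in overrides.items():
        if key.startswith(src_prefix):
            src[key[len(src_prefix):]] = value
        elif key.startswith(dst_prefix):
            dst[key[len(dst_prefix):]] = value
        else:
            src[key] = dst[key] = value
    return src, dst

src, dst = split_conf(
    {"fs.defaultFS": "hdfs://a:8020", "io.file.buffer.size": "65536"},
    {"dest.fs.defaultFS": "hdfs://b:8020", "source.io.file.buffer.size": "131072"},
)
# Each side now sees its own fs.defaultFS while sharing everything else.
```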



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14938) Limit the number of znodes for ZK in bulk loaded hfile replication

2015-12-24 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-14938:
--
Attachment: HBASE-14938(1).patch

Reattaching again

> Limit the number of znodes for ZK in bulk loaded hfile replication
> --
>
> Key: HBASE-14938
> URL: https://issues.apache.org/jira/browse/HBASE-14938
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14938(1).patch, HBASE-14938.patch
>
>
> In ZK the maximum allowable size of the data array is 1 MB. Until we have 
> fixed HBASE-10295 we need to handle this.
> Approach to this problem will be discussed in the comments section.
> Note: we have done internal testing with more than 3k znodes in ZK still 
> pending replication.
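
Because a znode's data array tops out at 1 MB, the hfile references queued for replication have to be packed into multiple znodes. A minimal sketch of size-bounded batching, with a hypothetical helper and newline-joined payloads standing in for whatever encoding the patch actually uses:

```python
ZNODE_MAX_BYTES = 1 << 20  # ZooKeeper's default payload cap (jute.maxbuffer)

def batch_hfile_refs(refs, limit=ZNODE_MAX_BYTES):
    """Pack hfile reference strings into newline-joined payloads that each
    stay within the znode size limit (a single oversized ref still gets
    its own batch)."""
    batches, current, size = [], [], 0
    for ref in refs:
        encoded = len(ref.encode("utf-8")) + 1  # +1 for the separator
        if current and size + encoded > limit:
            batches.append("\n".join(current))
            current, size = [], 0
        current.append(ref)
        size += encoded
    if current:
        batches.append("\n".join(current))
    return batches

# ~3 MB of references ends up spread across a few znode-sized payloads.
batches = batch_hfile_refs(f"region-{i}/cf/hfile-{i}" for i in range(100000))
```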



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14918) In-Memory MemStore Flush and Compaction

2015-12-24 Thread Eshcar Hillel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15070913#comment-15070913
 ] 

Eshcar Hillel commented on HBASE-14918:
---

Patches are available for task 1 and task 2.

> In-Memory MemStore Flush and Compaction
> ---
>
> Key: HBASE-14918
> URL: https://issues.apache.org/jira/browse/HBASE-14918
> Project: HBase
>  Issue Type: Umbrella
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Fix For: 0.98.18
>
>
> A memstore serves as the in-memory component of a store unit, absorbing all 
> updates to the store. From time to time these updates are flushed to a file 
> on disk, where they are compacted (by eliminating redundancies) and 
> compressed (i.e., written in a compressed format to reduce their storage 
> size).
> We aim to speed up data access, and therefore suggest applying an in-memory 
> memstore flush, that is, flushing the active in-memory segment into an 
> intermediate buffer where it can be accessed by the application. Data in the 
> buffer is subject to compaction and can be stored in any format that allows 
> it to take up smaller space in RAM. The less space the buffer consumes the 
> longer it can reside in memory before data is flushed to disk, resulting in 
> better performance.
> Specifically, the optimization is beneficial for workloads with 
> medium-to-high key churn which incur many redundant cells, like persistent 
> messaging. 
> We suggest to structure the solution as 4 subtasks (respectively, patches). 
> (1) Infrastructure - refactoring of the MemStore hierarchy, introducing 
> segment (StoreSegment) as first-class citizen, and decoupling memstore 
> scanner from the memstore implementation;
> (2) Adding a StoreServices facility at the region level to allow memstores to 
> update region counters and access the region-level synchronization mechanism;
> (3) Implementation of a new memstore (CompactingMemstore) with non-optimized 
> immutable segment representation, and 
> (4) Memory optimization including compressed format representation and off 
> heap allocations.
> This Jira continues the discussion in HBASE-13408.
> Design documents, evaluation results and previous patches can be found in 
> HBASE-13408. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15027) Refactor the way the CompactedHFileDischarger threads are created

2015-12-24 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15070836#comment-15070836
 ] 

ramkrishna.s.vasudevan commented on HBASE-15027:


Ping for reviews!!! This failing test case should be a timing issue. Will look 
into it. But is the overall approach now fine?

> Refactor the way the CompactedHFileDischarger threads are created
> -
>
> Key: HBASE-15027
> URL: https://issues.apache.org/jira/browse/HBASE-15027
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-15027.patch, HBASE-15027_1.patch, 
> HBASE-15027_2.patch, HBASE-15027_3.patch, HBASE-15027_3.patch
>
>
> As per the suggestion given in HBASE-14970, if we need to create a single 
> thread pool service for the CompactionHFileDischarger, we need to create an 
> executor service at the RegionServer level, create discharger handler 
> threads (event handlers), and pass the event to the new executor service that 
> we create for the compaction hfiles discharger. What should be the default 
> number of threads here? If an HRS holds 100s of regions, will 10 threads be 
> enough? This issue will try to resolve this with tests and discussion, and a 
> suitable patch will be updated in HBASE-14970 for branch-1 once this is 
> committed.
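
The proposal above, one server-level pool servicing discharge events from every region instead of per-region threads, can be sketched with a stock thread pool (Python's stdlib standing in for the HBase ExecutorService; the class and method names below are illustrative only):

```python
from concurrent.futures import ThreadPoolExecutor

class CompactedFileDischarger:
    """One server-level pool services discharge events from all regions,
    rather than each region owning a dedicated thread."""

    def __init__(self, threads=10):  # the right default pool size is the open question
        self.pool = ThreadPoolExecutor(max_workers=threads)

    def submit(self, region, compacted_files):
        # Each region's discharge request becomes one queued event.
        return self.pool.submit(self._discharge, region, compacted_files)

    def _discharge(self, region, compacted_files):
        # Stand-in for archiving/deleting the compacted store files.
        return (region, len(compacted_files))

discharger = CompactedFileDischarger(threads=4)
futures = [discharger.submit(f"region-{i}", ["f1", "f2"]) for i in range(100)]
results = [f.result() for f in futures]  # 100 regions served by 4 threads
discharger.pool.shutdown()
```

The trade-off mirrored here is queueing: with far more regions than threads, discharge requests wait, but the server no longer pays one thread per region.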



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15018) Inconsistent way of handling TimeoutException in the rpc client implementations

2015-12-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15070861#comment-15070861
 ] 

Hadoop QA commented on HBASE-15018:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12779400/HBASE-15018-v1.patch
  against master branch at commit 6fc2596ab37614fe35ccfebda0564e4721bd4b95.
  ATTACHMENT ID: 12779400

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
new checkstyle errors. Check build console for list of new errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:red}-1 core tests{color}.  The patch failed these unit tests:
 

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17012//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17012//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17012//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17012//console

This message is automatically generated.

> Inconsistent way of handling TimeoutException in the rpc client implementations
> --
>
> Key: HBASE-15018
> URL: https://issues.apache.org/jira/browse/HBASE-15018
> Project: HBase
>  Issue Type: Bug
>  Components: Client, IPC/RPC
>Affects Versions: 2.0.0, 1.1.0, 1.2.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3
>
> Attachments: HBASE-15018-v1.patch, HBASE-15018.patch, 
> HBASE-15018.patch
>
>
> If there is any rpc timeout when using RpcClientImpl, we wrap the 
> exception in an IOE and throw it:
> {noformat}
> 2015-11-16 04:05:24,935 WARN [main-EventThread.replicationSource,peer2] 
> regionserver.HBaseInterClusterReplicationEndpoint: Can't replicate because of 
> a local or network error:
> java.io.IOException: Call to host-XX:16040 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=510, 
> waitTime=180001, operationTimeout=18 expired.
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1271)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.replicateWALEntry(AdminProtos.java:25690)
> at org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil.replicateWALEntry(ReplicationProtbufUtil.java:77)
> at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:322)
> at org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint$Replicator.call(HBaseInterClusterReplicationEndpoint.java:308)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: 

[jira] [Commented] (HBASE-15032) hbase shell scan filter string assumes UTF-8 encoding

2015-12-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15070922#comment-15070922
 ] 

Hadoop QA commented on HBASE-15032:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12779415/HBASE-15032-v003.patch
  against master branch at commit 6fc2596ab37614fe35ccfebda0564e4721bd4b95.
  ATTACHMENT ID: 12779415

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}. The applied patch does not generate new 
checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 site{color}.  The mvn post-site goal succeeds with this 
patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 zombies{color}. No zombie tests found running at the end of 
the build.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17015//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17015//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17015//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/17015//console

This message is automatically generated.

> hbase shell scan filter string assumes UTF-8 encoding
> -
>
> Key: HBASE-15032
> URL: https://issues.apache.org/jira/browse/HBASE-15032
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-15032-v001.patch, HBASE-15032-v002.patch, 
> HBASE-15032-v002.patch, HBASE-15032-v003.patch
>
>
> Currently the hbase shell scan filter string is assumed to be UTF-8 encoded, 
> which makes the following scan not work.
> hbase(main):011:0> scan 't1'
> ROW                COLUMN+CELL
>  r4                column=cf1:q1, timestamp=1450812398741, value=\x82
> hbase(main):003:0> scan 't1', {FILTER => "SingleColumnValueFilter ('cf1', 
> 'q1', >=, 'binary:\x80', true, true)"}
> ROW                COLUMN+CELL
> 0 row(s) in 0.0130 seconds
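
The failure is easy to reproduce outside the shell: a lone \x80 byte is not a valid UTF-8 sequence, so forcing the filter string through a UTF-8 round trip mangles the binary comparator value. A small illustration (Python standing in for the shell's string conversion):

```python
raw = b"\x80"  # the binary comparator byte from the filter above

# A UTF-8 assumption amounts to this decode, which fails outright:
# 0x80 is a continuation byte and cannot start a UTF-8 sequence.
try:
    raw.decode("utf-8")
    utf8_ok = True
except UnicodeDecodeError:
    utf8_ok = False

# An encoding that maps bytes 1:1 to code points (e.g. ISO-8859-1)
# lets the original byte survive the round trip unchanged.
assert not utf8_ok
assert "\x80".encode("latin-1") == raw
```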



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14940) Make our unsafe based ops more safe

2015-12-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15070949#comment-15070949
 ] 

Hudson commented on HBASE-14940:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #1150 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1150/])
HBASE-14940 Make our unsafe based ops more safe. (anoopsamjohn: rev 
f39c41ffe56ae6dab80397651eb97fc5de8a9ab3)
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAccess.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FuzzyRowFilter.java


> Make our unsafe based ops more safe
> ---
>
> Key: HBASE-14940
> URL: https://issues.apache.org/jira/browse/HBASE-14940
> Project: HBase
>  Issue Type: Bug
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 1.2.0, 1.3.0, 1.1.3, 0.98.17, 1.0.4
>
> Attachments: HBASE-14940.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_branch-1.patch, 
> HBASE-14940_branch-1.patch, HBASE-14940_v2.patch
>
>
> Thanks for the nice findings [~ikeda]
> This jira solves 3 issues with Unsafe operations and ByteBufferUtils
> 1. We can do sun unsafe based reads and writes iff the unsafe package is 
> available and the underlying platform has unaligned-access capability. But 
> we were missing the second check.
> 2. Java NIO does a chunk based copy while doing Unsafe copyMemory. The 
> max chunk size is 1 MB. This is done because "A limit is imposed to allow for 
> safepoint polling during a large copy", as mentioned in the comments in 
> Bits.java. We are going to do it the same way.
> 3. In ByteBufferUtils, when Unsafe is not available and the ByteBuffers are off 
> heap, we were doing byte by byte operations (read/copy). We can avoid this and 
> do better.
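
Point 2 mirrors what java.nio's Bits does: copy in bounded chunks so the copying thread can reach a safepoint between chunks instead of running one unbounded copy. A language-neutral sketch of that loop (Python bytearrays standing in for Unsafe.copyMemory):

```python
CHUNK = 1 << 20  # 1 MB cap, matching the safepoint-polling limit in Bits.java

def chunked_copy(src, dst, length):
    """Copy `length` bytes from src to dst in slices of at most CHUNK,
    so no single copy step runs unboundedly long."""
    offset = 0
    while offset < length:
        n = min(CHUNK, length - offset)
        dst[offset:offset + n] = src[offset:offset + n]
        offset += n

src = bytearray(range(256)) * 16384  # 4 MB of test data -> four 1 MB slices
dst = bytearray(len(src))
chunked_copy(src, dst, len(src))
assert dst == src
```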



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

