[jira] [Updated] (HBASE-16210) Add Timestamp class to the hbase-common and Timestamp type to HTable.

2016-07-16 Thread Sai Teja Ranuva (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sai Teja Ranuva updated HBASE-16210:

Attachment: HBASE-16210.master.8.1.patch

> Add Timestamp class to the hbase-common and Timestamp type to HTable.
> -
>
> Key: HBASE-16210
> URL: https://issues.apache.org/jira/browse/HBASE-16210
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sai Teja Ranuva
>Assignee: Sai Teja Ranuva
>Priority: Minor
>  Labels: patch, testing
> Attachments: HBASE-16210.master.1.patch, HBASE-16210.master.2.patch, 
> HBASE-16210.master.3.patch, HBASE-16210.master.4.patch, 
> HBASE-16210.master.5.patch, HBASE-16210.master.6.patch, 
> HBASE-16210.master.7.patch, HBASE-16210.master.8.1.patch, 
> HBASE-16210.master.8.patch
>
>
> This is a sub-issue of 
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is 
> a small step towards fully adding Hybrid Logical Clocks (HLC) to HBase. 
> The main idea of HLC is described in 
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with 
> the motivation for adding it to HBase. 
> What is this patch/issue about?
> This issue attempts to add a Timestamp class to hbase-common and a 
> timestamp type to HTable. 
> It is part of the effort to get HLC into HBase. This patch does not 
> interfere with the current workings of HBase.
> Why a Timestamp class?
> The Timestamp class serves as an abstraction to represent time in HBase in 
> 64 bits. 
> It is used only for manipulating the 64 bits of the timestamp and is not 
> concerned with the actual time.
> There are three types of timestamps: System time, Custom, and HLC. Each has 
> methods to manipulate the 64 bits of the timestamp. 
> HTable changes: added a timestamp type property to HTable. This lets HBase 
> work with the existing style of timestamp alongside the HLC that will be 
> introduced. The default is the custom timestamp (the current way timestamps 
> are used), and an unset timestamp type also defaults to custom, as it 
> should. The default will be changed to HLC once the HLC feature is fully 
> introduced in HBase.
> Check HBASE-16210.master.6.patch.
> Suggestions are welcome.
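> For illustration, a minimal sketch of the shape such an abstraction could 
> take. The class names and the physical/logical bit split below are 
> assumptions for illustration only; the actual layout is defined by the 
> attached patches:
> {code}
> /** Illustrative only: a 64-bit timestamp abstraction. */
> public abstract class Timestamp {
>   protected final long bits; // the raw 64 bits being manipulated
> 
>   protected Timestamp(long bits) { this.bits = bits; }
> 
>   public long getBits() { return bits; }
> }
> 
> /** Assumed split: high bits = physical clock (ms), low 20 bits = logical counter. */
> class HybridTimestamp extends Timestamp {
>   static final int LOGICAL_BITS = 20;
>   static final long LOGICAL_MASK = (1L << LOGICAL_BITS) - 1;
> 
>   HybridTimestamp(long physicalMillis, long logical) {
>     super((physicalMillis << LOGICAL_BITS) | (logical & LOGICAL_MASK));
>   }
> 
>   long getPhysicalTime() { return bits >>> LOGICAL_BITS; }
>   long getLogicalTime() { return bits & LOGICAL_MASK; }
> }
> {code}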



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16210) Add Timestamp class to the hbase-common and Timestamp type to HTable.

2016-07-16 Thread Sai Teja Ranuva (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sai Teja Ranuva updated HBASE-16210:

Status: Patch Available  (was: Open)

> Add Timestamp class to the hbase-common and Timestamp type to HTable.
> -
>
> Key: HBASE-16210
> URL: https://issues.apache.org/jira/browse/HBASE-16210
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sai Teja Ranuva
>Assignee: Sai Teja Ranuva
>Priority: Minor
>  Labels: patch, testing
> Attachments: HBASE-16210.master.1.patch, HBASE-16210.master.2.patch, 
> HBASE-16210.master.3.patch, HBASE-16210.master.4.patch, 
> HBASE-16210.master.5.patch, HBASE-16210.master.6.patch, 
> HBASE-16210.master.7.patch, HBASE-16210.master.8.1.patch, 
> HBASE-16210.master.8.patch
>
>
> This is a sub-issue of 
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is 
> a small step towards fully adding Hybrid Logical Clocks (HLC) to HBase. 
> The main idea of HLC is described in 
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with 
> the motivation for adding it to HBase. 
> What is this patch/issue about?
> This issue attempts to add a Timestamp class to hbase-common and a 
> timestamp type to HTable. 
> It is part of the effort to get HLC into HBase. This patch does not 
> interfere with the current workings of HBase.
> Why a Timestamp class?
> The Timestamp class serves as an abstraction to represent time in HBase in 
> 64 bits. 
> It is used only for manipulating the 64 bits of the timestamp and is not 
> concerned with the actual time.
> There are three types of timestamps: System time, Custom, and HLC. Each has 
> methods to manipulate the 64 bits of the timestamp. 
> HTable changes: added a timestamp type property to HTable. This lets HBase 
> work with the existing style of timestamp alongside the HLC that will be 
> introduced. The default is the custom timestamp (the current way timestamps 
> are used), and an unset timestamp type also defaults to custom, as it 
> should. The default will be changed to HLC once the HLC feature is fully 
> introduced in HBase.
> Check HBASE-16210.master.6.patch.
> Suggestions are welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16210) Add Timestamp class to the hbase-common and Timestamp type to HTable.

2016-07-16 Thread Sai Teja Ranuva (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sai Teja Ranuva updated HBASE-16210:

Status: Open  (was: Patch Available)

> Add Timestamp class to the hbase-common and Timestamp type to HTable.
> -
>
> Key: HBASE-16210
> URL: https://issues.apache.org/jira/browse/HBASE-16210
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sai Teja Ranuva
>Assignee: Sai Teja Ranuva
>Priority: Minor
>  Labels: patch, testing
> Attachments: HBASE-16210.master.1.patch, HBASE-16210.master.2.patch, 
> HBASE-16210.master.3.patch, HBASE-16210.master.4.patch, 
> HBASE-16210.master.5.patch, HBASE-16210.master.6.patch, 
> HBASE-16210.master.7.patch, HBASE-16210.master.8.1.patch, 
> HBASE-16210.master.8.patch
>
>
> This is a sub-issue of 
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is 
> a small step towards fully adding Hybrid Logical Clocks (HLC) to HBase. 
> The main idea of HLC is described in 
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with 
> the motivation for adding it to HBase. 
> What is this patch/issue about?
> This issue attempts to add a Timestamp class to hbase-common and a 
> timestamp type to HTable. 
> It is part of the effort to get HLC into HBase. This patch does not 
> interfere with the current workings of HBase.
> Why a Timestamp class?
> The Timestamp class serves as an abstraction to represent time in HBase in 
> 64 bits. 
> It is used only for manipulating the 64 bits of the timestamp and is not 
> concerned with the actual time.
> There are three types of timestamps: System time, Custom, and HLC. Each has 
> methods to manipulate the 64 bits of the timestamp. 
> HTable changes: added a timestamp type property to HTable. This lets HBase 
> work with the existing style of timestamp alongside the HLC that will be 
> introduced. The default is the custom timestamp (the current way timestamps 
> are used), and an unset timestamp type also defaults to custom, as it 
> should. The default will be changed to HLC once the HLC feature is fully 
> introduced in HBase.
> Check HBASE-16210.master.6.patch.
> Suggestions are welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14881) Provide a Put API that uses the provided row without copying

2016-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15381102#comment-15381102
 ] 

Hadoop QA commented on HBASE-14881:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
54s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 1s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
7s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 43s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818430/HBASE-14881.master.001.patch
 |
| JIRA Issue | HBASE-14881 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 9bc7ecf |
| Default Java | 1.7.0_80 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/jenkins-slave/tools/hudson.model.JDK/JDK_1.7_latest_:1.7.0_80 |
| findbugs | v3.0.0 |
|  Test Results | 

[jira] [Commented] (HBASE-16239) Better logging for RPC related exceptions

2016-07-16 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15381083#comment-15381083
 ] 

Jerry He commented on HBASE-16239:
--

HBASE-16149 was trying to do something similar.  It is in branch-1 and master.  
But you can do more on top of it.

> Better logging for RPC related exceptions
> -
>
> Key: HBASE-16239
> URL: https://issues.apache.org/jira/browse/HBASE-16239
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Attachments: hbase-16239_v1.patch
>
>
> On many occasions we have to debug RPC-related issues, but it is hard in AP 
> + RetryingRpcCaller since we mask the stack traces until all retries have 
> been exhausted (which takes 10 minutes by default).
>  
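> A hypothetical sketch of the direction such a change could take (the names, 
> helpers, and log wording below are assumptions, not the attached patch): 
> surface each attempt's failure at DEBUG instead of holding it until retries 
> are exhausted.
> {code}
> // Inside a retrying caller loop (illustrative only):
> for (int tries = 0; tries < maxAttempts; tries++) {
>   try {
>     return callable.call(operationTimeoutMs);
>   } catch (IOException e) {
>     if (LOG.isDebugEnabled()) {
>       LOG.debug("Call failed on attempt " + (tries + 1) + "/" + maxAttempts
>           + ", retrying", e); // log the cause per retry, not only at the end
>     }
>     sleepBeforeRetry(tries); // hypothetical backoff helper
>   }
> }
> // After the loop: throw RetriesExhaustedException with the accumulated causes.
> {code}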



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14881) Provide a Put API that uses the provided row without copying

2016-07-16 Thread Xiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-14881:
-
Attachment: HBASE-14881.master.001.patch

Uploaded patch 001 with some comments corrected/improved; no logic change from v0.

> Provide a Put API that uses the provided row without copying
> ---
>
> Key: HBASE-14881
> URL: https://issues.apache.org/jira/browse/HBASE-14881
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Jerry He
>Assignee: Xiang Li
> Fix For: 2.0.0
>
> Attachments: HBASE-14881-master-v0.patch, HBASE-14881.master.001.patch
>
>
> The currently available Put API always makes a copy of the rowkey.
> Let's provide an API that accepts an immutable byte array as the rowkey 
> without making a copy.
> There are cases where the caller of Put has created the immutable byte 
> array (e.g. from a serializer) and will not change it for the duration of 
> the Put. We can avoid making another copy.
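> A hypothetical sketch of what a no-copy entry point could look like; the 
> constructor flag below is an illustrative assumption, and the actual API 
> shape is whatever the attached patches define:
> {code}
> // Illustrative only: opt out of the defensive rowkey copy.
> public Put(byte[] row, long ts, boolean copyRow) {
>   this.row = copyRow ? Arrays.copyOf(row, row.length) : row;
>   this.ts = ts;
> }
> 
> // Caller guarantees it will not mutate rowBytes for the Put's lifetime:
> Put p = new Put(rowBytes, HConstants.LATEST_TIMESTAMP, false);
> {code}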



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14881) Provide a Put API that uses the provided row without copying

2016-07-16 Thread Xiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-14881:
-
Fix Version/s: 2.0.0
Affects Version/s: 1.2.0
   Status: Patch Available  (was: Open)

> Provide a Put API that uses the provided row without copying
> ---
>
> Key: HBASE-14881
> URL: https://issues.apache.org/jira/browse/HBASE-14881
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Jerry He
>Assignee: Xiang Li
> Fix For: 2.0.0
>
> Attachments: HBASE-14881-master-v0.patch, HBASE-14881.master.001.patch
>
>
> The currently available Put API always makes a copy of the rowkey.
> Let's provide an API that accepts an immutable byte array as the rowkey 
> without making a copy.
> There are cases where the caller of Put has created the immutable byte 
> array (e.g. from a serializer) and will not change it for the duration of 
> the Put. We can avoid making another copy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15560) TinyLFU-based BlockCache

2016-07-16 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15381075#comment-15381075
 ] 

Ben Manes commented on HBASE-15560:
---

Can we merge this in?

> TinyLFU-based BlockCache
> 
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Reporter: Ben Manes
> Attachments: tinylfu.patch
>
>
> LruBlockCache uses the Segmented LRU (SLRU) policy to capture frequency and 
> recency of the working set. It achieves concurrency by using an O(n) 
> background thread to prioritize the entries and evict. Accessing an entry 
> is O(1) via a hash table lookup, recording its logical access time, and 
> setting a frequency flag. A write is performed in O(1) time by updating the 
> hash table and triggering an async eviction thread. This provides ideal 
> concurrency and minimizes the latencies by penalizing the thread instead of 
> the caller. However, the policy does not age the frequencies and may not be 
> resilient to various workload patterns.
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the 
> frequency in a counting sketch, ages periodically by halving the counters, 
> and orders entries by SLRU. An entry is discarded by comparing the frequency 
> of the new arrival (candidate) to the SLRU's victim, and keeping the one 
> with the higher frequency. This allows the operations to be performed in 
> O(1) time and, through the use of a compact sketch, a much larger history 
> is retained beyond the current working set. In a variety of real-world 
> traces the policy had [near-optimal hit 
> rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
> Concurrency is achieved by buffering and replaying the operations, similar 
> to a write-ahead log. A read is recorded into a striped ring buffer and a 
> write into a queue. The operations are applied in batches under a try-lock 
> by an asynchronous thread, thereby tracking the usage pattern without 
> incurring high latencies 
> ([benchmarks|https://github.com/ben-manes/caffeine/wiki/Benchmarks#server-class]).
> In YCSB benchmarks the results were inconclusive. For a large cache (99% 
> hit rates) the two caches have near-identical throughput and latencies, 
> with LruBlockCache narrowly winning. At medium and small cache sizes, 
> TinyLFU had a 1-4% hit rate improvement and therefore lower latencies. The 
> lackluster result is because a synthetic Zipfian distribution is used, 
> which SLRU handles optimally. In a more varied, real-world workload we'd 
> expect to see improvements from being able to make smarter predictions.
> The provided patch implements BlockCache using the 
> [Caffeine|https://github.com/ben-manes/caffeine] caching library (see 
> HighScalability 
> [article|http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html]).
> Edward Bortnikov and Eshcar Hillel have graciously provided guidance for 
> evaluating this patch ([github 
> branch|https://github.com/ben-manes/hbase/tree/tinylfu]).
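> For reference, a minimal sketch of building a Caffeine cache the way a 
> block cache wrapper might. The key/value types, weigher, and eviction 
> callback below are illustrative assumptions, not the patch's actual wiring:
> {code}
> Cache<BlockCacheKey, Cacheable> cache = Caffeine.newBuilder()
>     .maximumWeight(maxSizeBytes) // bound by bytes, not entry count
>     .weigher((BlockCacheKey k, Cacheable v) -> v.getSerializedLength())
>     .removalListener((k, v, cause) -> onEviction(k, v, cause)) // hypothetical callback
>     .build();
> {code}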



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14881) Provide a Put API that uses the provided row without copying

2016-07-16 Thread Xiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15381072#comment-15381072
 ] 

Xiang Li commented on HBASE-14881:
--

I think this enhancement does not apply to {{Put(ByteBuffer row)}} or 
{{Put(ByteBuffer row, long ts)}} either, because this.row takes the remaining 
part of the byte array which backs the ByteBuffer, right?

> Provide a Put API that uses the provided row without copying
> ---
>
> Key: HBASE-14881
> URL: https://issues.apache.org/jira/browse/HBASE-14881
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Xiang Li
> Attachments: HBASE-14881-master-v0.patch
>
>
> The currently available Put API always makes a copy of the rowkey.
> Let's provide an API that accepts an immutable byte array as the rowkey 
> without making a copy.
> There are cases where the caller of Put has created the immutable byte 
> array (e.g. from a serializer) and will not change it for the duration of 
> the Put. We can avoid making another copy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16235) TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too many hfiles

2016-07-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15381069#comment-15381069
 ] 

Hudson commented on HBASE-16235:


FAILURE: Integrated in HBase-Trunk_matrix #1242 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1242/])
HBASE-16235 Addendum uses hfile count of 20 (ChiaPing) (tedyu: rev 
9bc7ecfb9dec6bfe14a12b6d3bfd11392d7752b8)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestSnapshotFromMaster.java


> TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too 
> many hfiles
> 
>
> Key: HBASE-16235
> URL: https://issues.apache.org/jira/browse/HBASE-16235
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16235.addendum, HBASE-16235-v1.patch, 
> HBASE-16235-v2.patch, HBASE-16235-v3.patch, hbase-16235.master.v4.patch
>
>
> TestSnapshotFromMaster#testSnapshotHFileArchiving assumes that all hfiles 
> will be compacted and moved to the "archive folder" after cleaning, but not 
> all hfiles will be compacted if there is a large number of hfiles.
> The above may happen if the default config is changed, e.g. with a smaller 
> write buffer (hbase.client.write.buffer) or ExponentialClientBackoffPolicy.
> {code:title=TestSnapshotFromMaster.java|borderStyle=solid}
> // it should also check the hfiles in the normal path (/hbase/data/default/...)
> public void testSnapshotHFileArchiving() throws Exception {
>   // ...
>   // get the archived files for the table
>   Collection<String> files = getArchivedHFiles(archiveDir, rootDir, fs, TABLE_NAME);
>   // and make sure that there is a proper subset
>   for (String fileName : snapshotHFiles) {
>     assertTrue("Archived hfiles " + files + " is missing snapshot file:" + fileName,
>       files.contains(fileName));
>   }
>   // ...
> }
> {code}
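> As an illustration of the trigger, a client-side setting like the following 
> (the value is chosen arbitrarily) produces many small flushes and hence 
> many hfiles, which is the situation the test does not tolerate:
> {code}
> Configuration conf = HBaseConfiguration.create();
> // A smaller client write buffer flushes more often, producing many small
> // hfiles on the server side over time.
> conf.setLong("hbase.client.write.buffer", 16 * 1024); // illustrative value
> {code}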



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16172) Unify the retry logic in ScannerCallableWithReplicas and RpcRetryingCallerWithReadReplicas

2016-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15381041#comment-15381041
 ] 

Hadoop QA commented on HBASE-16172:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 37s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
54s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
54s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 0s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 93m 7s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
30s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 142m 7s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818428/16172.v4.txt |
| JIRA Issue | HBASE-16172 |
| Optional Tests |  

[jira] [Commented] (HBASE-16076) Cannot configure split policy in HBase shell

2016-07-16 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15381038#comment-15381038
 ] 

Enis Soztutar commented on HBASE-16076:
---

+1. We recently ran into this and we were able to use the documented syntax. 

> Cannot configure split policy in HBase shell
> 
>
> Key: HBASE-16076
> URL: https://issues.apache.org/jira/browse/HBASE-16076
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Youngjoon Kim
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16076.patch, HBASE-16076_v1.patch
>
>
> The reference guide explains how to configure the split policy in the HBase 
> shell ([link|http://hbase.apache.org/book.html#_custom_split_policies]).
> {noformat}
> Configuring the Split Policy On a Table Using HBase Shell
> hbase> create 'test', {METHOD => 'table_att', CONFIG => {'SPLIT_POLICY' => 
> 'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy'}},
> {NAME => 'cf1'}
> {noformat}
> But if you run that command, the shell complains 'An argument ignored 
> (unknown or overridden): CONFIG', and the table description has no split 
> policy.
> {noformat}
> hbase(main):067:0* create 'test', {METHOD => 'table_att', CONFIG => 
> {'SPLIT_POLICY' => 
> 'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy'}}, {NAME 
> => 'cf1'}
> An argument ignored (unknown or overridden): CONFIG
> Created table test
> Took 1.2180 seconds
> hbase(main):068:0> describe 'test'
> Table test is ENABLED
> test
> COLUMN FAMILIES DESCRIPTION
> {NAME => 'cf1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', 
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 
> 'FOREVER', MIN_VERSIONS => '0', IN_MEMORY_COMPACTION => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => '
> false', BLOCKCACHE => 'true'}
> 1 row(s)
> Took 0.0200 seconds
> {noformat}
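> Until the documented shell syntax is sorted out, the split policy can also 
> be set through the Java client API; a minimal sketch (the table and policy 
> names mirror the example above, and {{admin}} is assumed to be an existing 
> {{org.apache.hadoop.hbase.client.Admin}}):
> {code}
> HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("test"));
> htd.addFamily(new HColumnDescriptor("cf1"));
> htd.setRegionSplitPolicyClassName(
>     "org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy");
> admin.createTable(htd);
> {code}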



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16235) TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too many hfiles

2016-07-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15381035#comment-15381035
 ] 

Hudson commented on HBASE-16235:


FAILURE: Integrated in HBase-1.4 #289 (See 
[https://builds.apache.org/job/HBase-1.4/289/])
HBASE-16235 TestSnapshotFromMaster#testSnapshotHFileArchiving will fail (tedyu: 
rev 40204eb0c1eb8c83700f8d27d9ac63c518c71a60)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestSnapshotFromMaster.java


> TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too 
> many hfiles
> 
>
> Key: HBASE-16235
> URL: https://issues.apache.org/jira/browse/HBASE-16235
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16235.addendum, HBASE-16235-v1.patch, 
> HBASE-16235-v2.patch, HBASE-16235-v3.patch, hbase-16235.master.v4.patch
>
>
> TestSnapshotFromMaster#testSnapshotHFileArchiving assumes that all hfiles 
> will be compacted and moved to the "archive folder" after cleaning, but not 
> all hfiles will be compacted if there is a large number of hfiles.
> The above may happen if the default config is changed, e.g. with a smaller 
> write buffer (hbase.client.write.buffer) or ExponentialClientBackoffPolicy.
> {code:title=TestSnapshotFromMaster.java|borderStyle=solid}
> // it should also check the hfiles in the normal path (/hbase/data/default/...)
> public void testSnapshotHFileArchiving() throws Exception {
>   // ...
>   // get the archived files for the table
>   Collection<String> files = getArchivedHFiles(archiveDir, rootDir, fs, TABLE_NAME);
>   // and make sure that there is a proper subset
>   for (String fileName : snapshotHFiles) {
>     assertTrue("Archived hfiles " + files + " is missing snapshot file:" + fileName,
>       files.contains(fileName));
>   }
>   // ...
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16172) Unify the retry logic in ScannerCallableWithReplicas and RpcRetryingCallerWithReadReplicas

2016-07-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16172:
---
Attachment: 16172.v4.txt

> Unify the retry logic in ScannerCallableWithReplicas and 
> RpcRetryingCallerWithReadReplicas
> --
>
> Key: HBASE-16172
> URL: https://issues.apache.org/jira/browse/HBASE-16172
> Project: HBase
>  Issue Type: Bug
>Reporter: Yu Li
>Assignee: Ted Yu
> Attachments: 16172.branch-1.v4.txt, 16172.v1.txt, 16172.v2.txt, 
> 16172.v2.txt, 16172.v3.txt, 16172.v4.txt, 16172.v4.txt, 16172.v4.txt
>
>
> The issue was pointed out by [~devaraj] in HBASE-16132 (thanks D.D.): in 
> {{RpcRetryingCallerWithReadReplicas#call}} we call 
> {{ResultBoundedCompletionService#take}} instead of {{poll}}, dead-waiting 
> on the second replica if the first one timed out, while in 
> {{ScannerCallableWithReplicas#call}} we still use 
> {{ResultBoundedCompletionService#poll}} with some timeout for the 2nd 
> replica.
> This JIRA aims to discuss whether to unify the logic in these two kinds of 
> replica-aware callers, and to take action if necessary.
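> To illustrate the take/poll difference in plain {{java.util.concurrent}} 
> terms (a generic sketch, not the HBase code; {{pool}}, {{primaryCall}}, 
> {{replicaCall}}, and {{replicaTimeoutMs}} are assumed to exist):
> {code}
> CompletionService<Result> cs = new ExecutorCompletionService<>(pool);
> cs.submit(primaryCall);
> cs.submit(replicaCall);
> 
> // take(): blocks indefinitely until some call completes ("dead-wait").
> Future<Result> first = cs.take();
> 
> // poll(timeout): gives up after the timeout so the caller can react.
> Future<Result> maybe = cs.poll(replicaTimeoutMs, TimeUnit.MILLISECONDS);
> {code}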



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16235) TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too many hfiles

2016-07-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16235:
---
   Resolution: Fixed
Fix Version/s: 1.4.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Chiaping.

> TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too 
> many hfiles
> 
>
> Key: HBASE-16235
> URL: https://issues.apache.org/jira/browse/HBASE-16235
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16235.addendum, HBASE-16235-v1.patch, 
> HBASE-16235-v2.patch, HBASE-16235-v3.patch, hbase-16235.master.v4.patch
>
>
> TestSnapshotFromMaster#testSnapshotHFileArchiving assumes that all hfiles 
> will be compacted and moved to the "archive folder" after cleaning, but not 
> all hfiles will be compacted if there is a large number of hfiles.
> The above may happen if the default config is changed, e.g. with a smaller 
> write buffer (hbase.client.write.buffer) or ExponentialClientBackoffPolicy.
> {code:title=TestSnapshotFromMaster.java|borderStyle=solid}
> // it should also check the hfiles in the normal path (/hbase/data/default/...)
> public void testSnapshotHFileArchiving() throws Exception {
>   // ...
>   // get the archived files for the table
>   Collection<String> files = getArchivedHFiles(archiveDir, rootDir, fs, TABLE_NAME);
>   // and make sure that there is a proper subset
>   for (String fileName : snapshotHFiles) {
>     assertTrue("Archived hfiles " + files + " is missing snapshot file:" + fileName,
>       files.contains(fileName));
>   }
>   // ...
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16239) Better logging for RPC related exceptions

2016-07-16 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-16239:
--
Attachment: hbase-16239_v1.patch

Patch against an older version. 

> Better logging for RPC related exceptions
> -
>
> Key: HBASE-16239
> URL: https://issues.apache.org/jira/browse/HBASE-16239
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Attachments: hbase-16239_v1.patch
>
>
> On many occasions we have to debug RPC-related issues, but it is hard in AP 
> + RetryingRpcCaller since we mask the stack traces until all retries have 
> been exhausted (which takes 10 minutes by default).
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16239) Better logging for RPC related exceptions

2016-07-16 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-16239:
-

 Summary: Better logging for RPC related exceptions
 Key: HBASE-16239
 URL: https://issues.apache.org/jira/browse/HBASE-16239
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar


On many occasions we have to debug RPC-related issues, but it is hard in AP + 
RetryingRpcCaller since we mask the stack traces until all retries have been 
exhausted (which takes 10 minutes by default).

 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16235) TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too many hfiles

2016-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380875#comment-15380875
 ] 

Hadoop QA commented on HBASE-16235:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.2.1/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 4s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 17s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 133m 48s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.replication.TestMasterReplication |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818405/16235.addendum |
| JIRA Issue | HBASE-16235 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HBASE-16117) Fix Connection leak in mapred.TableOutputFormat

2016-07-16 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-16117:
-
Fix Version/s: (was: 1.1.6)
   1.1.7

> Fix Connection leak in mapred.TableOutputFormat 
> 
>
> Key: HBASE-16117
> URL: https://issues.apache.org/jira/browse/HBASE-16117
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 2.0.0, 1.3.0, 1.2.2, 1.1.6
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Fix For: 2.0.0, 1.3.1, 1.2.3, 1.1.7
>
> Attachments: HBASE-16117.branch-1.001.patch, 
> hbase-16117.branch-1.patch, hbase-16117.patch, hbase-16117.v2.branch-1.patch, 
> hbase-16117.v2.patch, hbase-16117.v3.branch-1.patch, hbase-16117.v3.patch, 
> hbase-16117.v4.patch
>
>
> Spark seems to instantiate multiple output format instances within a single 
> process. When mapred.TableOutputFormat (not mapreduce.TableOutputFormat) is 
> used, this may cause connection leaks that slowly exhaust the cluster's ZK 
> connections.
> This patch fixes that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15984) Given failure to parse a given WAL that was closed cleanly, replay the WAL.

2016-07-16 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-15984:
-
Fix Version/s: (was: 1.1.6)
   1.1.7

> Given failure to parse a given WAL that was closed cleanly, replay the WAL.
> ---
>
> Key: HBASE-15984
> URL: https://issues.apache.org/jira/browse/HBASE-15984
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.0.4, 1.4.0, 1.3.1, 0.98.21, 1.2.3, 1.1.7
>
> Attachments: HBASE-15984.1.patch
>
>
> Subtask for a general workaround for "underlying reader failed / is in a 
> bad state", just for the case where a WAL 1) was closed cleanly and 2) we 
> can tell that our current offset ought not to be the end of parseable 
> entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13587) TestSnapshotCloneIndependence.testOnlineSnapshotDeleteIndependent is flakey

2016-07-16 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13587:
-
Fix Version/s: (was: 1.1.6)
   1.1.7

> TestSnapshotCloneIndependence.testOnlineSnapshotDeleteIndependent is flakey
> ---
>
> Key: HBASE-13587
> URL: https://issues.apache.org/jira/browse/HBASE-13587
> Project: HBase
>  Issue Type: Test
>Reporter: Nick Dimiduk
> Fix For: 2.0.0, 1.3.1, 1.2.3, 1.1.7
>
>
> Looking at our [build 
> history|https://builds.apache.org/job/HBase-1.1/buildTimeTrend], it seems 
> this test is flakey. See builds 428, 431, 432, 433.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13603) Write test asserting desired priority of RS->Master RPCs

2016-07-16 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13603:
-
Fix Version/s: (was: 1.1.6)
   1.1.7

> Write test asserting desired priority of RS->Master RPCs
> 
>
> Key: HBASE-13603
> URL: https://issues.apache.org/jira/browse/HBASE-13603
> Project: HBase
>  Issue Type: Test
>  Components: IPC/RPC, test
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 2.0.0, 1.3.1, 1.2.3, 1.1.7
>
>
> From HBASE-13351:
> {quote}
> Any way we can write a FT test to assert that the RS->Master APIs are treated 
> with higher priority. I see your UT for asserting the annotation.
> {quote}
> Write a test that verifies expected RPCs are run on the correct pools in as 
> real of an environment possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14391) Empty regionserver WAL will never be deleted although the corresponding regionserver has been stale

2016-07-16 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-14391:
-
Fix Version/s: (was: 1.1.6)
   1.1.7

> Empty regionserver WAL will never be deleted although the corresponding 
> regionserver has been stale
> --
>
> Key: HBASE-14391
> URL: https://issues.apache.org/jira/browse/HBASE-14391
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.0.2
>Reporter: Qianxi Zhang
>Assignee: Qianxi Zhang
> Fix For: 2.0.0, 1.3.1, 1.2.3, 1.1.7
>
> Attachments: HBASE-14391-master-v3.patch, 
> HBASE_14391_master_v4.patch, HBASE_14391_trunk_v1.patch, 
> HBASE_14391_trunk_v2.patch, WALs-leftover-dir.txt
>
>
> When I restarted the HBase cluster, in which there was little data, I found 
> two directories for one host with different timestamps, which indicates 
> that the old regionserver WAL directory is not deleted.
> FSHLog#989
> {code}
> @Override
> public void close() throws IOException {
>   shutdown();
>   final FileStatus[] files = getFiles();
>   if (null != files && 0 != files.length) {
>     for (FileStatus file : files) {
>       Path p = getWALArchivePath(this.fullPathArchiveDir, file.getPath());
>       // Tell our listeners that a log is going to be archived.
>       if (!this.listeners.isEmpty()) {
>         for (WALActionsListener i : this.listeners) {
>           i.preLogArchive(file.getPath(), p);
>         }
>       }
>       if (!FSUtils.renameAndSetModifyTime(fs, file.getPath(), p)) {
>         throw new IOException("Unable to rename " + file.getPath() + " to " + p);
>       }
>       // Tell our listeners that a log was archived.
>       if (!this.listeners.isEmpty()) {
>         for (WALActionsListener i : this.listeners) {
>           i.postLogArchive(file.getPath(), p);
>         }
>       }
>     }
>     LOG.debug("Moved " + files.length + " WAL file(s) to " +
>       FSUtils.getPath(this.fullPathArchiveDir));
>   }
>   LOG.info("Closed WAL: " + toString());
> }
> {code}
> When the regionserver is stopped, the hlog is archived, so the WAL 
> directory for that regionserver is empty in HDFS.
> MasterFileSystem#252
> {code}
> if (curLogFiles == null || curLogFiles.length == 0) {
>   // Empty log folder. No recovery needed
>   continue;
> }
> {code}
> The regionserver directory will not be split, which makes sense, but it 
> will also not be deleted.
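> A sketch of the kind of fix this suggests (hypothetical; {{fs}}, 
> {{logDir}}, and {{LOG}} are assumed from the surrounding MasterFileSystem 
> context, and this is not the final patch):
> {code}
> if (curLogFiles == null || curLogFiles.length == 0) {
>   // Empty log folder. No recovery needed, but remove the stale directory
>   // so it does not accumulate for dead servers.
>   if (!fs.delete(logDir, true)) {
>     LOG.warn("Unable to delete empty WAL dir " + logDir);
>   }
>   continue;
> }
> {code}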



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11819) Unit test for CoprocessorHConnection

2016-07-16 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-11819:
-
Fix Version/s: (was: 1.1.6)
   1.1.7

> Unit test for CoprocessorHConnection 
> -
>
> Key: HBASE-11819
> URL: https://issues.apache.org/jira/browse/HBASE-11819
> Project: HBase
>  Issue Type: Test
>Reporter: Andrew Purtell
>Assignee: Talat UYARER
>Priority: Minor
>  Labels: beginner
> Fix For: 2.0.0, 0.98.14, 1.3.1, 1.2.3, 1.1.7
>
> Attachments: HBASE-11819v4-master.patch, HBASE-11819v5-0.98 
> (1).patch, HBASE-11819v5-0.98.patch, HBASE-11819v5-master (1).patch, 
> HBASE-11819v5-master.patch, HBASE-11819v5-master.patch, 
> HBASE-11819v5-v0.98.patch, HBASE-11819v5-v1.0.patch, 
> HBASE-11819v6-branch-1.patch, HBASE-11819v6-master.patch
>
>
> Add a unit test to hbase-server that exercises CoprocessorHConnection . 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14223) Meta WALs are not cleared if meta region was closed and RS aborts

2016-07-16 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-14223:
-
Fix Version/s: (was: 1.1.6)
   1.1.7

> Meta WALs are not cleared if meta region was closed and RS aborts
> -
>
> Key: HBASE-14223
> URL: https://issues.apache.org/jira/browse/HBASE-14223
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.0.4, 1.3.1, 1.2.3, 1.1.7
>
> Attachments: HBASE-14223logs, hbase-14223_v0.patch, 
> hbase-14223_v1-branch-1.patch, hbase-14223_v2-branch-1.patch, 
> hbase-14223_v3-branch-1.patch, hbase-14223_v3-branch-1.patch, 
> hbase-14223_v3-master.patch
>
>
> When an RS opens meta and later closes it, the WAL (FSHLog) is not closed. 
> The last WAL file just sits there in the RS WAL directory. If the RS stops 
> gracefully, the WAL file for meta is deleted. Otherwise, if the RS aborts, 
> the WAL for meta is not cleaned up. It is also not split (which is correct) 
> since the master determines that the RS no longer hosts meta at the time of 
> the RS abort. 
> On a cluster after running ITBLL with CM, I see a lot of {{-splitting}} 
> directories left uncleaned: 
> {code}
> [root@os-enis-dal-test-jun-4-7 cluster-os]# sudo -u hdfs hadoop fs -ls 
> /apps/hbase/data/WALs
> Found 31 items
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 01:14 
> /apps/hbase/data/WALs/hregion-58203265
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 07:54 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433489308745-splitting
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 09:28 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433494382959-splitting
> drwxr-xr-x   - hbase hadoop  0 2015-06-05 10:01 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-1.openstacklocal,16020,1433498252205-splitting
> ...
> {code}
> The directories contain WALs from meta: 
> {code}
> [root@os-enis-dal-test-jun-4-7 cluster-os]# sudo -u hdfs hadoop fs -ls 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting
> Found 2 items
> -rw-r--r--   3 hbase hadoop 201608 2015-06-05 03:15 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433470511501.meta
> -rw-r--r--   3 hbase hadoop  44420 2015-06-05 04:36 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433474111645.meta
> {code}
> The RS hosted the meta region for some time: 
> {code}
> 2015-06-05 03:14:28,692 INFO  [PostOpenDeployTasks:1588230740] 
> zookeeper.MetaTableLocator: Setting hbase:meta region location in ZooKeeper 
> as os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285
> ...
> 2015-06-05 03:15:17,302 INFO  
> [RS_CLOSE_META-os-enis-dal-test-jun-4-5:16020-0] regionserver.HRegion: Closed 
> hbase:meta,,1.1588230740
> {code}
> In between, a WAL is created: 
> {code}
> 2015-06-05 03:15:11,707 INFO  
> [RS_OPEN_META-os-enis-dal-test-jun-4-5:16020-0-MetaLogRoller] wal.FSHLog: 
> Rolled WAL 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433470511501.meta
>  with entries=385, filesize=196.88 KB; new WAL 
> /apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285..meta.1433474111645.meta
> {code}
> When CM killed the region server later master did not see these WAL files: 
> {code}
> ./hbase-hbase-master-os-enis-dal-test-jun-4-3.log:2015-06-05 03:36:46,075 
> INFO  [MASTER_SERVER_OPERATIONS-os-enis-dal-test-jun-4-3:16000-0] 
> master.SplitLogManager: started splitting 2 logs in 
> [hdfs://os-enis-dal-test-jun-4-1.openstacklocal:8020/apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting]
>  for [os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285]
> ./hbase-hbase-master-os-enis-dal-test-jun-4-3.log:2015-06-05 03:36:47,300 
> INFO  [main-EventThread] wal.WALSplitter: Archived processed log 
> hdfs://os-enis-dal-test-jun-4-1.openstacklocal:8020/apps/hbase/data/WALs/os-enis-dal-test-jun-4-5.openstacklocal,16020,1433466904285-splitting/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285.default.1433475074436
>  to 
> hdfs://os-enis-dal-test-jun-4-1.openstacklocal:8020/apps/hbase/data/oldWALs/os-enis-dal-test-jun-4-5.openstacklocal%2C16020%2C1433466904285.default.1433475074436
> ./hbase-hbase-master-os-enis-dal-test-jun-4-3.log:2015-06-05 03:36:50,497 
> INFO  [main-EventThread] 

[jira] [Updated] (HBASE-15033) Backport test-patch.sh and zombie-detector.sh from master to branch-1.1

2016-07-16 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-15033:
-
Fix Version/s: (was: 1.1.6)
   (was: 1.0.3)
   1.1.7
  Summary: Backport test-patch.sh and zombie-detector.sh from master to 
branch-1.1  (was: Backport test-patch.sh and zombie-detector.sh from master to 
branch-1.0/1.1)

> Backport test-patch.sh and zombie-detector.sh from master to branch-1.1
> ---
>
> Key: HBASE-15033
> URL: https://issues.apache.org/jira/browse/HBASE-15033
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: stack
>Assignee: stack
> Fix For: 1.1.7
>
> Attachments: 15033.patch
>
>
> Backport current test-patch.sh and zombie dettector to branch-1.0+



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14610) IntegrationTestRpcClient from HBASE-14535 is failing with Async RPC client

2016-07-16 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-14610:
-
Fix Version/s: (was: 1.1.6)
   1.1.7

> IntegrationTestRpcClient from HBASE-14535 is failing with Async RPC client
> --
>
> Key: HBASE-14610
> URL: https://issues.apache.org/jira/browse/HBASE-14610
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: Enis Soztutar
> Fix For: 2.0.0, 1.0.4, 1.3.1, 1.2.3, 1.1.7
>
> Attachments: output
>
>
> HBASE-14535 introduces an IT to simulate a running cluster with RPC servers 
> and RPC clients doing requests against the servers. 
> It passes with the sync client, but fails with async client. Probably we need 
> to take a look. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15308) Flakey TestSplitWalDataLoss on branch-1.1

2016-07-16 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-15308:
-
Fix Version/s: (was: 1.1.6)
   1.1.7

> Flakey TestSplitWalDataLoss on branch-1.1
> -
>
> Key: HBASE-15308
> URL: https://issues.apache.org/jira/browse/HBASE-15308
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: Heng Chen
> Fix For: 1.1.7
>
>
> It happens during HBASE-15169 QA test,  see 
> https://builds.apache.org/job/PreCommit-HBASE-Build/628/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0_72.txt
> https://builds.apache.org/job/PreCommit-HBASE-Build/547/artifact/patchprocess/patch-unit-hbase-server-jdk1.8.0_72.txt



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15635) Mean age of Blocks in cache (seconds) on webUI should be greater than zero

2016-07-16 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-15635:
-
Fix Version/s: (was: 1.1.6)
   1.1.7

> Mean age of Blocks in cache (seconds) on webUI should be greater than zero
> --
>
> Key: HBASE-15635
> URL: https://issues.apache.org/jira/browse/HBASE-15635
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.17
>Reporter: Heng Chen
>Assignee: Heng Chen
> Fix For: 2.0.0, 1.4.0, 1.0.5, 1.3.1, 0.98.21, 1.2.3, 1.1.7
>
> Attachments: 7BFFAF68-0807-400C-853F-706B498449E1.png, 
> HBASE-15635.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15580) Tag coprocessor limitedprivate scope to StoreFile.Reader

2016-07-16 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-15580:
-
Fix Version/s: (was: 1.1.6)
   1.1.7

> Tag coprocessor limitedprivate scope to StoreFile.Reader
> 
>
> Key: HBASE-15580
> URL: https://issues.apache.org/jira/browse/HBASE-15580
> Project: HBase
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 2.0.0, 1.0.4, 0.98.21, 1.2.3, 1.1.7
>
> Attachments: HBASE-15580.patch, HBASE-15580_branch-1.0.patch
>
>
> For Phoenix local indexing we need a custom storefile reader 
> constructor (IndexHalfStoreFileReader) to distinguish it from other storefile 
> readers, so we want to mark the StoreFile.Reader scope as 
> InterfaceAudience.LimitedPrivate("Coprocessor")
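
A minimal sketch of the kind of annotation change this asks for; the class body is elided and illustrative only, not the actual patch:

{code:title=StoreFile.java (sketch)|borderStyle=solid}
// Hedged sketch: tag the reader so coprocessor consumers such as
// Phoenix's IndexHalfStoreFileReader may legitimately extend it.
@InterfaceAudience.LimitedPrivate("Coprocessor")
public static class Reader {
  // existing reader implementation unchanged; only the audience
  // annotation is added so the class stops being strictly Private
}
{code}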



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15168) Zombie stomping branch-1.1 edition

2016-07-16 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-15168:
-
Fix Version/s: (was: 1.1.6)
   1.1.7

> Zombie stomping branch-1.1 edition
> --
>
> Key: HBASE-15168
> URL: https://issues.apache.org/jira/browse/HBASE-15168
> Project: HBase
>  Issue Type: Umbrella
>  Components: test
>Affects Versions: 1.1.0
>Reporter: Nick Dimiduk
>Priority: Critical
> Fix For: 1.1.7
>
>
> Let's bring back the work done in HBASE-14420 for branch-1.1 and stabilize our 
> [builds|https://builds.apache.org/job/HBase-1.1-JDK7/]. Hang tickets here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15983) Replication improperly discards data from end-of-wal in some cases.

2016-07-16 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-15983:
-
Fix Version/s: (was: 1.1.6)
   1.1.7

> Replication improperly discards data from end-of-wal in some cases.
> ---
>
> Key: HBASE-15983
> URL: https://issues.apache.org/jira/browse/HBASE-15983
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 0.98.0, 1.0.0, 1.1.0, 1.2.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.0.4, 1.4.0, 1.3.1, 0.98.21, 1.2.3, 1.1.7
>
>
> In some particular deployments, the Replication code believes it has
> reached EOF for a WAL prior to successfully parsing all bytes known to
> exist in a cleanly closed file.
> The root cause is that several distinct problems with a WAL reader are all 
> treated as end-of-file by the code in ReplicationSource that decides whether 
> a given WAL is completed or needs to be retried.
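
To make the failure mode concrete, here is a hedged sketch of how distinct reader errors can collapse into one "done" signal; the method and type names are illustrative assumptions, not the actual ReplicationSource code:

{code:title=Sketch|borderStyle=solid}
// Hedged sketch (illustrative names): if every read failure funnels
// into "WAL finished", a truncated block or bad record is mistaken for
// a clean end-of-file and the remaining edits are silently dropped
// instead of retried.
WAL.Entry entry;
try {
  entry = reader.next();   // hypothetical reader API
} catch (EOFException e) {
  markWalComplete();       // correct only for a cleanly closed file
} catch (IOException e) {
  markWalComplete();       // wrong: this discards unparsed bytes;
                           // it should surface so the WAL is retried
}
{code}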



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16235) TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too many hfiles

2016-07-16 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380830#comment-15380830
 ] 

ChiaPing Tsai commented on HBASE-16235:
---

Thanks for the addendum.

> TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too 
> many hfiles
> 
>
> Key: HBASE-16235
> URL: https://issues.apache.org/jira/browse/HBASE-16235
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: 16235.addendum, HBASE-16235-v1.patch, 
> HBASE-16235-v2.patch, HBASE-16235-v3.patch, hbase-16235.master.v4.patch
>
>
> TestSnapshotFromMaster#testSnapshotHFileArchiving assumes that all hfiles 
> will be compacted and moved to the archive folder after cleaning, but not 
> all hfiles are compacted when there is a large number of them.
> This can happen when the default config is changed, for example a smaller 
> write buffer (hbase.client.write.buffer) or ExponentialClientBackoffPolicy.
> {code:title=TestSnapshotFromMaster.java|borderStyle=solid}
> // it should also check the hfiles in the normal path (/hbase/data/default/...)
> public void testSnapshotHFileArchiving() throws Exception {
>   //...
>   // get the archived files for the table
>   Collection<String> files = getArchivedHFiles(archiveDir, rootDir, fs, TABLE_NAME);
>   // and make sure that there is a proper subset
>   for (String fileName : snapshotHFiles) {
>     assertTrue("Archived hfiles " + files + " is missing snapshot file:" + fileName,
>       files.contains(fileName));
>   }
>   //...
> }
> {code}
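
Following the comment in the quoted test, a hedged sketch of what "also check the hfiles in the normal path" could look like; getAllHFiles is a hypothetical helper, not part of the actual test, and java.util imports are assumed:

{code:title=Sketch (not the actual patch)|borderStyle=solid}
// Hedged sketch: accept a snapshot hfile if it is either archived or
// still live in the table's normal path, since not every hfile is
// guaranteed to be compacted and archived.
Collection<String> archived = getArchivedHFiles(archiveDir, rootDir, fs, TABLE_NAME);
Collection<String> live = getAllHFiles(rootDir, fs, TABLE_NAME); // hypothetical helper
Set<String> known = new HashSet<>(archived);
known.addAll(live);
for (String fileName : snapshotHFiles) {
  assertTrue("hfile " + fileName + " is neither archived nor live",
    known.contains(fileName));
}
{code}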



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16235) TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too many hfiles

2016-07-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16235:
---
Attachment: 16235.addendum

Addendum which uses an hfile count of 20.

> TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too 
> many hfiles
> 
>
> Key: HBASE-16235
> URL: https://issues.apache.org/jira/browse/HBASE-16235
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: 16235.addendum, HBASE-16235-v1.patch, 
> HBASE-16235-v2.patch, HBASE-16235-v3.patch, hbase-16235.master.v4.patch
>
>
> TestSnapshotFromMaster#testSnapshotHFileArchiving assumes that all hfiles 
> will be compacted and moved to the archive folder after cleaning, but not 
> all hfiles are compacted when there is a large number of them.
> This can happen when the default config is changed, for example a smaller 
> write buffer (hbase.client.write.buffer) or ExponentialClientBackoffPolicy.
> {code:title=TestSnapshotFromMaster.java|borderStyle=solid}
> // it should also check the hfiles in the normal path (/hbase/data/default/...)
> public void testSnapshotHFileArchiving() throws Exception {
>   //...
>   // get the archived files for the table
>   Collection<String> files = getArchivedHFiles(archiveDir, rootDir, fs, TABLE_NAME);
>   // and make sure that there is a proper subset
>   for (String fileName : snapshotHFiles) {
>     assertTrue("Archived hfiles " + files + " is missing snapshot file:" + fileName,
>       files.contains(fileName));
>   }
>   //...
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16235) TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too many hfiles

2016-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380813#comment-15380813
 ] 

Hadoop QA commented on HBASE-16235:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 3s {color} 
| {color:red} HBASE-16235 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818399/hbase-16235.master.v4.patch
 |
| JIRA Issue | HBASE-16235 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2658/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too 
> many hfiles
> 
>
> Key: HBASE-16235
> URL: https://issues.apache.org/jira/browse/HBASE-16235
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-16235-v1.patch, HBASE-16235-v2.patch, 
> HBASE-16235-v3.patch, hbase-16235.master.v4.patch
>
>
> TestSnapshotFromMaster#testSnapshotHFileArchiving assumes that all hfiles 
> will be compacted and moved to the archive folder after cleaning, but not 
> all hfiles are compacted when there is a large number of them.
> This can happen when the default config is changed, for example a smaller 
> write buffer (hbase.client.write.buffer) or ExponentialClientBackoffPolicy.
> {code:title=TestSnapshotFromMaster.java|borderStyle=solid}
> // it should also check the hfiles in the normal path (/hbase/data/default/...)
> public void testSnapshotHFileArchiving() throws Exception {
>   //...
>   // get the archived files for the table
>   Collection<String> files = getArchivedHFiles(archiveDir, rootDir, fs, TABLE_NAME);
>   // and make sure that there is a proper subset
>   for (String fileName : snapshotHFiles) {
>     assertTrue("Archived hfiles " + files + " is missing snapshot file:" + fileName,
>       files.contains(fileName));
>   }
>   //...
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16235) TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too many hfiles

2016-07-16 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-16235:
--
Status: Patch Available  (was: Open)

> TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too 
> many hfiles
> 
>
> Key: HBASE-16235
> URL: https://issues.apache.org/jira/browse/HBASE-16235
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-16235-v1.patch, HBASE-16235-v2.patch, 
> HBASE-16235-v3.patch, hbase-16235.master.v4.patch
>
>
> TestSnapshotFromMaster#testSnapshotHFileArchiving assumes that all hfiles 
> will be compacted and moved to the archive folder after cleaning, but not 
> all hfiles are compacted when there is a large number of them.
> This can happen when the default config is changed, for example a smaller 
> write buffer (hbase.client.write.buffer) or ExponentialClientBackoffPolicy.
> {code:title=TestSnapshotFromMaster.java|borderStyle=solid}
> // it should also check the hfiles in the normal path (/hbase/data/default/...)
> public void testSnapshotHFileArchiving() throws Exception {
>   //...
>   // get the archived files for the table
>   Collection<String> files = getArchivedHFiles(archiveDir, rootDir, fs, TABLE_NAME);
>   // and make sure that there is a proper subset
>   for (String fileName : snapshotHFiles) {
>     assertTrue("Archived hfiles " + files + " is missing snapshot file:" + fileName,
>       files.contains(fileName));
>   }
>   //...
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16235) TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too many hfiles

2016-07-16 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-16235:
--
Status: Open  (was: Patch Available)

> TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too 
> many hfiles
> 
>
> Key: HBASE-16235
> URL: https://issues.apache.org/jira/browse/HBASE-16235
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-16235-v1.patch, HBASE-16235-v2.patch, 
> HBASE-16235-v3.patch, hbase-16235.master.v4.patch
>
>
> TestSnapshotFromMaster#testSnapshotHFileArchiving assumes that all hfiles 
> will be compacted and moved to the archive folder after cleaning, but not 
> all hfiles are compacted when there is a large number of them.
> This can happen when the default config is changed, for example a smaller 
> write buffer (hbase.client.write.buffer) or ExponentialClientBackoffPolicy.
> {code:title=TestSnapshotFromMaster.java|borderStyle=solid}
> // it should also check the hfiles in the normal path (/hbase/data/default/...)
> public void testSnapshotHFileArchiving() throws Exception {
>   //...
>   // get the archived files for the table
>   Collection<String> files = getArchivedHFiles(archiveDir, rootDir, fs, TABLE_NAME);
>   // and make sure that there is a proper subset
>   for (String fileName : snapshotHFiles) {
>     assertTrue("Archived hfiles " + files + " is missing snapshot file:" + fileName,
>       files.contains(fileName));
>   }
>   //...
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16235) TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too many hfiles

2016-07-16 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-16235:
--
Attachment: hbase-16235.master.v4.patch

To reduce the test time, this patch keeps the hfileCount of 20 and gets rid of 
the other two.
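
A hedged sketch of that change, assuming the test previously looped over several hfile counts; the dropped values and helper name are assumptions for illustration:

{code:title=Sketch|borderStyle=solid}
// Hedged sketch: keep a single hfile count to shorten the test run.
// The removed values are illustrative, not the actual test constants.
int[] hfileCounts = { 20 };           // was e.g. { 10, 20, 25 }
for (int hfileCount : hfileCounts) {
  runArchivingScenario(hfileCount);   // hypothetical helper
}
{code}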

> TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too 
> many hfiles
> 
>
> Key: HBASE-16235
> URL: https://issues.apache.org/jira/browse/HBASE-16235
> Project: HBase
>  Issue Type: Bug
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-16235-v1.patch, HBASE-16235-v2.patch, 
> HBASE-16235-v3.patch, hbase-16235.master.v4.patch
>
>
> TestSnapshotFromMaster#testSnapshotHFileArchiving assumes that all hfiles 
> will be compacted and moved to the archive folder after cleaning, but not 
> all hfiles are compacted when there is a large number of them.
> This can happen when the default config is changed, for example a smaller 
> write buffer (hbase.client.write.buffer) or ExponentialClientBackoffPolicy.
> {code:title=TestSnapshotFromMaster.java|borderStyle=solid}
> // it should also check the hfiles in the normal path (/hbase/data/default/...)
> public void testSnapshotHFileArchiving() throws Exception {
>   //...
>   // get the archived files for the table
>   Collection<String> files = getArchivedHFiles(archiveDir, rootDir, fs, TABLE_NAME);
>   // and make sure that there is a proper subset
>   for (String fileName : snapshotHFiles) {
>     assertTrue("Archived hfiles " + files + " is missing snapshot file:" + fileName,
>       files.contains(fileName));
>   }
>   //...
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16238) It's useless to catch SESSIONEXPIRED exception and retry in RecoverableZooKeeper

2016-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380656#comment-15380656
 ] 

Hadoop QA commented on HBASE-16238:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 1s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 57s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
7s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 47s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818342/HBASE-16238.patch |
| JIRA Issue | HBASE-16238 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 60847a2 |
| Default Java | 1.7.0_80 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 

[jira] [Commented] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore

2016-07-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380637#comment-15380637
 ] 

Anoop Sam John commented on HBASE-16205:


There is a problem. I am working on it.

> When Cells are not copied to MSLAB, deep clone it while adding to Memstore
> --
>
> Key: HBASE-16205
> URL: https://issues.apache.org/jira/browse/HBASE-16205
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-16205.patch, HBASE-16205_V2.patch
>
>
> This is important after the HBASE-15180 optimization. After that change, the 
> cells flowing through the write path are backed by the same byte[] that the 
> RPC read the request into. By default MSLAB is on, so there is a copy 
> operation while adding Cells to the memstore. This copy might not happen if
> 1. MSLAB is turned off
> 2. the Cell size is more than a configurable max size, which defaults to 256 KB
> 3. the operation is an Append/Increment.
> In such cases, we should just clone the Cell into a new byte[] and then add it 
> to the memstore. Otherwise we keep referring to the bigger byte[] chunk for a 
> longer time.
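
A minimal sketch of the cloning idea under the conditions above; mslabWillNotCopy is a hypothetical predicate standing in for the three cases, while KeyValueUtil.copyToNewKeyValue is an existing utility that deep-copies a Cell into a fresh byte[]:

{code:title=Sketch (illustrative only)|borderStyle=solid}
// Hedged sketch: if MSLAB will not copy the cell, deep-clone it so the
// memstore does not keep pinning the large RPC-backed byte[] chunk.
Cell toAdd = mslabWillNotCopy(cell)        // hypothetical: MSLAB off,
                                           // oversized cell, or Append/Increment
    ? KeyValueUtil.copyToNewKeyValue(cell) // deep copy into a new byte[]
    : cell;
memstore.add(toAdd);
{code}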



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16238) It's useless to catch SESSIONEXPIRED exception and retry in RecoverableZooKeeper

2016-07-16 Thread Allan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-16238:
---
Attachment: HBASE-16238.patch

> It's useless to catch SESSIONEXPIRED exception and retry in 
> RecoverableZooKeeper
> 
>
> Key: HBASE-16238
> URL: https://issues.apache.org/jira/browse/HBASE-16238
> Project: HBase
>  Issue Type: Bug
>  Components: Zookeeper
>Affects Versions: 1.1.5, 1.2.2, 0.98.20
>Reporter: Allan Yang
>Priority: Minor
> Fix For: 1.1.5, 1.2.2, 0.98.20
>
> Attachments: HBASE-16238.patch
>
>
> Since HBASE-5549, the SESSIONEXPIRED exception has been caught and retried 
> along with other zookeeper exceptions like ConnectionLoss. But it is useless 
> to retry when a session expiry happens, since the retry can never succeed. 
> There is a config called "zookeeper.recovery.retry" to control the number of 
> retries, and in our case we set it to a very big number like "9". When a 
> session expiry happens, the regionserver should kill itself, but because of 
> the retrying, the regionserver's threads get stuck trying to reconnect to 
> zookeeper and it never shuts down properly.
> {code}
> public Stat exists(String path, boolean watch)
>     throws KeeperException, InterruptedException {
>   TraceScope traceScope = null;
>   try {
>     traceScope = Trace.startSpan("RecoverableZookeeper.exists");
>     RetryCounter retryCounter = retryCounterFactory.create();
>     while (true) {
>       try {
>         return checkZk().exists(path, watch);
>       } catch (KeeperException e) {
>         switch (e.code()) {
>           case CONNECTIONLOSS:
>           case SESSIONEXPIRED: // we shouldn't catch this
>           case OPERATIONTIMEOUT:
>             retryOrThrow(retryCounter, e, "exists");
>             break;
>           default:
>             throw e;
>         }
>       }
>       retryCounter.sleepUntilNextRetry();
>     }
>   } finally {
>     if (traceScope != null) traceScope.close();
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16238) It's useless to catch SESSIONEXPIRED exception and retry in RecoverableZooKeeper

2016-07-16 Thread Allan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-16238:
---
Fix Version/s: 1.1.5
   1.2.2
   0.98.20
Affects Version/s: 1.1.5
   1.2.2
   0.98.20
   Status: Patch Available  (was: Open)

> It's useless to catch SESSIONEXPIRED exception and retry in 
> RecoverableZooKeeper
> 
>
> Key: HBASE-16238
> URL: https://issues.apache.org/jira/browse/HBASE-16238
> Project: HBase
>  Issue Type: Bug
>  Components: Zookeeper
>Affects Versions: 0.98.20, 1.2.2, 1.1.5
>Reporter: Allan Yang
>Priority: Minor
> Fix For: 0.98.20, 1.2.2, 1.1.5
>
>
> Since HBASE-5549, the SESSIONEXPIRED exception has been caught and retried 
> along with other zookeeper exceptions like ConnectionLoss. But it is useless 
> to retry when a session expiry happens, since the retry can never succeed. 
> There is a config called "zookeeper.recovery.retry" to control the number of 
> retries, and in our case we set it to a very big number like "9". When a 
> session expiry happens, the regionserver should kill itself, but because of 
> the retrying, the regionserver's threads get stuck trying to reconnect to 
> zookeeper and it never shuts down properly.
> {code}
> public Stat exists(String path, boolean watch)
>     throws KeeperException, InterruptedException {
>   TraceScope traceScope = null;
>   try {
>     traceScope = Trace.startSpan("RecoverableZookeeper.exists");
>     RetryCounter retryCounter = retryCounterFactory.create();
>     while (true) {
>       try {
>         return checkZk().exists(path, watch);
>       } catch (KeeperException e) {
>         switch (e.code()) {
>           case CONNECTIONLOSS:
>           case SESSIONEXPIRED: // we shouldn't catch this
>           case OPERATIONTIMEOUT:
>             retryOrThrow(retryCounter, e, "exists");
>             break;
>           default:
>             throw e;
>         }
>       }
>       retryCounter.sleepUntilNextRetry();
>     }
>   } finally {
>     if (traceScope != null) traceScope.close();
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16238) It's useless to catch SESSIONEXPIRED exception and retry in RecoverableZooKeeper

2016-07-16 Thread Allan Yang (JIRA)
Allan Yang created HBASE-16238:
--

 Summary: It's useless to catch SESSIONEXPIRED exception and retry 
in RecoverableZooKeeper
 Key: HBASE-16238
 URL: https://issues.apache.org/jira/browse/HBASE-16238
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Reporter: Allan Yang
Priority: Minor


Since HBASE-5549, the SESSIONEXPIRED exception has been caught and retried 
along with other zookeeper exceptions like ConnectionLoss. But it is useless to 
retry when a session expiry happens, since the retry can never succeed. There 
is a config called "zookeeper.recovery.retry" to control the number of retries, 
and in our case we set it to a very big number like "9". When a session 
expiry happens, the regionserver should kill itself, but because of the 
retrying, the regionserver's threads get stuck trying to reconnect to 
zookeeper and it never shuts down properly.

{code}
public Stat exists(String path, boolean watch)
    throws KeeperException, InterruptedException {
  TraceScope traceScope = null;
  try {
    traceScope = Trace.startSpan("RecoverableZookeeper.exists");
    RetryCounter retryCounter = retryCounterFactory.create();
    while (true) {
      try {
        return checkZk().exists(path, watch);
      } catch (KeeperException e) {
        switch (e.code()) {
          case CONNECTIONLOSS:
          case SESSIONEXPIRED: // we shouldn't catch this
          case OPERATIONTIMEOUT:
            retryOrThrow(retryCounter, e, "exists");
            break;
          default:
            throw e;
        }
      }
      retryCounter.sleepUntilNextRetry();
    }
  } finally {
    if (traceScope != null) traceScope.close();
  }
}
{code}
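
A hedged sketch of the direction the description implies, not the actual patch: stop retrying on SESSIONEXPIRED and let it propagate so the regionserver can abort:

{code:title=Sketch|borderStyle=solid}
// Hedged sketch: SESSIONEXPIRED falls through to the default branch
// and is rethrown immediately instead of being retried.
switch (e.code()) {
  case CONNECTIONLOSS:
  case OPERATIONTIMEOUT:
    retryOrThrow(retryCounter, e, "exists");
    break;
  default: // includes SESSIONEXPIRED, which a retry can never fix
    throw e;
}
{code}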



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14881) Provide a Put API that uses the provided row without copying

2016-07-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380627#comment-15380627
 ] 

Anoop Sam John commented on HBASE-14881:


Oh yes, we keep the row as a byte[] without storing any offset and length info 
in Mutation. Your argument makes sense.

> Provide a Put API that uses the provided row without copying
> ---
>
> Key: HBASE-14881
> URL: https://issues.apache.org/jira/browse/HBASE-14881
> Project: HBase
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Xiang Li
> Attachments: HBASE-14881-master-v0.patch
>
>
> The currently available Put API always makes a copy of the rowkey.
> Let's provide an API that accepts an immutable byte array as the rowkey 
> without making a copy.
> There are cases where the caller of Put has created the immutable byte array 
> (e.g. from a serializer) and will not change it for the duration of the Put, 
> so we can avoid making another copy.
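
A minimal sketch of the kind of API the description asks for, relying on the observation in the comment above that Mutation stores the row as a plain byte[]; the constructor signature and flag name are illustrative, not the committed API (java.util.Arrays assumed):

{code:title=Sketch (illustrative only)|borderStyle=solid}
// Hedged sketch: let the caller opt out of the defensive rowkey copy
// when it promises the array stays immutable for the Put's lifetime.
public Put(byte[] row, long ts, boolean copyRow) {
  this.row = copyRow ? Arrays.copyOf(row, row.length) : row;
  this.ts = ts;
}
{code}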



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-3727) MultiHFileOutputFormat

2016-07-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380592#comment-15380592
 ] 

Hudson commented on HBASE-3727:
---

SUCCESS: Integrated in HBase-Trunk_matrix #1237 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1237/])
HBASE-3727 MultiHFileOutputFormat (yi liang) (jerryjch: rev 
60847a2d76163ff40df94a980e1bd3f837ff9d71)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultiHFileOutputFormat.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiHFileOutputFormat.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.java


> MultiHFileOutputFormat
> --
>
> Key: HBASE-3727
> URL: https://issues.apache.org/jira/browse/HBASE-3727
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>Assignee: yi liang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-3727-V3.patch, HBASE-3727-V4.patch, 
> HBASE-3727-V5.patch, MH2.patch, MultiHFileOutputFormat.java, 
> MultiHFileOutputFormat.java, MultiHFileOutputFormat.java, 
> TestMultiHFileOutputFormat.java
>
>
> Like MultiTableOutputFormat, but outputting HFiles. The key is the table name 
> as an IBW (ImmutableBytesWritable). Creates sub-writers (code cut and pasted 
> from HFileOutputFormat) on demand that produce HFiles in per-table 
> subdirectories of the configured output path. Does not currently support 
> partitioning for existing tables / incremental update.
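
A hedged usage sketch matching that description; the job wiring below is an assumption for illustration, not documented setup for this class:

{code:title=Usage sketch|borderStyle=solid}
// Hedged sketch: the map/reduce output key is the table name as an
// ImmutableBytesWritable, so HFiles land in per-table subdirectories
// of the configured output path.
Job job = Job.getInstance(conf, "multi-table HFile output");
job.setOutputFormatClass(MultiHFileOutputFormat.class);
FileOutputFormat.setOutputPath(job, new Path("/bulkload/out")); // per-table subdirs created under here
// in a mapper/reducer:
//   context.write(new ImmutableBytesWritable(Bytes.toBytes("tableA")), kv);
{code}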



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16229) Cleaning up size and heapSize calculation

2016-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380563#comment-15380563
 ] 

Hadoop QA commented on HBASE-16229:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
2s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 57s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 44s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 57s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
30s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 146m 55s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.replication.TestMasterReplication |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12818300/HBASE-16229_V2.patch |
| JIRA Issue | HBASE-16229 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT 

[jira] [Commented] (HBASE-16205) When Cells are not copied to MSLAB, deep clone it while adding to Memstore

2016-07-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380534#comment-15380534
 ] 

Hadoop QA commented on HBASE-16205:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
59s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
40s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 0s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 43s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 36s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
30s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 144m 9s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestDefaultMemStore |
|   | hadoop.hbase.replication.TestMasterReplication |
|   | hadoop.hbase.regionserver.TestCompactingMemStore |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL |