[jira] [Commented] (HBASE-15885) Compute StoreFile HDFS Blocks Distribution when needed

2016-05-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299449#comment-15299449
 ] 

Hadoop QA commented on HBASE-15885:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 
56s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 
2.5.2 2.6.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 101m 35s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
34s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 126m 34s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806045/HBASE-15885.patch |
| JIRA Issue | HBASE-15885 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/test_framework/yetus-0.2.1/lib/precommit/personality/hbase.sh
 |
| git revision | master / 39dc192 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 |
| findbugs | v3.0.0 |
|  Test 

[jira] [Commented] (HBASE-14921) Memory optimizations

2016-05-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299412#comment-15299412
 ] 

Ted Yu commented on HBASE-14921:


Patch v3 no longer applies.

Mind rebasing the patch and putting it on Review Board?

Thanks

> Memory optimizations
> 
>
> Key: HBASE-14921
> URL: https://issues.apache.org/jira/browse/HBASE-14921
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Anastasia Braginsky
> Attachments: CellBlocksSegmentInMemStore.pdf, 
> CellBlocksSegmentinthecontextofMemStore(1).pdf, HBASE-14921-V01.patch, 
> HBASE-14921-V02.patch, HBASE-14921-V03.patch, 
> IntroductiontoNewFlatandCompactMemStore.pdf
>
>
> Memory optimizations including compressed format representation and offheap 
> allocations



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15885) Compute StoreFile HDFS Blocks Distribution when needed

2016-05-24 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299386#comment-15299386
 ] 

Guanghao Zhang commented on HBASE-15885:


One method that throws an exception is getReferencedFileStatus() in 
computeHDFSBlocksDistributionInternal. But it shares the same code as the 
open(...) method, so if computeHDFSBlocksDistribution throws an exception, the 
status-lookup code in open(...) will throw the same exception too.

The other method that throws an exception is 
FileSystem.getFileBlockLocations(...). Looking at the code in Hadoop 2.4, it 
only throws IllegalArgumentException("Invalid start or len parameter").
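The argument above can be illustrated with a minimal sketch. The names here are hypothetical stand-ins, not the actual HBase classes: the point is only that both code paths perform the same status lookup, so skipping the distribution computation does not swallow the failure; open(...) still surfaces it.

```java
import java.io.IOException;

class StatusSketch {
  static boolean fileMissing = true;

  // Stand-in for the shared status lookup used by both
  // computeHDFSBlocksDistribution and open() in the discussion above.
  static String getReferencedFileStatus() throws IOException {
    if (fileMissing) {
      throw new IOException("file not found");
    }
    return "status";
  }

  // open() performs the same lookup, so even if the distribution
  // computation is deferred, the IOException still reaches the caller.
  static String open() throws IOException {
    String status = getReferencedFileStatus();
    return "reader(" + status + ")";
  }
}
```

Under this assumption, deferring computeHDFSBlocksDistribution changes *when* the work happens, not whether a broken file is detected at open time.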

> Compute StoreFile HDFS Blocks Distribution when needed
> --
>
> Key: HBASE-15885
> URL: https://issues.apache.org/jira/browse/HBASE-15885
> Project: HBase
>  Issue Type: Improvement
>  Components: HFile
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
> Attachments: HBASE-15885.patch
>
>
> Now, opening a StoreFileReader always computes the HDFS blocks distribution, 
> but when balancing a region this increases the region's not-serving time. 
> Balancing must first close the region on region server A and then open it on 
> region server B. Closing the region first runs preFlush, then flushes the new 
> updates to a new store file. The new store file is first written to the tmp 
> directory and then moved to the column family directory; this opens a 
> StoreFileReader twice, which means the HDFS blocks distribution is computed 
> twice. Opening the region on region server B opens a StoreFileReader and 
> computes the HDFS blocks distribution again. So balancing a region computes 
> the HDFS blocks distribution three times per new store file. This increases 
> the region's not-serving time, and we don't need to compute the HDFS blocks 
> distribution when closing a region.
> The three related methods in HStore are:
> 1. validateStoreFile(...)
> 2. commitFile(...)
> 3. openStoreFiles(...)
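The deferral the issue proposes could be sketched as a lazily computed field. The types and names below are illustrative stand-ins, not the actual HBASE-15885 patch:

```java
// Hypothetical sketch: defer the expensive HDFS blocks-distribution
// computation until a caller actually asks for it, instead of doing it
// every time a reader is opened.
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

class LazyBlocksDistribution {
  private final Supplier<String> compute;  // stands in for computeHDFSBlocksDistribution(fs)
  private final AtomicReference<String> cached = new AtomicReference<>();
  static int computeCalls = 0;             // counts how often the expensive work runs

  LazyBlocksDistribution(Supplier<String> compute) {
    this.compute = compute;                // nothing is computed at open time
  }

  String get() {                           // computed on first use, then cached
    return cached.updateAndGet(v -> v != null ? v : compute.get());
  }

  static String expensiveCompute() {
    computeCalls++;
    return "distribution";
  }
}
```

With this shape, the two opens during flush/commit and the open on the destination region server would share at most one computation, and none at all if nothing ever asks for the distribution.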





[jira] [Commented] (HBASE-15885) Compute StoreFile HDFS Blocks Distribution when needed

2016-05-24 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299373#comment-15299373
 ] 

Guanghao Zhang commented on HBASE-15885:


Yes, it changes the behavior when an exception occurs in 
computeHDFSBlocksDistribution(fs). When I profiled the region close/open 
process, computeHDFSBlocksDistribution took about 10ms. Avoiding the HDFS block 
distribution calculation three times may save 3 * 10ms of region not-serving 
time.



[jira] [Commented] (HBASE-15885) Compute StoreFile HDFS Blocks Distribution when needed

2016-05-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299363#comment-15299363
 ] 

Ted Yu commented on HBASE-15885:


Have you measured the time saved by reducing the HDFS block distribution 
calculations?

Thanks



[jira] [Commented] (HBASE-15885) Compute StoreFile HDFS Blocks Distribution when needed

2016-05-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299356#comment-15299356
 ] 

Ted Yu commented on HBASE-15885:


Previously, if computeHDFSBlocksDistribution(fs) threw an IOException, 
StoreFileInfo#open() would bubble the exception up.
The patch changes this behavior, right?



[jira] [Updated] (HBASE-15885) Compute StoreFile HDFS Blocks Distribution when needed

2016-05-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15885:
---
Summary: Compute StoreFile HDFS Blocks Distribution when needed  (was: 
Compute StoreFile HDFS Blocks Distribution when need it)



[jira] [Updated] (HBASE-15885) Compute StoreFile HDFS Blocks Distribution when need it

2016-05-24 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-15885:
---
Status: Patch Available  (was: Open)



[jira] [Updated] (HBASE-15885) Compute StoreFile HDFS Blocks Distribution when need it

2016-05-24 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-15885:
---
Attachment: HBASE-15885.patch



[jira] [Comment Edited] (HBASE-15881) Allow BZIP2 compression

2016-05-24 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299306#comment-15299306
 ] 

Andrew Purtell edited comment on HBASE-15881 at 5/25/16 1:44 AM:
-

+1

I tried this once with LZMA, see HBASE-2987. We had trouble with LZMA 
specifically because it is a super slow algorithm, so HBASE-2988 was an attempt 
to mitigate that by only doing the final and expensive compression in major 
compaction. The major compaction was assumed to be the final archival step for 
the data. This was a use case where the left portion of the key was a time 
based component.

Bzip2 should compress significantly faster in comparison. Configuring more 
major compaction threads would mitigate throughput issues (at the expense of 
CPU), or the 2988 strategy could be employed instead.


was (Author: apurtell):
+1

I tried this once with LZMA, see HBASE-2987. We had trouble with LZMA 
specifically because it is a super slow algorithm, HBASE-2988 was an attempt to 
address it. Bzip2 should compress significantly faster in comparison. Even so 
2988 would mitigate. 

> Allow BZIP2 compression
> ---
>
> Key: HBASE-15881
> URL: https://issues.apache.org/jira/browse/HBASE-15881
> Project: HBase
>  Issue Type: New Feature
>  Components: HFile
>Reporter: Lars Hofhansl
> Attachments: 15881-0.98.txt
>
>
> BZIP2 is a very efficient compressor in terms of compression rate.
> Compression speed is very slow, de-compression is equivalent or faster than 
> GZIP.





[jira] [Commented] (HBASE-15881) Allow BZIP2 compression

2016-05-24 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299306#comment-15299306
 ] 

Andrew Purtell commented on HBASE-15881:


+1

I tried this once with LZMA, see HBASE-2987. We had trouble with LZMA 
specifically because it is a super slow algorithm, HBASE-2988 was an attempt to 
address it. Bzip2 should compress significantly faster in comparison. Even so 
2988 would mitigate. 



[jira] [Commented] (HBASE-15691) Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to branch-1

2016-05-24 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299284#comment-15299284
 ] 

Andrew Purtell commented on HBASE-15691:


This is not a release blocker IMHO. We already have 1.2 out without it.

> Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to 
> branch-1
> -
>
> Key: HBASE-15691
> URL: https://issues.apache.org/jira/browse/HBASE-15691
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.3.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 1.4.0, 1.2.2
>
> Attachments: HBASE-15691-branch-1.patch
>
>
> HBASE-10205 was committed to trunk and 0.98 branches only. To preserve 
> continuity we should commit it to branch-1. The change requires nontrivial 
> fixups, so I will attach a backport of the change from trunk to current 
> branch-1 here.





[jira] [Updated] (HBASE-15884) NPE in StoreFileScanner#skipKVsNewerThanReadpoint during reverse scan

2016-05-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15884:
---
Hadoop Flags: Reviewed

> NPE in StoreFileScanner#skipKVsNewerThanReadpoint during reverse scan
> -
>
> Key: HBASE-15884
> URL: https://issues.apache.org/jira/browse/HBASE-15884
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 2.0.0, 1.2.1
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: HBASE-15884-1.patch
>
>
> Here is a part of {{skipKVsNewerThanReadpoint}} method:
> {noformat}
>   hfs.next();
>   setCurrentCell(hfs.getKeyValue());
>   if (this.stopSkippingKVsIfNextRow
>   && getComparator().compareRows(cur.getRowArray(), 
> cur.getRowOffset(),
>   cur.getRowLength(), startKV.getRowArray(), 
> startKV.getRowOffset(),
>   startKV.getRowLength()) > 0) {
> {noformat}
> If hfs has no more KVs, cur will be set to null and the next step will 
> throw an NPE.
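A minimal sketch of the kind of guard such a fix needs (illustrative stand-ins, not the actual HBASE-15884 patch): stop skipping as soon as the underlying scanner is exhausted instead of dereferencing a null cell.

```java
// Hypothetical simplified scanner: next() returns null when exhausted,
// just as getKeyValue() does after the last KeyValue.
class SkipSketch {
  private final String[] cells;
  private int pos = 0;
  SkipSketch(String... cells) { this.cells = cells; }
  String next() { return pos < cells.length ? cells[pos++] : null; }

  // Toy stand-in for a cell's memstore sequence id.
  static int seqId(String cell) { return cell.length(); }

  // Skip cells newer than readPoint. The "cur != null" guard is the
  // essence of the fix: without it, an exhausted scanner yields an NPE.
  static String skipNewerThan(SkipSketch hfs, int readPoint) {
    String cur = hfs.next();
    while (cur != null && seqId(cur) > readPoint) {
      cur = hfs.next();
    }
    return cur; // null simply means "no more cells", not a crash
  }
}
```

The guard matters precisely in the flush-during-reverse-scan window described below, where the store file scanner can legitimately run out of cells mid-skip.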





[jira] [Updated] (HBASE-15884) NPE in StoreFileScanner#skipKVsNewerThanReadpoint during reverse scan

2016-05-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15884:
---
Summary: NPE in StoreFileScanner#skipKVsNewerThanReadpoint during reverse 
scan  (was: NPE in StoreFileScanner during reverse scan)



[jira] [Commented] (HBASE-15835) HBaseTestingUtility#startMiniCluster throws "HMasterAddress already in use" RuntimeException when a local instance of HBase is running

2016-05-24 Thread Daniel Vimont (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299259#comment-15299259
 ] 

Daniel Vimont commented on HBASE-15835:
---

Okay, the above-listed QA failure prompted me to read in more detail those 
emails regarding the problem of "flaky|flakey" tests. I see that 
TestRegionServerMetrics is indeed on the list, and also that testing of 
HBASE-15876 ran into exactly the same (apparently flaky) failure.

In this case, an awkward aspect is that the changes I made could conceivably 
have caused this kind of failure! So, just for good measure, I did a fresh 
clone/install of the master branch and ran the same test: yep, it failed in 
exactly the same way.

> HBaseTestingUtility#startMiniCluster throws "HMasterAddress already in use" 
> RuntimeException when a local instance of HBase is running
> --
>
> Key: HBASE-15835
> URL: https://issues.apache.org/jira/browse/HBASE-15835
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>  Labels: easyfix
> Fix For: 2.0.0
>
> Attachments: HBASE-15835-v1.patch, HBASE-15835-v2.patch, 
> HBASE-15835-v3.patch
>
>
> When a MiniCluster is being started with the 
> {{HBaseTestUtility#startMiniCluster}} method (most typically in the context 
> of JUnit testing), if a local HBase instance is already running (or for that 
> matter, another thread with another MiniCluster is already running), the 
> startup will fail with a RuntimeException saying "HMasterAddress already in 
> use", referring explicitly to contention for the same default master info 
> port (16010).
> This problem most recently came up in conjunction with HBASE-14876 and its 
> sub-JIRAs (development of new HBase-oriented Maven archetypes), but this is 
> apparently a known issue to veteran developers, who tend to set up the 
> @BeforeClass sections of their test modules with code similar to the 
> following:
> {code}
> UTIL = HBaseTestingUtility.createLocalHTU();
> // disable UI's on test cluster.
> UTIL.getConfiguration().setInt("hbase.master.info.port", -1);
> UTIL.getConfiguration().setInt("hbase.regionserver.info.port", -1);
> UTIL.startMiniCluster();
> {code}
> A comprehensive solution modeled on this should be put directly into 
> HBaseTestUtility's main constructor, using one of the following options:
> OPTION 1 (always force random port assignment):
> {code}
> this.getConfiguration().setInt(HConstants.MASTER_INFO_PORT, -1);
> this.getConfiguration().setInt(HConstants.REGIONSERVER_PORT, -1);
> {code}
> OPTION 2 (always force random port assignment if user has not explicitly 
> defined alternate port):
> {code}
> Configuration conf = this.getConfiguration();
> if (conf.getInt(HConstants.MASTER_INFO_PORT, 
> HConstants.DEFAULT_MASTER_INFOPORT)
> == HConstants.DEFAULT_MASTER_INFOPORT) {
>   conf.setInt(HConstants.MASTER_INFO_PORT, -1);
> }
> if (conf.getInt(HConstants.REGIONSERVER_PORT, 
> HConstants.DEFAULT_REGIONSERVER_PORT)
> == HConstants.DEFAULT_REGIONSERVER_PORT) {
>   conf.setInt(HConstants.REGIONSERVER_PORT, -1);
> }
> {code}





[jira] [Commented] (HBASE-15884) NPE in StoreFileScanner during reverse scan

2016-05-24 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299124#comment-15299124
 ] 

Sergey Soldatov commented on HBASE-15884:
-

Hardly; it happens occasionally when there is an active scan and a flush 
occurs. Even in a stress test (2 threads continuously generating large puts, a 
small memstore, and 2 reverse scans running full scans in a loop) it fails 
unpredictably after a while. The fix is obvious and doesn't affect the logic.



[jira] [Updated] (HBASE-15887) Report Log Additions and Removals in Builds

2016-05-24 Thread Clay B. (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Clay B. updated HBASE-15887:

Attachment: HBASE-15887-v1.txt

This patch provides an {{hbaselogs}} test which adds the following vote table 
entries (it will never vote -1, only 0 or +1):
{code}
|  +1  |  hbaselogs  |  0m 0s | Patch changed +2/-0 debug log entries 
|   0  |  hbaselogs  |  0m 0s | Patch changed 0 error log entries 
|   0  |  hbaselogs  |  0m 0s | Patch changed 0 fatal log entries 
|  +1  |  hbaselogs  |  0m 0s | Patch changed +2/-2 info log entries 
|   0  |  hbaselogs  |  0m 0s | Patch changed 0 trace log entries 
|   0  |  hbaselogs  |  0m 0s | Patch changed 0 warn log entries 
{code}

And in the general output the following is provided:
{code}


 Checking for changed log entries




Patch changed  debug  info  log entries
{code}

This was tested via*:
{{${YETUS_HOME}/precommit/test-patch.sh  --plugins=all,hbaselogs,-hadoopcheck 
--personality=dev-support/hbase-personality.sh HBASE-15391}}

* I needed to change line 353:
{code}
verify_needed_test hbaselogs
if [[ $? == 0 ]]; then
{code}
To:
{{  if [[ $? == 1 ]]; then}}
In order for the {{hbaselogs}} check to run. I suspect I am missing how to tell 
Yetus to run things properly, but this inequality, which seemingly skips a 
test that is verified as needed, is how the other tests are implemented.

Lastly, I do not print out the log lines as I am doing a rather crude {{grep}} 
for {{LOG.}} entries right now which look pretty gnarly. However, I 
would like to see this go to Yetus and use something like [Eclipse's 
AST|http://www.eclipse.org/articles/article.php?file=Article-JavaCodeManipulation_AST/index.html]
 support to properly find log entry parameters and calls.
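The crude counting described above could look roughly like the following sketch. This is illustrative only, assuming a unified diff as input; it is not the actual hbaselogs plugin or Yetus code, and the pattern and class names are made up:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class LogDiffCount {
  // Assumed pattern for log calls; the real plugin's matching may differ.
  private static final Pattern LOG_CALL =
      Pattern.compile("LOG\\.(trace|debug|info|warn|error|fatal)\\(");

  // Count added (+) and removed (-) LOG.<level>( lines per level in a
  // unified diff; result maps level -> {added, removed}.
  static Map<String, int[]> count(String diff) {
    Map<String, int[]> counts = new LinkedHashMap<>();
    for (String line : diff.split("\n")) {
      if (line.startsWith("+++") || line.startsWith("---")) continue; // file headers
      boolean added = line.startsWith("+");
      boolean removed = line.startsWith("-");
      if (!added && !removed) continue;   // context lines are unchanged code
      Matcher m = LOG_CALL.matcher(line);
      if (m.find()) {
        int[] c = counts.computeIfAbsent(m.group(1), k -> new int[2]);
        c[added ? 0 : 1]++;
      }
    }
    return counts;
  }
}
```

A grep-style count like this can report "+2/-0 debug" style deltas but cannot see multi-line log statements or renamed loggers, which is why AST-based matching would be the more robust route.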

> Report Log Additions and Removals in Builds
> ---
>
> Key: HBASE-15887
> URL: https://issues.apache.org/jira/browse/HBASE-15887
> Project: HBase
>  Issue Type: New Feature
>  Components: build
>Reporter: Clay B.
>Priority: Trivial
> Attachments: HBASE-15887-v1.txt
>
>
> It would be very nice for the Apache Yetus verifications of HBase patches to 
> report log item additions and deletions.
> This is not my idea but Matthew Byng-Maddick asked if we could modify the 
> personality for reporting log additions and removals yesterday at an [HBase 
> meetup at Splice 
> machine|http://www.meetup.com/hbaseusergroup/events/230547750/] as Allen 
> Wittenauer presented Apache Yetus for building HBase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-15887) Report Log Additions and Removals in Builds

2016-05-24 Thread Clay B. (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15299121#comment-15299121
 ] 

Clay B. edited comment on HBASE-15887 at 5/24/16 11:08 PM:
---

This patch provides an {{hbaselogs}} test which adds the following rows to the 
vote table (it will never vote -1, only 0 or +1):
{code}
|  +1  |  hbaselogs  |  0m 0s | Patch changed +2/-0 debug log entries 
|   0  |  hbaselogs  |  0m 0s | Patch changed 0 error log entries 
|   0  |  hbaselogs  |  0m 0s | Patch changed 0 fatal log entries 
|  +1  |  hbaselogs  |  0m 0s | Patch changed +2/-2 info log entries 
|   0  |  hbaselogs  |  0m 0s | Patch changed 0 trace log entries 
|   0  |  hbaselogs  |  0m 0s | Patch changed 0 warn log entries 
{code}

And in the general output the following is provided:
{code}


 Checking for changed log entries




Patch changed  debug  info  log entries
{code}

This was tested via*:
{code}
${YETUS_HOME}/precommit/test-patch.sh  --plugins=all,hbaselogs,-hadoopcheck 
--personality=dev-support/hbase-personality.sh HBASE-15391
{code}

* I needed to change line 353:
{code}
verify_needed_test hbaselogs
if [[ $? == 0 ]]; then
{code}
To:
{{if [[ $? == 1 ]]; then}}
in order for the {{hbaselogs}} check to run. I suspect I am missing how to tell 
Yetus to run things properly, but this inequality (which seemingly skips a test 
when it is verified as needed) is how the other tests are implemented.

Lastly, I do not print out the log lines themselves, as I am doing a rather 
crude {{grep}} for {{LOG.}} entries right now and the results look pretty gnarly. However, I 
would like to see this go to Yetus and use something like [Eclipse's 
AST|http://www.eclipse.org/articles/article.php?file=Article-JavaCodeManipulation_AST/index.html]
 support to properly find log entry parameters and calls.


was (Author: clayb):
This patch provides an {{hbaselogs}} test which adds the following rows to the 
vote table (it will never vote -1, only 0 or +1):
{code}
|  +1  |  hbaselogs  |  0m 0s | Patch changed +2/-0 debug log entries 
|   0  |  hbaselogs  |  0m 0s | Patch changed 0 error log entries 
|   0  |  hbaselogs  |  0m 0s | Patch changed 0 fatal log entries 
|  +1  |  hbaselogs  |  0m 0s | Patch changed +2/-2 info log entries 
|   0  |  hbaselogs  |  0m 0s | Patch changed 0 trace log entries 
|   0  |  hbaselogs  |  0m 0s | Patch changed 0 warn log entries 
{code}

And in the general output the following is provided:
{code}


 Checking for changed log entries




Patch changed  debug  info  log entries
{code}

This was tested via*:
{{${YETUS_HOME}/precommit/test-patch.sh  --plugins=all,hbaselogs,-hadoopcheck 
--personality=dev-support/hbase-personality.sh HBASE-15391}}

* I needed to change line 353:
{code}
verify_needed_test hbaselogs
if [[ $? == 0 ]]; then
{code}
To:
{{  if [[ $? == 1 ]]; then}}
in order for the {{hbaselogs}} check to run. I suspect I am missing how to tell 
Yetus to run things properly, but this inequality (which seemingly skips a test 
when it is verified as needed) is how the other tests are implemented.

Lastly, I do not print out the log lines themselves, as I am doing a rather 
crude {{grep}} for {{LOG.}} entries right now and the results look pretty gnarly. However, I 
would like to see this go to Yetus and use something like [Eclipse's 
AST|http://www.eclipse.org/articles/article.php?file=Article-JavaCodeManipulation_AST/index.html]
 support to properly find log entry parameters and calls.

> Report Log Additions and Removals in Builds
> ---
>
> Key: HBASE-15887
> URL: https://issues.apache.org/jira/browse/HBASE-15887
> Project: HBase
>  Issue Type: New Feature
>  Components: build
>Reporter: Clay B.
>Priority: Trivial
> Attachments: HBASE-15887-v1.txt
>
>
> It would be very nice for the Apache Yetus verifications of HBase patches to 
> report log item additions and deletions.
> This is not my idea but Matthew Byng-Maddick asked if we could modify the 
> personality for reporting log additions and removals yesterday at an [HBase 
> meetup at Splice 
> 

[jira] [Created] (HBASE-15887) Report Log Additions and Removals in Builds

2016-05-24 Thread Clay B. (JIRA)
Clay B. created HBASE-15887:
---

 Summary: Report Log Additions and Removals in Builds
 Key: HBASE-15887
 URL: https://issues.apache.org/jira/browse/HBASE-15887
 Project: HBase
  Issue Type: New Feature
  Components: build
Reporter: Clay B.
Priority: Trivial


It would be very nice for the Apache Yetus verifications of HBase patches to 
report log item additions and deletions.

This is not my idea but Matthew Byng-Maddick asked if we could modify the 
personality for reporting log additions and removals yesterday at an [HBase 
meetup at Splice 
machine|http://www.meetup.com/hbaseusergroup/events/230547750/] as Allen 
Wittenauer presented Apache Yetus for building HBase.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15884) NPE in StoreFileScanner during reverse scan

2016-05-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15298999#comment-15298999
 ] 

Ted Yu commented on HBASE-15884:


Is it possible to add a test so that we prevent regression in the future?

> NPE in StoreFileScanner during reverse scan
> ---
>
> Key: HBASE-15884
> URL: https://issues.apache.org/jira/browse/HBASE-15884
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 2.0.0, 1.2.1
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: HBASE-15884-1.patch
>
>
> Here is a part of {{skipKVsNewerThanReadpoint}} method:
> {noformat}
>   hfs.next();
>   setCurrentCell(hfs.getKeyValue());
>   if (this.stopSkippingKVsIfNextRow
>   && getComparator().compareRows(cur.getRowArray(), 
> cur.getRowOffset(),
>   cur.getRowLength(), startKV.getRowArray(), 
> startKV.getRowOffset(),
>   startKV.getRowLength()) > 0) {
> {noformat}
> If hfs has no more KVs, cur will be set to null, and the next step will 
> throw an NPE.
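The guard the fix needs can be illustrated with a self-contained toy ({{ToyScanner}} and {{skipPastRow}} are invented stand-ins, not HBase's actual StoreFileScanner API): check the current cell for null before doing the row comparison.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

public class NullSafeSkip {
  /** Toy scanner: next() returns null once exhausted, mimicking
   *  hfs.getKeyValue() after the last cell has been consumed. */
  static class ToyScanner {
    private final Deque<String> cells;
    ToyScanner(String... c) {
      cells = new ArrayDeque<>(Arrays.asList(c));
    }
    String next() {
      return cells.poll();  // null once exhausted
    }
  }

  /** Returns the first cell sorting after startRow, or null if the scanner
   *  runs out; checking for null before comparing avoids the NPE. */
  public static String skipPastRow(ToyScanner s, String startRow) {
    String cur;
    while ((cur = s.next()) != null) {  // guard before the row comparison
      if (cur.compareTo(startRow) > 0) {
        return cur;
      }
    }
    return null;  // exhausted without finding a later row
  }
}
```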



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15886) Master shutsdown after ~3 min when using Native libraries

2016-05-24 Thread Justin Paschall (JIRA)
Justin Paschall created HBASE-15886:
---

 Summary: Master shutsdown after ~3 min when using Native libraries
 Key: HBASE-15886
 URL: https://issues.apache.org/jira/browse/HBASE-15886
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 2.0.0
 Environment: Linux (Fedora).  Hbase stand alone mode, as well as 
pseudo-distributed
Reporter: Justin Paschall
 Fix For: 2.0.0


HBase master appears to shut down after ~3 minutes when I set env variables to 
include the paths to native libraries. For the first 3 minutes operations 
work correctly; I can create and write to tables. This only occurs when I set 
HADOOP_HOME and LD_LIBRARY_PATH, which I do to enable SNAPPY compression. I do 
not observe this problem when I run without setting those native library paths.

Notably, the system runs properly for the first ~3 minutes and I can create and 
write to tables, until a crash invariably occurs.

I'm using master from GitHub as of commit 084b036.
Running in stand-alone non-distributed mode, but I also see this in 
pseudo-distributed mode.

In the hbase-master log I am seeing:
2016-05-24 13:09:01,550 WARN  [ProcedureExecutor-1] balancer.BaseLoadBalancer: 
Wanted to do round robin assignment but no servers to assign to
2016-05-24 13:09:01,550 INFO  [ProcedureExecutor-1] 
procedure.ServerCrashProcedure: Caught java.io.IOException: Unable to determine 
a plan to assign region(s) during region assignment, will retry






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15885) Compute StoreFile HDFS Blocks Distribution when need it

2016-05-24 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15298259#comment-15298259
 ] 

Guanghao Zhang commented on HBASE-15885:


We have a patch for our 0.98 branch. I will attach it tomorrow.

> Compute StoreFile HDFS Blocks Distribution when need it
> ---
>
> Key: HBASE-15885
> URL: https://issues.apache.org/jira/browse/HBASE-15885
> Project: HBase
>  Issue Type: Improvement
>  Components: HFile
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>
> Now, when opening a StoreFileReader, we always need to compute the HDFS 
> blocks distribution. But when balancing a region, this increases the region's 
> not-serving time, because the region must first be closed on RS A and then 
> opened on RS B. When closing the region, it first preFlushes, then flushes 
> the new updates to a new store file. The new store file is first flushed to 
> the tmp directory and then moved to the column family directory. This opens a 
> StoreFileReader twice, which means the HDFS blocks distribution is computed 
> twice. When opening the region on RS B, it opens a StoreFileReader and 
> computes the HDFS blocks distribution again. So balancing a region computes 
> the HDFS blocks distribution three times per new store file. This increases 
> the region's not-serving time, and we don't need to compute the HDFS blocks 
> distribution when closing a region.
> The three related methods in HStore are:
> 1. validateStoreFile(...)
> 2. commitFile(...)
> 3. openStoreFiles(...)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15885) Compute StoreFile HDFS Blocks Distribution when need it

2016-05-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15298227#comment-15298227
 ] 

Ted Yu commented on HBASE-15885:


Interesting.

Do you want to present a patch ?

> Compute StoreFile HDFS Blocks Distribution when need it
> ---
>
> Key: HBASE-15885
> URL: https://issues.apache.org/jira/browse/HBASE-15885
> Project: HBase
>  Issue Type: Improvement
>  Components: HFile
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>
> Now, when opening a StoreFileReader, we always need to compute the HDFS 
> blocks distribution. But when balancing a region, this increases the region's 
> not-serving time, because the region must first be closed on RS A and then 
> opened on RS B. When closing the region, it first preFlushes, then flushes 
> the new updates to a new store file. The new store file is first flushed to 
> the tmp directory and then moved to the column family directory. This opens a 
> StoreFileReader twice, which means the HDFS blocks distribution is computed 
> twice. When opening the region on RS B, it opens a StoreFileReader and 
> computes the HDFS blocks distribution again. So balancing a region computes 
> the HDFS blocks distribution three times per new store file. This increases 
> the region's not-serving time, and we don't need to compute the HDFS blocks 
> distribution when closing a region.
> The three related methods in HStore are:
> 1. validateStoreFile(...)
> 2. commitFile(...)
> 3. openStoreFiles(...)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15835) HBaseTestingUtility#startMiniCluster throws "HMasterAddress already in use" RuntimeException when a local instance of HBase is running

2016-05-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15298143#comment-15298143
 ] 

Hadoop QA commented on HBASE-15835:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
28s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 8m 
27s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.1 
2.5.2 2.6.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 32s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 130m 58s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestRegionServerMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805851/HBASE-15835-v3.patch |
| JIRA Issue | HBASE-15835 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 

[jira] [Commented] (HBASE-14623) Implement dedicated WAL for system tables

2016-05-24 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15298159#comment-15298159
 ] 

Matteo Bertozzi commented on HBASE-14623:
-

I think the quota manager uses its own quota table for throttling, and the 
namespace quota is stored in ZK.
But in general, waiting for a table to be available has little point: even once 
the master is up and running, the RS hosting namespace, quota, meta or other 
tables may go down, so we have to deal with the table not being online anyway. 
The Table API already does that with some retries, so there's really no point 
in having a hard check at the beginning that crashes the master.

> Implement dedicated WAL for system tables
> -
>
> Key: HBASE-14623
> URL: https://issues.apache.org/jira/browse/HBASE-14623
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: wal
> Fix For: 2.0.0
>
> Attachments: 14623-v1.txt, 14623-v2.txt, 14623-v2.txt, 14623-v2.txt, 
> 14623-v2.txt, 14623-v3.txt, 14623-v4.txt
>
>
> As Stephen suggested in parent JIRA, dedicating separate WAL for system 
> tables (other than hbase:meta) should be done in new JIRA.
> This task is to fulfill the system WAL separation.
> Below is summary of discussion:
> With its own WAL, a system table would recover faster 
> (fast log split, fast log replay). It would probably benefit the 
> AssignmentManager on system table region assignment. At this time, the new 
> AssignmentManager is not planned to change the WAL. So the existence of this 
> JIRA is good for the overall system, not specific to the AssignmentManager.
> There are 3 strategies for implementing system table WAL:
> 1. one WAL for all non-meta system tables
> 2. one WAL for each non-meta system table
> 3. one WAL for each region of non-meta system table
> Currently most system tables are one-region tables (only the ACL table may 
> become big), so choices 2 and 3 are basically the same.
> From an implementation point of view, choices 2 and 3 are cleaner than 
> choice 1 (we already have one WAL for the META table and can reuse the 
> logic). With choice 2 or 3, assignment manager performance should not be 
> impacted, and it would be easier for the assignment manager to assign system 
> table regions (e.g. without waiting for user table log splits to complete).
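Choice 2 above can be sketched generically as a provider that lazily creates one WAL handle per non-meta system table ({{PerTableWalProvider}} and the string handles are invented for illustration, not HBase's actual WALProvider API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative provider: one log handle per non-meta system table,
 *  created lazily on first use and reused thereafter. */
public class PerTableWalProvider {
  private final Map<String, String> wals = new ConcurrentHashMap<>();

  /** Returns the WAL handle for a table, creating it on first access.
   *  computeIfAbsent guarantees at most one handle per table name. */
  public String walFor(String table) {
    return wals.computeIfAbsent(table, t -> "wal-" + t);
  }
}
```

Recovering one table then means splitting and replaying only that table's own (small) log, independently of user table WALs.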



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-15806) An endpoint-based export tool

2016-05-24 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15298135#comment-15298135
 ] 

ChiaPing Tsai edited comment on HBASE-15806 at 5/24/16 1:00 PM:


Dear [~yuzhih...@gmail.com], [~mbertozzi], and [~devaraj]

I will address the security issue

Thanks for your valuable comments



was (Author: chia7712):
Dear [~yuzhih...@gmail.com]], [~mbertozzi], and [~devaraj]

I will address the security issue

Thanks for your valuable comments


> An endpoint-based export tool
> -
>
> Key: HBASE-15806
> URL: https://issues.apache.org/jira/browse/HBASE-15806
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: Experiment.png, HBASE-15806.patch
>
>
> The time for exporting a table can be reduced if we use the endpoint 
> technique to have the region server, rather than the HBase client, export 
> the HDFS files.
> In my experiments, the elapsed time of the endpoint-based export can be less 
> than half that of the current export tool (with HDFS compression enabled).
> But the shortcoming is that we need to alter the table to deploy the endpoint.
> Any comments about this? Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15806) An endpoint-based export tool

2016-05-24 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15298135#comment-15298135
 ] 

ChiaPing Tsai commented on HBASE-15806:
---

Dear [~yuzhih...@gmail.com], [~mbertozzi], and [~devaraj]

I will address the security issue

Thanks for your valuable comments


> An endpoint-based export tool
> -
>
> Key: HBASE-15806
> URL: https://issues.apache.org/jira/browse/HBASE-15806
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: Experiment.png, HBASE-15806.patch
>
>
> The time for exporting a table can be reduced if we use the endpoint 
> technique to have the region server, rather than the HBase client, export 
> the HDFS files.
> In my experiments, the elapsed time of the endpoint-based export can be less 
> than half that of the current export tool (with HDFS compression enabled).
> But the shortcoming is that we need to alter the table to deploy the endpoint.
> Any comments about this? Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15885) Compute StoreFile HDFS Blocks Distribution when need it

2016-05-24 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-15885:
--

 Summary: Compute StoreFile HDFS Blocks Distribution when need it
 Key: HBASE-15885
 URL: https://issues.apache.org/jira/browse/HBASE-15885
 Project: HBase
  Issue Type: Improvement
  Components: HFile
Affects Versions: 2.0.0
Reporter: Guanghao Zhang


Now, when opening a StoreFileReader, we always need to compute the HDFS blocks 
distribution. But when balancing a region, this increases the region's 
not-serving time, because the region must first be closed on RS A and then 
opened on RS B. When closing the region, it first preFlushes, then flushes the 
new updates to a new store file. The new store file is first flushed to the tmp 
directory and then moved to the column family directory. This opens a 
StoreFileReader twice, which means the HDFS blocks distribution is computed 
twice. When opening the region on RS B, it opens a StoreFileReader and computes 
the HDFS blocks distribution again. So balancing a region computes the HDFS 
blocks distribution three times per new store file. This increases the region's 
not-serving time, and we don't need to compute the HDFS blocks distribution 
when closing a region.

The three related methods in HStore are:
1. validateStoreFile(...)
2. commitFile(...)
3. openStoreFiles(...)
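A minimal sketch of the proposed deferral, using a generic memoizing holder rather than HBase's actual StoreFile/HDFSBlocksDistribution code (all names here are illustrative): the distribution is computed only on first use, so the close/flush/commit path pays nothing.

```java
import java.util.function.Supplier;

/** Lazily computes a value on first get() and caches it, so opening a
 *  reader no longer pays the computation cost unless the value is needed. */
public class LazyBlocksDistribution<T> {
  private final Supplier<T> compute;
  private volatile T cached;

  public LazyBlocksDistribution(Supplier<T> compute) {
    this.compute = compute;
  }

  public T get() {
    T local = cached;
    if (local == null) {
      synchronized (this) {  // double-checked locking on a volatile field
        local = cached;
        if (local == null) {
          cached = local = compute.get();  // computed at most once
        }
      }
    }
    return local;
  }
}
```

With this shape, validateStoreFile/commitFile/openStoreFiles could each construct the holder cheaply, and only callers that actually need locality (e.g. the balancer) would trigger the computation.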



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15884) NPE in StoreFileScanner during reverse scan

2016-05-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15298068#comment-15298068
 ] 

Hadoop QA commented on HBASE-15884:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
33s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
58s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 33s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.1 2.5.2 2.6.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 103m 12s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 131m 7s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805844/HBASE-15884-1.patch |
| JIRA Issue | HBASE-15884 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/test_framework/yetus-0.2.1/lib/precommit/personality/hbase.sh
 |
| git revision | master / 39dc192 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 |
| findbugs | v3.0.0 |
|  Test 

[jira] [Commented] (HBASE-15806) An endpoint-based export tool

2016-05-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15298066#comment-15298066
 ] 

Hudson commented on HBASE-15806:


SUCCESS: Integrated in HBase-Trunk_matrix #945 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/945/])
HBASE-15806 revert due to discussion on security (tedyu: rev 
39dc19236ecc5d7970cc2f43bbad6725826d778f)
* 
hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ExportProtos.java
* hbase-protocol/pom.xml
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/Export.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/ExportEndpoint.java
* hbase-protocol/src/main/protobuf/Export.proto


> An endpoint-based export tool
> -
>
> Key: HBASE-15806
> URL: https://issues.apache.org/jira/browse/HBASE-15806
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: Experiment.png, HBASE-15806.patch
>
>
> The time for exporting a table can be reduced if we use the endpoint 
> technique to have the region server, rather than the HBase client, export 
> the HDFS files.
> In my experiments, the elapsed time of the endpoint-based export can be less 
> than half that of the current export tool (with HDFS compression enabled).
> But the shortcoming is that we need to alter the table to deploy the endpoint.
> Any comments about this? Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15835) HBaseTestingUtility#startMiniCluster throws "HMasterAddress already in use" RuntimeException when a local instance of HBase is running

2016-05-24 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-15835:
--
Status: Patch Available  (was: Open)

Submitting a revised patch which includes all of the following...

Subtask 1: Remove instances of setting the ports to -1 in existing tests.
The following modules were modified to remove their (now apparently extraneous) 
setting of master-info-port and region-server-port:
{code}
hbase/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotCloneIndependence.java
hbase/hbase-server/src/test/java/org/apache/hadoop/hbase/mob/TestMobDataBlockEncoding.java
hbase/hbase-server/src/test/java/org/apache/hadoop/hbase/mob/TestExpiredMobFileCleaner.java
hbase/hbase-server/src/test/java/org/apache/hadoop/hbase/mob/mapreduce/TestMobSweepJob.java
hbase/hbase-server/src/test/java/org/apache/hadoop/hbase/mob/mapreduce/TestMobSweepReducer.java
hbase/hbase-server/src/test/java/org/apache/hadoop/hbase/mob/mapreduce/TestMobSweepMapper.java
hbase/hbase-server/src/test/java/org/apache/hadoop/hbase/mob/compactions/TestMobCompactor.java
hbase/hbase-server/src/test/java/org/apache/hadoop/hbase/mob/compactions/TestPartitionedMobCompactor.java
hbase/hbase-server/src/test/java/org/apache/hadoop/hbase/mob/TestDefaultMobStoreFlusher.java
hbase/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMobStoreScanner.java
hbase/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMobStoreCompaction.java
hbase/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDeleteMobTable.java
hbase/hbase-shell/src/test/rsgroup/org/apache/hadoop/hbase/client/rsgroup/TestShellRSGroups.java
hbase/hbase-shell/src/test/java/org/apache/hadoop/hbase/client/AbstractTestShell.java
{code}

Subtask 2: Add some class-level javadoc.
The following was added to the HBaseTestingUtility class-level javadoc comment:
{code}
* In the configuration properties, default values for master-info-port and
* region-server-port are overridden such that a random port will be assigned
* (thus avoiding port contention if another local HBase instance is already running).
{code}

Subtask 3: Add a debug-level logging message for when port values are 
overridden to "-1".
The following code now appears at the end of the main constructor for 
HBaseTestingUtility. Note the "debug" logging that has been added:
{code}
// prevent contention for ports if other hbase thread(s) already running
if (conf != null) {
  if (conf.getInt(HConstants.MASTER_INFO_PORT, HConstants.DEFAULT_MASTER_INFOPORT)
      == HConstants.DEFAULT_MASTER_INFOPORT) {
    conf.setInt(HConstants.MASTER_INFO_PORT, -1);
    LOG.debug("Config property " + HConstants.MASTER_INFO_PORT + " changed to -1");
  }
  if (conf.getInt(HConstants.REGIONSERVER_PORT, HConstants.DEFAULT_REGIONSERVER_PORT)
      == HConstants.DEFAULT_REGIONSERVER_PORT) {
    conf.setInt(HConstants.REGIONSERVER_PORT, -1);
    LOG.debug("Config property " + HConstants.REGIONSERVER_PORT + " changed to -1");
  }
}
{code}

Subtask 4: Add new method to TestHBaseTestingUtility for testing port overrides.
The following new method verifies that the port override takes place when it
should, and does NOT take place when it should not:
{code}
  @Test
  public void testOverridingOfDefaultPorts() {

    // confirm that default port properties are being overridden to "-1"
    Configuration defaultConfig = HBaseConfiguration.create();
    defaultConfig.setInt(HConstants.MASTER_INFO_PORT, HConstants.DEFAULT_MASTER_INFOPORT);
    defaultConfig.setInt(HConstants.REGIONSERVER_PORT, HConstants.DEFAULT_REGIONSERVER_PORT);
    HBaseTestingUtility htu = new HBaseTestingUtility(defaultConfig);
    assertEquals(-1, htu.getConfiguration().getInt(HConstants.MASTER_INFO_PORT, 0));
    assertEquals(-1, htu.getConfiguration().getInt(HConstants.REGIONSERVER_PORT, 0));

    // confirm that nonDefault (custom) port settings are NOT overridden
    Configuration altConfig = HBaseConfiguration.create();
    final int nonDefaultMasterInfoPort = ;
    final int nonDefaultRegionServerPort = ;
    altConfig.setInt(HConstants.MASTER_INFO_PORT, nonDefaultMasterInfoPort);
    altConfig.setInt(HConstants.REGIONSERVER_PORT, nonDefaultRegionServerPort);
    htu = new HBaseTestingUtility(altConfig);
    assertEquals(nonDefaultMasterInfoPort,
        htu.getConfiguration().getInt(HConstants.MASTER_INFO_PORT, 0));
    assertEquals(nonDefaultRegionServerPort,
        htu.getConfiguration().getInt(HConstants.REGIONSERVER_PORT, 0));
  }
{code}

> HBaseTestingUtility#startMiniCluster throws "HMasterAddress already in use" 
> RuntimeException when a local instance of HBase is running
> --
>
> 

[jira] [Updated] (HBASE-15835) HBaseTestingUtility#startMiniCluster throws "HMasterAddress already in use" RuntimeException when a local instance of HBase is running

2016-05-24 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-15835:
--
Attachment: HBASE-15835-v3.patch

> HBaseTestingUtility#startMiniCluster throws "HMasterAddress already in use" 
> RuntimeException when a local instance of HBase is running
> --
>
> Key: HBASE-15835
> URL: https://issues.apache.org/jira/browse/HBASE-15835
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>  Labels: easyfix
> Fix For: 2.0.0
>
> Attachments: HBASE-15835-v1.patch, HBASE-15835-v2.patch, 
> HBASE-15835-v3.patch
>
>
> When a MiniCluster is being started with the 
> {{HBaseTestUtility#startMiniCluster}} method (most typically in the context 
> of JUnit testing), if a local HBase instance is already running (or for that 
> matter, another thread with another MiniCluster is already running), the 
> startup will fail with a RuntimeException saying "HMasterAddress already in 
> use", referring explicitly to contention for the same default master info 
> port (16010).
> This problem most recently came up in conjunction with HBASE-14876 and its 
> sub-JIRAs (development of new HBase-oriented Maven archetypes), but this is 
> apparently a known issue to veteran developers, who tend to set up the 
> @BeforeClass sections of their test modules with code similar to the 
> following:
> {code}
> UTIL = HBaseTestingUtility.createLocalHTU();
> // disable UI's on test cluster.
> UTIL.getConfiguration().setInt("hbase.master.info.port", -1);
> UTIL.getConfiguration().setInt("hbase.regionserver.info.port", -1);
> UTIL.startMiniCluster();
> {code}
> A comprehensive solution modeled on this should be put directly into 
> HBaseTestUtility's main constructor, using one of the following options:
> OPTION 1 (always force random port assignment):
> {code}
> this.getConfiguration().setInt(HConstants.MASTER_INFO_PORT, -1);
> this.getConfiguration().setInt(HConstants.REGIONSERVER_PORT, -1);
> {code}
> OPTION 2 (always force random port assignment if user has not explicitly 
> defined alternate port):
> {code}
> Configuration conf = this.getConfiguration();
> if (conf.getInt(HConstants.MASTER_INFO_PORT, 
> HConstants.DEFAULT_MASTER_INFOPORT)
> == HConstants.DEFAULT_MASTER_INFOPORT) {
>   conf.setInt(HConstants.MASTER_INFO_PORT, -1);
> }
> if (conf.getInt(HConstants.REGIONSERVER_PORT, 
> HConstants.DEFAULT_REGIONSERVER_PORT)
> == HConstants.DEFAULT_REGIONSERVER_PORT) {
>   conf.setInt(HConstants.REGIONSERVER_PORT, -1);
> }
> {code}
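The "override only when still at the default" pattern of OPTION 2 can be sketched in isolation (a minimal sketch: java.util.Properties stands in for Hadoop's Configuration, and the key and default value are illustrative, not HBase's real constants):

```java
import java.util.Properties;

// Minimal sketch of OPTION 2 above: force random port assignment (-1) only
// when the user left the property at its default. Properties stands in for
// Hadoop's Configuration; key and default are illustrative.
public class PortOverrideSketch {
  static final String MASTER_INFO_PORT = "hbase.master.info.port";
  static final int DEFAULT_MASTER_INFOPORT = 16010;

  // Override to -1 only if the current value equals the default.
  static void overrideIfDefault(Properties conf) {
    int current = Integer.parseInt(
        conf.getProperty(MASTER_INFO_PORT, String.valueOf(DEFAULT_MASTER_INFOPORT)));
    if (current == DEFAULT_MASTER_INFOPORT) {
      conf.setProperty(MASTER_INFO_PORT, "-1");
    }
  }

  public static void main(String[] args) {
    Properties defaults = new Properties();
    overrideIfDefault(defaults);
    System.out.println(defaults.getProperty(MASTER_INFO_PORT)); // -1: default overridden

    Properties custom = new Properties();
    custom.setProperty(MASTER_INFO_PORT, "17010");
    overrideIfDefault(custom);
    System.out.println(custom.getProperty(MASTER_INFO_PORT)); // 17010: custom value kept
  }
}
```

A user-supplied port survives untouched, which is exactly why OPTION 2 is less surprising than OPTION 1's unconditional override.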



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15835) HBaseTestingUtility#startMiniCluster throws "HMasterAddress already in use" RuntimeException when a local instance of HBase is running

2016-05-24 Thread Daniel Vimont (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Vimont updated HBASE-15835:
--
Status: Open  (was: Patch Available)

> HBaseTestingUtility#startMiniCluster throws "HMasterAddress already in use" 
> RuntimeException when a local instance of HBase is running
> --
>
> Key: HBASE-15835
> URL: https://issues.apache.org/jira/browse/HBASE-15835
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>  Labels: easyfix
> Fix For: 2.0.0
>
> Attachments: HBASE-15835-v1.patch, HBASE-15835-v2.patch
>
>
> When a MiniCluster is being started with the 
> {{HBaseTestUtility#startMiniCluster}} method (most typically in the context 
> of JUnit testing), if a local HBase instance is already running (or for that 
> matter, another thread with another MiniCluster is already running), the 
> startup will fail with a RuntimeException saying "HMasterAddress already in 
> use", referring explicitly to contention for the same default master info 
> port (16010).
> This problem most recently came up in conjunction with HBASE-14876 and its 
> sub-JIRAs (development of new HBase-oriented Maven archetypes), but this is 
> apparently a known issue to veteran developers, who tend to set up the 
> @BeforeClass sections of their test modules with code similar to the 
> following:
> {code}
> UTIL = HBaseTestingUtility.createLocalHTU();
> // disable UI's on test cluster.
> UTIL.getConfiguration().setInt("hbase.master.info.port", -1);
> UTIL.getConfiguration().setInt("hbase.regionserver.info.port", -1);
> UTIL.startMiniCluster();
> {code}
> A comprehensive solution modeled on this should be put directly into 
> HBaseTestUtility's main constructor, using one of the following options:
> OPTION 1 (always force random port assignment):
> {code}
> this.getConfiguration().setInt(HConstants.MASTER_INFO_PORT, -1);
> this.getConfiguration().setInt(HConstants.REGIONSERVER_PORT, -1);
> {code}
> OPTION 2 (always force random port assignment if user has not explicitly 
> defined alternate port):
> {code}
> Configuration conf = this.getConfiguration();
> if (conf.getInt(HConstants.MASTER_INFO_PORT, 
> HConstants.DEFAULT_MASTER_INFOPORT)
> == HConstants.DEFAULT_MASTER_INFOPORT) {
>   conf.setInt(HConstants.MASTER_INFO_PORT, -1);
> }
> if (conf.getInt(HConstants.REGIONSERVER_PORT, 
> HConstants.DEFAULT_REGIONSERVER_PORT)
> == HConstants.DEFAULT_REGIONSERVER_PORT) {
>   conf.setInt(HConstants.REGIONSERVER_PORT, -1);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15884) NPE in StoreFileScanner during reverse scan

2016-05-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297975#comment-15297975
 ] 

Ted Yu commented on HBASE-15884:


lgtm

> NPE in StoreFileScanner during reverse scan
> ---
>
> Key: HBASE-15884
> URL: https://issues.apache.org/jira/browse/HBASE-15884
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 2.0.0, 1.2.1
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: HBASE-15884-1.patch
>
>
> Here is a part of {{skipKVsNewerThanReadpoint}} method:
> {noformat}
>   hfs.next();
>   setCurrentCell(hfs.getKeyValue());
>   if (this.stopSkippingKVsIfNextRow
>   && getComparator().compareRows(cur.getRowArray(), 
> cur.getRowOffset(),
>   cur.getRowLength(), startKV.getRowArray(), 
> startKV.getRowOffset(),
>   startKV.getRowLength()) > 0) {
> {noformat}
> If hfs has no more KVs, cur will be set to null and the next step will
> throw an NPE.
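The missing guard can be sketched generically (a hedged sketch, not the actual HBase fix: Deque.poll(), which returns null when empty, stands in for hfs.getKeyValue(), and the method names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hedged sketch of the null guard needed in skipKVsNewerThanReadpoint: once
// the underlying file scanner is exhausted, the current cell is null and the
// row comparison must not run. Deque.poll() (null when empty) stands in for
// hfs.getKeyValue(); names are illustrative, not HBase's real API.
public class NullGuardSketch {
  static String advancePastRow(Deque<String> hfs, String startRow) {
    String cur = hfs.poll();                  // like hfs.next(); getKeyValue()
    while (cur != null && cur.compareTo(startRow) <= 0) {
      cur = hfs.poll();                       // becomes null once the scanner is empty
    }
    return cur;                               // caller must handle null
  }

  public static void main(String[] args) {
    Deque<String> cells = new ArrayDeque<>(java.util.List.of("a", "b"));
    // Without the "cur != null" check, compareTo would throw an NPE after
    // the deque empties; with it, the caller just receives null.
    System.out.println(advancePastRow(cells, "z"));   // null
  }
}
```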



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15884) NPE in StoreFileScanner during reverse scan

2016-05-24 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated HBASE-15884:

Status: Patch Available  (was: Open)

> NPE in StoreFileScanner during reverse scan
> ---
>
> Key: HBASE-15884
> URL: https://issues.apache.org/jira/browse/HBASE-15884
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 1.2.1, 2.0.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: HBASE-15884-1.patch
>
>
> Here is a part of {{skipKVsNewerThanReadpoint}} method:
> {noformat}
>   hfs.next();
>   setCurrentCell(hfs.getKeyValue());
>   if (this.stopSkippingKVsIfNextRow
>   && getComparator().compareRows(cur.getRowArray(), 
> cur.getRowOffset(),
>   cur.getRowLength(), startKV.getRowArray(), 
> startKV.getRowOffset(),
>   startKV.getRowLength()) > 0) {
> {noformat}
> If hfs has no more KVs, cur will be set to null and the next step will
> throw an NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15884) NPE in StoreFileScanner during reverse scan

2016-05-24 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated HBASE-15884:

Attachment: HBASE-15884-1.patch

> NPE in StoreFileScanner during reverse scan
> ---
>
> Key: HBASE-15884
> URL: https://issues.apache.org/jira/browse/HBASE-15884
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 2.0.0, 1.2.1
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: HBASE-15884-1.patch
>
>
> Here is a part of {{skipKVsNewerThanReadpoint}} method:
> {noformat}
>   hfs.next();
>   setCurrentCell(hfs.getKeyValue());
>   if (this.stopSkippingKVsIfNextRow
>   && getComparator().compareRows(cur.getRowArray(), 
> cur.getRowOffset(),
>   cur.getRowLength(), startKV.getRowArray(), 
> startKV.getRowOffset(),
>   startKV.getRowLength()) > 0) {
> {noformat}
> If hfs has no more KVs, cur will be set to null and the next step will
> throw an NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15884) NPE in StoreFileScanner during reverse scan

2016-05-24 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297967#comment-15297967
 ] 

Sergey Soldatov commented on HBASE-15884:
-

The bug is valid for 2.0 as well:
{noformat}
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.CellComparator.compareRows(CellComparator.java:351)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.skipKVsNewerThanReadpoint(StoreFileScanner.java:259)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:493)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.backwardSeek(StoreFileScanner.java:537)
at 
org.apache.hadoop.hbase.regionserver.ReversedStoreScanner.seekScanners(ReversedStoreScanner.java:82)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.resetScannerStack(StoreScanner.java:800)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.checkReseek(StoreScanner.java:766)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:488)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5737)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5883)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5676)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2743)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34818)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2273)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:116)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:137)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:112)
at java.lang.Thread.run(Thread.java:745)
{noformat}

> NPE in StoreFileScanner during reverse scan
> ---
>
> Key: HBASE-15884
> URL: https://issues.apache.org/jira/browse/HBASE-15884
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 2.0.0, 1.2.1
>Reporter: Sergey Soldatov
>
> Here is a part of {{skipKVsNewerThanReadpoint}} method:
> {noformat}
>   hfs.next();
>   setCurrentCell(hfs.getKeyValue());
>   if (this.stopSkippingKVsIfNextRow
>   && getComparator().compareRows(cur.getRowArray(), 
> cur.getRowOffset(),
>   cur.getRowLength(), startKV.getRowArray(), 
> startKV.getRowOffset(),
>   startKV.getRowLength()) > 0) {
> {noformat}
> If hfs has no more KVs, cur will be set to null and the next step will
> throw an NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-15884) NPE in StoreFileScanner during reverse scan

2016-05-24 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov reassigned HBASE-15884:
---

Assignee: Sergey Soldatov

> NPE in StoreFileScanner during reverse scan
> ---
>
> Key: HBASE-15884
> URL: https://issues.apache.org/jira/browse/HBASE-15884
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 2.0.0, 1.2.1
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>
> Here is a part of {{skipKVsNewerThanReadpoint}} method:
> {noformat}
>   hfs.next();
>   setCurrentCell(hfs.getKeyValue());
>   if (this.stopSkippingKVsIfNextRow
>   && getComparator().compareRows(cur.getRowArray(), 
> cur.getRowOffset(),
>   cur.getRowLength(), startKV.getRowArray(), 
> startKV.getRowOffset(),
>   startKV.getRowLength()) > 0) {
> {noformat}
> If hfs has no more KVs, cur will be set to null and the next step will
> throw an NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15884) NPE in StoreFileScanner during reverse scan

2016-05-24 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated HBASE-15884:

Affects Version/s: 2.0.0

> NPE in StoreFileScanner during reverse scan
> ---
>
> Key: HBASE-15884
> URL: https://issues.apache.org/jira/browse/HBASE-15884
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 2.0.0, 1.2.1
>Reporter: Sergey Soldatov
>
> Here is a part of {{skipKVsNewerThanReadpoint}} method:
> {noformat}
>   hfs.next();
>   setCurrentCell(hfs.getKeyValue());
>   if (this.stopSkippingKVsIfNextRow
>   && getComparator().compareRows(cur.getRowArray(), 
> cur.getRowOffset(),
>   cur.getRowLength(), startKV.getRowArray(), 
> startKV.getRowOffset(),
>   startKV.getRowLength()) > 0) {
> {noformat}
> If hfs has no more KVs, cur will be set to null and the next step will
> throw an NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15884) NPE in StoreFileScanner during reverse scan

2016-05-24 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated HBASE-15884:

Affects Version/s: (was: 2.0.0)
   1.2.1

> NPE in StoreFileScanner during reverse scan
> ---
>
> Key: HBASE-15884
> URL: https://issues.apache.org/jira/browse/HBASE-15884
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 1.2.1
>Reporter: Sergey Soldatov
>
> Here is a part of {{skipKVsNewerThanReadpoint}} method:
> {noformat}
>   hfs.next();
>   setCurrentCell(hfs.getKeyValue());
>   if (this.stopSkippingKVsIfNextRow
>   && getComparator().compareRows(cur.getRowArray(), 
> cur.getRowOffset(),
>   cur.getRowLength(), startKV.getRowArray(), 
> startKV.getRowOffset(),
>   startKV.getRowLength()) > 0) {
> {noformat}
> If hfs has no more KVs, cur will be set to null and the next step will
> throw an NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15884) NPE in StoreFileScanner during reverse scan

2016-05-24 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created HBASE-15884:
---

 Summary: NPE in StoreFileScanner during reverse scan
 Key: HBASE-15884
 URL: https://issues.apache.org/jira/browse/HBASE-15884
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Affects Versions: 2.0.0
Reporter: Sergey Soldatov


Here is a part of {{skipKVsNewerThanReadpoint}} method:
{noformat}
  hfs.next();
  setCurrentCell(hfs.getKeyValue());
  if (this.stopSkippingKVsIfNextRow
  && getComparator().compareRows(cur.getRowArray(), cur.getRowOffset(),
  cur.getRowLength(), startKV.getRowArray(), startKV.getRowOffset(),
  startKV.getRowLength()) > 0) {

{noformat}
If hfs has no more KVs, cur will be set to null and the next step will
throw an NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15806) An endpoint-based export tool

2016-05-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297934#comment-15297934
 ] 

Ted Yu commented on HBASE-15806:


Point taken.
Reverted.

ChiaPing:
Mind addressing the security issue?

> An endpoint-based export tool
> -
>
> Key: HBASE-15806
> URL: https://issues.apache.org/jira/browse/HBASE-15806
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: Experiment.png, HBASE-15806.patch
>
>
> The time for exporting a table can be reduced if we use the endpoint technique
> to export the HDFS files via the region server rather than via the HBase client.
> In my experiments, the elapsed time of the endpoint-based export can be less than
> half that of the current export tool (with HDFS compression enabled).
> But the shortcoming is that we need to alter the table to deploy the endpoint.
> Any comments about this? Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15830) Sasl encryption doesn't work with AsyncRpcChannelImpl

2016-05-24 Thread Colin Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297884#comment-15297884
 ] 

Colin Ma commented on HBASE-15830:
--

Hi [~ghelmling], thanks for the review. I updated the patch according to your
comments.
Please see my answers to the following comments:
* in getChannelHeaderBytes(AuthMethod authMethod), why not use
IPCUtil.getTotalSizeWhenWrittenDelimited() instead of hard-coding the extra 4
bytes?
For every message to the RpcServer, the total size should be the size of the
message body + 4 (the size of an int). But
IPCUtil.getTotalSizeWhenWrittenDelimited() can't compute the correct size for
the RpcServer, so the extra 4 bytes are used.
* Don't we need to write the connection header in both cases?
If qop == auth, the connection header will be written in
successfulConnectHandler.onSuccess(ctx.channel()). To avoid the deadlock
problem (see the [source
code|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AsyncRpcChannelImpl.java#L212]),
 the connection header should be written in SaslClientHandler.channelRead().
For the test case, I haven't tested this on a real cluster; I just tested with
AbstractTestSecureIPC.testSaslWithCommonQop().
The Review Board link is in this JIRA; you can also publish comments there.
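The framing described above (message body preceded by a 4-byte int length prefix, so totalSize = bodySize + 4) can be sketched as follows (a minimal sketch; the buffer layout is illustrative, not HBase's exact wire format):

```java
import java.nio.ByteBuffer;

// Sketch of the framing discussed above: each message to the RpcServer is its
// serialized body preceded by a 4-byte int holding the body's length, hence
// totalSize = bodySize + 4. Illustrative only, not the exact HBase wire format.
public class LengthPrefixSketch {
  static byte[] frame(byte[] body) {
    ByteBuffer buf = ByteBuffer.allocate(4 + body.length);
    buf.putInt(body.length);   // the "extra 4 bytes": an int length prefix
    buf.put(body);             // followed by the message body itself
    return buf.array();
  }

  public static void main(String[] args) {
    byte[] body = "hello".getBytes();
    byte[] framed = frame(body);
    System.out.println(framed.length);                    // 9 = 5-byte body + 4
    System.out.println(ByteBuffer.wrap(framed).getInt()); // 5, the body size
  }
}
```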

> Sasl encryption doesn't work with AsyncRpcChannelImpl
> -
>
> Key: HBASE-15830
> URL: https://issues.apache.org/jira/browse/HBASE-15830
> Project: HBase
>  Issue Type: Bug
>Reporter: Colin Ma
> Attachments: HBASE-15830.001.patch, HBASE-15830.002.patch, 
> HBASE-15830.003.patch
>
>
> Currently, sasl encryption doesn't work with AsyncRpcChannelImpl; there are 3 
> problems:
> 1. 
> [sourcecode|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslClientHandler.java#L308]
>  will throw the following exception:
> java.lang.UnsupportedOperationException: direct buffer
>   at 
> io.netty.buffer.UnpooledUnsafeDirectByteBuf.array(UnpooledUnsafeDirectByteBuf.java:199)
>   at 
> org.apache.hadoop.hbase.security.SaslClientHandler.write(SaslClientHandler.java:308)
> 2. 
> [sourcecode|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AsyncRpcChannelImpl.java#L212]
>  has a deadlock problem.
> 3. TestAsyncSecureIPC doesn't cover the sasl encryption test case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15830) Sasl encryption doesn't work with AsyncRpcChannelImpl

2016-05-24 Thread Colin Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Ma updated HBASE-15830:
-
Attachment: HBASE-15830.003.patch

> Sasl encryption doesn't work with AsyncRpcChannelImpl
> -
>
> Key: HBASE-15830
> URL: https://issues.apache.org/jira/browse/HBASE-15830
> Project: HBase
>  Issue Type: Bug
>Reporter: Colin Ma
> Attachments: HBASE-15830.001.patch, HBASE-15830.002.patch, 
> HBASE-15830.003.patch
>
>
> Currently, sasl encryption doesn't work with AsyncRpcChannelImpl; there are 3 
> problems:
> 1. 
> [sourcecode|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslClientHandler.java#L308]
>  will throw the following exception:
> java.lang.UnsupportedOperationException: direct buffer
>   at 
> io.netty.buffer.UnpooledUnsafeDirectByteBuf.array(UnpooledUnsafeDirectByteBuf.java:199)
>   at 
> org.apache.hadoop.hbase.security.SaslClientHandler.write(SaslClientHandler.java:308)
> 2. 
> [sourcecode|https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/AsyncRpcChannelImpl.java#L212]
>  has a deadlock problem.
> 3. TestAsyncSecureIPC doesn't cover the sasl encryption test case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14623) Implement dedicated WAL for system tables

2016-05-24 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15297805#comment-15297805
 ] 

Stephen Yuan Jiang commented on HBASE-14623:


I believe the quota manager needs the namespace table to be up.


> Implement dedicated WAL for system tables
> -
>
> Key: HBASE-14623
> URL: https://issues.apache.org/jira/browse/HBASE-14623
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: wal
> Fix For: 2.0.0
>
> Attachments: 14623-v1.txt, 14623-v2.txt, 14623-v2.txt, 14623-v2.txt, 
> 14623-v2.txt, 14623-v3.txt, 14623-v4.txt
>
>
> As Stephen suggested in parent JIRA, dedicating separate WAL for system 
> tables (other than hbase:meta) should be done in new JIRA.
> This task is to fulfill the system WAL separation.
> Below is summary of discussion:
> With system tables having their own WAL, we would recover system tables faster
> (fast log split, fast log replay). It would probably benefit the
> AssignmentManager in system table region assignment. At this time, the new
> AssignmentManager is not planned to change the WAL, so this JIRA is good for
> the overall system, not specific to the AssignmentManager.
> There are 3 strategies for implementing system table WAL:
> 1. one WAL for all non-meta system tables
> 2. one WAL for each non-meta system table
> 3. one WAL for each region of non-meta system table
> Currently most system tables are one-region tables (only the ACL table may
> become big), so choices 2 and 3 are basically the same.
> From an implementation point of view, choices 2 and 3 are cleaner than choice 1
> (we already have one WAL for the META table and can reuse that logic).
> With choice 2 or 3, assignment manager performance should not be impacted, and
> it would be easier for the assignment manager to assign system table regions
> (e.g. without waiting for user table log splits to complete before assigning a
> system table region).
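The three strategies can be sketched as a grouping function from (table, region) to a WAL name (a hedged sketch: the strategy names and the "hbase:" prefix check are illustrative, not HBase's actual WALProvider API):

```java
// Sketch of the three WAL-grouping strategies discussed above, expressed as a
// function from (table, region) to a WAL name. Strategy names and the "hbase:"
// prefix check are illustrative; this is not HBase's real WALProvider API.
public class SystemWalGrouping {
  enum Strategy { ONE_WAL_ALL_SYSTEM, ONE_WAL_PER_TABLE, ONE_WAL_PER_REGION }

  static String walFor(Strategy s, String table, String region) {
    // Non-meta system tables get the dedicated treatment; meta already has its own WAL.
    boolean system = table.startsWith("hbase:") && !table.equals("hbase:meta");
    if (!system) {
      return "default";           // user tables share the regular WAL
    }
    switch (s) {
      case ONE_WAL_ALL_SYSTEM: return "system";                         // strategy 1
      case ONE_WAL_PER_TABLE:  return "system." + table;                // strategy 2
      default:                 return "system." + table + "." + region; // strategy 3
    }
  }

  public static void main(String[] args) {
    System.out.println(walFor(Strategy.ONE_WAL_ALL_SYSTEM, "hbase:acl", "r1")); // system
    System.out.println(walFor(Strategy.ONE_WAL_PER_TABLE, "hbase:acl", "r1"));  // system.hbase:acl
    System.out.println(walFor(Strategy.ONE_WAL_PER_REGION, "usertable", "r1")); // default
  }
}
```

Since most non-meta system tables have a single region, strategies 2 and 3 collapse to the same grouping in practice, as the summary notes.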



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)