[jira] [Commented] (HBASE-15472) replication_admin_test creates a table it doesn't use

2016-03-30 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219416#comment-15219416
 ] 

Ashish Singhi commented on HBASE-15472:
---

+1

> replication_admin_test creates a table it doesn't use
> -
>
> Key: HBASE-15472
> URL: https://issues.apache.org/jira/browse/HBASE-15472
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, shell
>Affects Versions: 2.0.0, 1.2.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Minor
>  Labels: replication, shell
> Attachments: HBASE-15472.patch
>
>
> I noticed while adding tests to replication_admin_test.rb for HBASE-12940 
> that it creates an HBase table "hbase_shell_tests_table" that is never used 
> in any of the suite's tests. Removing the table creation statements speeds up 
> the suite locally from 1min 10s to 2s. 
> Note that until HBASE-14562 is worked, this test suite doesn't run as part of 
> the automatic test runs.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15424) Add bulk load hfile-refs for replication in ZK after the event is appended in the WAL

2016-03-30 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219392#comment-15219392
 ] 

Ashish Singhi commented on HBASE-15424:
---

The branch-1 patch requires no change; HBASE-15265 was committed only to the 
master branch.


> Add bulk load hfile-refs for replication in ZK after the event is appended in 
> the WAL
> -
>
> Key: HBASE-15424
> URL: https://issues.apache.org/jira/browse/HBASE-15424
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.3.0
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15424.branch-1.patch, HBASE-15424.patch, 
> HBASE-15424.v1.patch, HBASE-15424.v1.patch, HBASE-15424.v2.patch, 
> HBASE-15424.v2.patch, HBASE-15424.v2.patch, HBASE-15424.v3.patch
>
>
> Currently the hfile-refs znode used for tracking bulk loaded data replication 
> is added first, and then the bulk load event is appended to the WAL. This 
> may lead to an issue where the znode is added in ZK but the append to the WAL 
> fails (due to some problem in the DN), so the znode is left in ZK as it is 
> and will not allow the hfile to be deleted from the archive directory. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14613) Remove MemStoreChunkPool?

2016-03-30 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219386#comment-15219386
 ] 

Jingcheng Du commented on HBASE-14613:
--

Adding the G1GC settings I used when I ran the test, in case someone needs them:
-XX:+UseG1GC -Xms64g -Xmx64g -XX:+AlwaysPreTouch -XX:+UnlockDiagnosticVMOptions 
-XX:+UnlockExperimentalVMOptions -XX:MaxGCPauseMillis=100 
-XX:ParallelGCThreads=38 -XX:ConcGCThreads=22 -XX:G1HeapWastePercent=20 
-XX:G1NewSizePercent=2 -XX:G1MaxNewSizePercent=20 -XX:+ParallelRefProcEnabled 
-XX:+PrintAdaptiveSizePolicy -XX:+PrintFlagsFinal -XX:+PrintGCDateStamps 
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintReferenceGC 
-XX:+PrintTenuringDistribution -Dcom.sun.management.jmxremote=true 
-Dcom.sun.management.jmxremote.port=19090 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.autodiscovery=true -Xloggc:/xxx/gc.log

> Remove MemStoreChunkPool?
> -
>
> Key: HBASE-14613
> URL: https://issues.apache.org/jira/browse/HBASE-14613
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Priority: Minor
> Attachments: 14613-0.98.txt, gc.png, writes.png
>
>
> I just stumbled across MemStoreChunkPool. The idea behind is to reuse chunks 
> of allocations rather than letting the GC handle this.
> Now, it's off by default, and it seems to me to be of dubious value. I'd 
> recommend just removing it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14920) Compacting Memstore

2016-03-30 Thread Eshcar Hillel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219382#comment-15219382
 ] 

Eshcar Hillel commented on HBASE-14920:
---

I recently updated YCSB to also support delete operations.
The delete operations in the benchmark I ran ended up as cells of type 
*DeleteFamily*; I would have expected the tombstones to be of type Delete. 
This could be an issue with my YCSB client, so we can ignore it for the 
moment. 
Has anyone else seen this before?
What exactly is the effect of a DeleteFamily cell in a normal on-disk 
compaction? Does it remove all the entries in the column family?
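
For reference, a minimal client-side sketch (my reading of the 1.x client API, not taken from the YCSB client; the table, row, and column names are made up) of how the tombstone type depends on how the Delete is built:
{code}
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteMarkerExample {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("usertable"))) {
      byte[] row = Bytes.toBytes("user1");

      // No column specified: the whole row is deleted, which is written as a
      // DeleteFamily marker for every column family of the table.
      table.delete(new Delete(row));

      // All versions of one column: written as a DeleteColumn marker instead.
      Delete oneColumn = new Delete(row);
      oneColumn.addColumns(Bytes.toBytes("f"), Bytes.toBytes("field0"));
      table.delete(oneColumn);
    }
  }
}
{code}
If the YCSB client issues whole-row deletes, seeing DeleteFamily markers would be expected rather than a bug.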

[~anoop.hbase] The patch is in RB (there's a link to it in the Jira).

An in-memory compaction removes entries from memory, much like a flush to 
disk would.
The only reason to keep records in the WAL is that the data is *not yet* persistent 
on disk. 
If we remove data from memory during in-memory compaction, it will never 
reach disk (since a more recent version already exists), so there is no point in 
keeping those records in the WAL, and they can be removed from it.

To summarize this point: with a compacting memstore, tombstones are 
not removed during in-memory compaction (the equivalent of a minor 
compaction) and have to wait until they hit disk to be removed by a major 
compaction.

> Compacting Memstore
> ---
>
> Key: HBASE-14920
> URL: https://issues.apache.org/jira/browse/HBASE-14920
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-14920-V01.patch, HBASE-14920-V02.patch, 
> move.to.junit4.patch
>
>
> Implementation of a new compacting memstore with non-optimized immutable 
> segment representation



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15485) Filter.reset() should not be called between batches

2016-03-30 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219379#comment-15219379
 ] 

ramkrishna.s.vasudevan commented on HBASE-15485:


bq.moreCellsInRow
This already does the byte check, and BATCH_LIMIT_REACHED is set only 
after that, so it should be fine.

> Filter.reset() should not be called between batches
> ---
>
> Key: HBASE-15485
> URL: https://issues.apache.org/jira/browse/HBASE-15485
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-15485-v1.patch, HBASE-15485-v2.patch
>
>
> As discussed in HBASE-15325, now we will resetFilters if partial result not 
> formed, but we should not reset filters when batch limit reached



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15485) Filter.reset() should not be called between batches

2016-03-30 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-15485:
--
Attachment: HBASE-15485-v2.patch

Change the state and add a test

> Filter.reset() should not be called between batches
> ---
>
> Key: HBASE-15485
> URL: https://issues.apache.org/jira/browse/HBASE-15485
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-15485-v1.patch, HBASE-15485-v2.patch
>
>
> As discussed in HBASE-15325, now we will resetFilters if partial result not 
> formed, but we should not reset filters when batch limit reached



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15437) Response size calculated in RPCServer for warning tooLarge responses does NOT count CellScanner payload

2016-03-30 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219369#comment-15219369
 ] 

Anoop Sam John commented on HBASE-15437:


Why? We calculate the responseSize before this. It looks like the warn message 
is intended to show both the time and the response size.
You are checking the single-row get case. What about scans and multi-gets? 
Those were already using cellBlocks as the payload; did we also miss the 
cellBlocks size there when calculating the total response size?
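
For what it is worth, a rough sketch (a hypothetical helper, not something in the patch or in RpcServer) of the arithmetic of adding up the cell payload size; note that advancing the scanner consumes it, so a real fix would need a size hint on the cell block instead:
{code}
import java.io.IOException;
import org.apache.hadoop.hbase.CellScanner;
import org.apache.hadoop.hbase.CellUtil;

public final class ResponseSizeUtil {
  private ResponseSizeUtil() {
  }

  /**
   * Adds up an estimate of the serialized size of every cell in the payload.
   * Illustration only: walking the scanner here would consume it before the
   * response is written, so RpcServer could not call this as-is.
   */
  public static long estimatedCellScannerSize(CellScanner cells) throws IOException {
    long size = 0;
    if (cells == null) {
      return size;
    }
    while (cells.advance()) {
      size += CellUtil.estimatedSerializedSizeOf(cells.current());
    }
    return size;
  }
}
{code}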

> Response size calculated in RPCServer for warning tooLarge responses does NOT 
> count CellScanner payload
> ---
>
> Key: HBASE-15437
> URL: https://issues.apache.org/jira/browse/HBASE-15437
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: deepankar
>
> After HBASE-13158, where we respond to RPCs with cells in the payload, 
> the protobuf response will just have the count of cells to read from the 
> payload. There is a set of features where we log a warning in RPCServer 
> whenever the response is tooLarge, but this size no longer considers the 
> sizes of the cells in the payload CellScanner. Code from RPCServer:
> {code}
>   long responseSize = result.getSerializedSize();
>   // log any RPC responses that are slower than the configured warn
>   // response time or larger than configured warning size
>   boolean tooSlow = (processingTime > warnResponseTime && warnResponseTime > -1);
>   boolean tooLarge = (responseSize > warnResponseSize && warnResponseSize > -1);
>   if (tooSlow || tooLarge) {
>     // when tagging, we let TooLarge trump TooSmall to keep output simple
>     // note that large responses will often also be slow.
>     logResponse(new Object[]{param},
>         md.getName(), md.getName() + "(" + param.getClass().getName() + ")",
>         (tooLarge ? "TooLarge" : "TooSlow"),
>         status.getClient(), startTime, processingTime, qTime,
>         responseSize);
>   }
> {code}
> Should this feature no longer be supported, or should we add a method to 
> CellScanner, or a new interface, which returns the serialized size (though this 
> might not account for the compression codecs that might be used for the 
> response)? Any other ideas how this could be fixed?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15485) Filter.reset() should not be called between batches

2016-03-30 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219363#comment-15219363
 ] 

Anoop Sam John commented on HBASE-15485:


That looks simple and perfect. Ya, IMO also it will be better to change the 
state; otherwise we will end up adding more byte checks.
Pls add tests for all corner cases (including batch size reached = row end 
reached = scan end reached).

> Filter.reset() should not be called between batches
> ---
>
> Key: HBASE-15485
> URL: https://issues.apache.org/jira/browse/HBASE-15485
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-15485-v1.patch
>
>
> As discussed in HBASE-15325, now we will resetFilters if partial result not 
> formed, but we should not reset filters when batch limit reached



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15569) Make Bytes.toStringBinary faster

2016-03-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219361#comment-15219361
 ] 

Hadoop QA commented on HBASE-15569:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
9s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 4s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 42s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
7s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 2s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12796232/HBASE-15569.patch |
| JIRA Issue | HBASE-15569 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / d6fd859 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 |
| findbugs | v3.0.0 |
| 

[jira] [Commented] (HBASE-15485) Filter.reset() should not be called between batches

2016-03-30 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219359#comment-15219359
 ] 

Phil Yang commented on HBASE-15485:
---

In RegionScannerImpl.populateResult, we can add this:
{code}
   if (!moreCellsInRow) incrementCountOfRowsScannedMetric(scannerContext);
-  if (scannerContext.checkBatchLimit(limitScope)) {
+  if (moreCellsInRow && scannerContext.checkBatchLimit(limitScope)) {
     return scannerContext.setScannerState(NextState.BATCH_LIMIT_REACHED).hasMoreValues();
{code}

It passes the test I added (not included in the v1 patch); I have not run all 
tests yet.

> Filter.reset() should not be called between batches
> ---
>
> Key: HBASE-15485
> URL: https://issues.apache.org/jira/browse/HBASE-15485
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-15485-v1.patch
>
>
> As discussed in HBASE-15325, now we will resetFilters if partial result not 
> formed, but we should not reset filters when batch limit reached



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15485) Filter.reset() should not be called between batches

2016-03-30 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219355#comment-15219355
 ] 

Anoop Sam John commented on HBASE-15485:


bq.the other is change this state to MORE_VALUES because this row is end.
To do this, what extra work do we need to do? Sorry, I have not checked the code.

> Filter.reset() should not be called between batches
> ---
>
> Key: HBASE-15485
> URL: https://issues.apache.org/jira/browse/HBASE-15485
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-15485-v1.patch
>
>
> As discussed in HBASE-15325, now we will resetFilters if partial result not 
> formed, but we should not reset filters when batch limit reached



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15485) Filter.reset() should not be called between batches

2016-03-30 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219352#comment-15219352
 ] 

ramkrishna.s.vasudevan commented on HBASE-15485:


bq.When batched read happening and batch size is say 5 and one row is having 5 
cells in it, which state we will be in? Whether we will call filter rest then 
also correctly? Add a test?
This is a very good comment. 
bq.I think it will be BATCH_LIMIT_REACHED in your scene rather than MORE_VALUES.
I think it will be BATCH_LIMIT_REACHED, right? I don't think we check for the 
next cell before setting BATCH_LIMIT_REACHED. 
bq.We will get NO_MORE_VALUES iff the scan is finishing?
I think yes, only when the stopRow is reached.

> Filter.reset() should not be called between batches
> ---
>
> Key: HBASE-15485
> URL: https://issues.apache.org/jira/browse/HBASE-15485
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-15485-v1.patch
>
>
> As discussed in HBASE-15325, now we will resetFilters if partial result not 
> formed, but we should not reset filters when batch limit reached



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15485) Filter.reset() should not be called between batches

2016-03-30 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219348#comment-15219348
 ] 

Phil Yang commented on HBASE-15485:
---

If we have 5 cells in a row and setBatch(5), we will get BATCH_LIMIT_REACHED 
after scanning the last cell of this row (if it is the last row, we will get 
NO_MORE_VALUES as expected), and we will not reset the filter in the v1 patch, 
so when we scan the next row the filter will be wrong.

There may be two ways to fix this: one is to keep the state as BATCH_LIMIT_REACHED 
and additionally check whether it is the last part of this row; the other is to 
change the state to MORE_VALUES because the row has ended.
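
For context, a minimal client-side sketch of the scenario being discussed (one row with exactly 5 cells, scanned with setBatch(5)); the table and column names are made up:
{code}
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchScanExample {
  public static void main(String[] args) throws IOException {
    byte[] fam = Bytes.toBytes("f");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("t"))) {
      // One row with exactly 5 cells.
      Put put = new Put(Bytes.toBytes("row1"));
      for (int i = 0; i < 5; i++) {
        put.addColumn(fam, Bytes.toBytes("q" + i), Bytes.toBytes("v" + i));
      }
      table.put(put);

      // With setBatch(5) the whole row fits in a single batch. Whether the
      // server treats that boundary as BATCH_LIMIT_REACHED or MORE_VALUES is
      // exactly the corner case above, and it decides whether Filter.reset()
      // runs before the next row.
      Scan scan = new Scan();
      scan.setBatch(5);
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result result : scanner) {
          System.out.println("cells in this Result: " + result.rawCells().length);
        }
      }
    }
  }
}
{code}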

> Filter.reset() should not be called between batches
> ---
>
> Key: HBASE-15485
> URL: https://issues.apache.org/jira/browse/HBASE-15485
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-15485-v1.patch
>
>
> As discussed in HBASE-15325, now we will resetFilters if partial result not 
> formed, but we should not reset filters when batch limit reached



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15506) FSDataOutputStream.write() allocates new byte buffer on each operation

2016-03-30 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219340#comment-15219340
 ] 

Lars Hofhansl commented on HBASE-15506:
---

nginx is implemented in C/C++.
We can easily find about as many articles and examples in favor of that approach 
as against it.

I am only saying that a statement like "heap allocations are bad" is simply not 
generally true.
Let's test and get some numbers, and I will happily shut up about it (in fact, if 
I was wrong I will have learned something).


> FSDataOutputStream.write() allocates new byte buffer on each operation
> --
>
> Key: HBASE-15506
> URL: https://issues.apache.org/jira/browse/HBASE-15506
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>
> Deep inside stack trace in DFSOutputStream.createPacket.
> This should be opened in HDFS. This JIRA is to track HDFS work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15537) Add multi WAL support for AsyncFSWAL

2016-03-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219335#comment-15219335
 ] 

Sean Busbey commented on HBASE-15537:
-

IMHO, nesting WALProviders is the correct way to do this. Up until HBASE-14448, 
multiwal did that nesting; i.e. the GroupingProvider just handled mapping regions 
to WALProvider instances, and the actual WAL was configurable.

The changes in HBASE-14448 probably should have cached WAL instances instead of 
FSHLog instances. If you switch to that and put back the delegate WAL 
creation from before HBASE-14448, then adding a provider that creates 
AsyncFSWAL instead of the default FSHLog should be straightforward.
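
To illustrate the nesting idea only (these are hypothetical stand-ins, not the real WAL/WALProvider interfaces), the delegation pattern would look roughly like this:
{code}
import java.util.HashMap;
import java.util.Map;

/** Hypothetical stand-ins for the real WAL and WALProvider interfaces. */
interface Wal { }
interface WalProvider {
  Wal getWal(String groupKey);
}

/** Grouping provider that only maps group keys to cached delegate providers. */
class GroupingWalProvider implements WalProvider {

  /** The factory decides the concrete delegate (FSHLog-backed, AsyncFSWAL-backed, ...). */
  interface WalProviderFactory {
    WalProvider create(String groupKey);
  }

  private final Map<String, WalProvider> delegates = new HashMap<String, WalProvider>();
  private final WalProviderFactory delegateFactory;

  GroupingWalProvider(WalProviderFactory delegateFactory) {
    this.delegateFactory = delegateFactory;
  }

  @Override
  public synchronized Wal getWal(String groupKey) {
    // Lazily create and cache one delegate provider per group, then delegate.
    WalProvider delegate = delegates.get(groupKey);
    if (delegate == null) {
      delegate = delegateFactory.create(groupKey);
      delegates.put(groupKey, delegate);
    }
    return delegate.getWal(groupKey);
  }
}
{code}
The grouping logic never hard-codes which WAL implementation backs a group; that stays a choice of the nested provider.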

> Add multi WAL support for AsyncFSWAL
> 
>
> Key: HBASE-15537
> URL: https://issues.apache.org/jira/browse/HBASE-15537
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15537.patch
>
>
> The multi WAL should not be bound with {{FSHLog}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15485) Filter.reset() should not be called between batches

2016-03-30 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219327#comment-15219327
 ] 

Anoop Sam John commented on HBASE-15485:


You mean call the reset API when !MORE_VALUES?
But this is w.r.t. more data for the scan rather than more cells in the same row, 
right? We will get NO_MORE_VALUES iff the scan is finishing?

> Filter.reset() should not be called between batches
> ---
>
> Key: HBASE-15485
> URL: https://issues.apache.org/jira/browse/HBASE-15485
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-15485-v1.patch
>
>
> As discussed in HBASE-15325, now we will resetFilters if partial result not 
> formed, but we should not reset filters when batch limit reached



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-7912) HBase Backup/Restore Based on HBase Snapshot

2016-03-30 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219325#comment-15219325
 ] 

Vladimir Rodionov commented on HBASE-7912:
--

{quote}
We had discussed offline about this, and I thought the plan was to use the 
backupId as the leading dir name.
{quote}

-1. It is the way it is. There is a reason for this layout: each table has a list 
(set) of backup sessions, and everything depends on this layout.

{quote}
You have not used and obsolite fields in the PB structures. Since this is new 
work, whatever is not used and not needed should be removed from the patches.
{quote}

OK, this can be done, but some "unused" fields contain backup session stats. 
We do not use them now, but we may in the future.

{quote}
BackupImage itself just duplicates the information that we already have in the 
manifest files. Do we really need that structure at all. Can we instead keep a 
list of backup_ids and read the manifests at the time of restore?
{quote}

Need to estimate the work.

{quote}
full backup t1 with backup_id = bid1
incremental backup t2 with backup_id = bid2
incremental backup t1 with backup_id = bid3
{quote}

When you do an incremental backup on t2, the system automatically expands the 
backup set and adds ALL tables that have at least one full backup image. So it 
will add t1 alongside t2. This comes for free, because we copy only WAL files. 
No data loss.








> HBase Backup/Restore Based on HBase Snapshot
> 
>
> Key: HBASE-7912
> URL: https://issues.apache.org/jira/browse/HBASE-7912
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Richard Ding
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: HBaseBackupRestore-Jira-7912-DesignDoc-v1.pdf, 
> HBaseBackupRestore-Jira-7912-DesignDoc-v2.pdf, 
> HBaseBackupRestore-Jira-7912-v4.pdf, HBaseBackupRestore-Jira-7912-v5 .pdf, 
> HBaseBackupRestore-Jira-7912-v6.pdf, HBaseBackupandRestore.pdf, 
> HBase_BackupRestore-Jira-7912-CLI-v1.pdf
>
>
> Finally, we completed the implementation of our backup/restore solution, and 
> would like to share it with the community through this jira. 
> We are leveraging the existing hbase snapshot feature and provide a general 
> solution to common users. Our full backup uses snapshots to capture 
> metadata locally and exportsnapshot to move data to another cluster; 
> the incremental backup uses an offline WALPlayer to back up HLogs; we also 
> leverage distributed log roll and distributed flush to improve performance; 
> other added values include convert, merge, progress report, and CLI commands, so 
> that a common user can back up hbase data without in-depth knowledge of hbase. 
> Our solution also contains some usability features for enterprise users. 
> The detailed design document and CLI commands will be attached to this jira. We 
> plan to use 10~12 subtasks to cover each of the following features, and 
> document the detailed implementation in the subtasks: 
> * *Full Backup*: provide local and remote backup/restore for a list of tables
> * *offline-WALPlayer* to convert HLog to HFiles offline (for incremental 
> backup)
> * *distributed* Logroll and distributed flush 
> * Backup *Manifest* and history
> * *Incremental* backup: to build on top of full backup as daily/weekly backup 
> * *Convert*  incremental backup WAL files into hfiles
> * *Merge* several backup images into one (like merging weekly into monthly)
> * *Add and remove* tables to and from a backup image
> * *Cancel* a backup process
> * backup progress *status*
> * full backup based on *existing snapshot*
> *-*
> *Below is the original description, to keep here as the history for the 
> design and discussion back in 2013*
> There have been attempts in the past to come up with a viable HBase 
> backup/restore solution (e.g., HBASE-4618).  Recently, there are many 
> advancements and new features in HBase, for example, FileLink, Snapshot, and 
> Distributed Barrier Procedure. This is a proposal for a backup/restore 
> solution that utilizes these new features to achieve better performance and 
> consistency. 
>  
> A common practice of backup and restore in databases is to first take a full 
> baseline backup, and then periodically take incremental backups that capture 
> the changes since the full baseline backup. An HBase cluster can store massive 
> amounts of data. The combination of full backups with incremental backups has 
> tremendous benefits for HBase as well. The following is a typical scenario 
> for full and incremental backup.
> # The user takes a full backup of a table or a set of tables in HBase. 
> # The user schedules periodical incremental backups to capture the changes 
> from the full backup, or from last 

[jira] [Updated] (HBASE-15569) Make Bytes.toStringBinary faster

2016-03-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15569:
--
Status: Patch Available  (was: Open)

> Make Bytes.toStringBinary faster
> 
>
> Key: HBASE-15569
> URL: https://issues.apache.org/jira/browse/HBASE-15569
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Reporter: Junegunn Choi
>Assignee: Junegunn Choi
>Priority: Minor
> Attachments: HBASE-15569.patch
>
>
> Bytes.toStringBinary is quite expensive due to its use of {{String.format}}. 
> It seems to me that {{String.format}} is overkill for the purpose and I could 
> actually make the function up to 45-times faster by replacing the part with a 
> simpler hand-crafted code.
> This is probably a non-issue for HBase server as the function is not used in 
> performance-sensitive contexts but I figured it wouldn't hurt to make it 
> faster as it's widely used in builtin tools - Shell, {{HFilePrettyPrinter}} 
> with {{-p}} option, etc. - and it can be used in clients.
> h4. Background:
> We have [an HBase monitoring 
> tool|https://github.com/kakao/hbase-region-inspector] that periodically 
> collects the information of the regions and it calls {{Bytes.toStringBinary}} 
> during the process to make some information suitable for display. Profiling 
> revealed that a large portion of the processing time was spent in 
> {{String.format}}.
> h4. Micro-benchmark:
> {code}
> byte[] bytes = new byte[256];
> for (int i = 0; i < bytes.length; ++i) {
>   // Mixture of printable and non-printable characters.
>   // Maximal performance gain (45x) is observed when the array is solely
>   // composed of non-printable characters.
>   bytes[i] = (byte) i;
> }
> long started = System.nanoTime();
> for (int i = 0; i < 100; ++i) {
>   Bytes.toStringBinary(bytes);
> }
> System.out.println(TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - 
> started));
> {code}
> - Without the patch: 134176 ms
> - With the patch: 3890 ms
> I made sure that the new version returns the same value as before and 
> simplified the check for non-printable characters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15569) Make Bytes.toStringBinary faster

2016-03-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219324#comment-15219324
 ] 

stack commented on HBASE-15569:
---

Smile. +1

> Make Bytes.toStringBinary faster
> 
>
> Key: HBASE-15569
> URL: https://issues.apache.org/jira/browse/HBASE-15569
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Reporter: Junegunn Choi
>Assignee: Junegunn Choi
>Priority: Minor
> Attachments: HBASE-15569.patch
>
>
> Bytes.toStringBinary is quite expensive due to its use of {{String.format}}. 
> It seems to me that {{String.format}} is overkill for the purpose and I could 
> actually make the function up to 45-times faster by replacing the part with a 
> simpler hand-crafted code.
> This is probably a non-issue for HBase server as the function is not used in 
> performance-sensitive contexts but I figured it wouldn't hurt to make it 
> faster as it's widely used in builtin tools - Shell, {{HFilePrettyPrinter}} 
> with {{-p}} option, etc. - and it can be used in clients.
> h4. Background:
> We have [an HBase monitoring 
> tool|https://github.com/kakao/hbase-region-inspector] that periodically 
> collects the information of the regions and it calls {{Bytes.toStringBinary}} 
> during the process to make some information suitable for display. Profiling 
> revealed that a large portion of the processing time was spent in 
> {{String.format}}.
> h4. Micro-benchmark:
> {code}
> byte[] bytes = new byte[256];
> for (int i = 0; i < bytes.length; ++i) {
>   // Mixture of printable and non-printable characters.
>   // Maximal performance gain (45x) is observed when the array is solely
>   // composed of non-printable characters.
>   bytes[i] = (byte) i;
> }
> long started = System.nanoTime();
> for (int i = 0; i < 100; ++i) {
>   Bytes.toStringBinary(bytes);
> }
> System.out.println(TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - 
> started));
> {code}
> - Without the patch: 134176 ms
> - With the patch: 3890 ms
> I made sure that the new version returns the same value as before and 
> simplified the check for non-printable characters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15569) Make Bytes.toStringBinary faster

2016-03-30 Thread Junegunn Choi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junegunn Choi updated HBASE-15569:
--
Attachment: HBASE-15569.patch

> Make Bytes.toStringBinary faster
> 
>
> Key: HBASE-15569
> URL: https://issues.apache.org/jira/browse/HBASE-15569
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Reporter: Junegunn Choi
>Assignee: Junegunn Choi
>Priority: Minor
> Attachments: HBASE-15569.patch
>
>
> Bytes.toStringBinary is quite expensive due to its use of {{String.format}}. 
> It seems to me that {{String.format}} is overkill for the purpose and I could 
> actually make the function up to 45-times faster by replacing the part with a 
> simpler hand-crafted code.
> This is probably a non-issue for HBase server as the function is not used in 
> performance-sensitive contexts but I figured it wouldn't hurt to make it 
> faster as it's widely used in builtin tools - Shell, {{HFilePrettyPrinter}} 
> with {{-p}} option, etc. - and it can be used in clients.
> h4. Background:
> We have [an HBase monitoring 
> tool|https://github.com/kakao/hbase-region-inspector] that periodically 
> collects the information of the regions and it calls {{Bytes.toStringBinary}} 
> during the process to make some information suitable for display. Profiling 
> revealed that a large portion of the processing time was spent in 
> {{String.format}}.
> h4. Micro-benchmark:
> {code}
> byte[] bytes = new byte[256];
> for (int i = 0; i < bytes.length; ++i) {
>   // Mixture of printable and non-printable characters.
>   // Maximal performance gain (45x) is observed when the array is solely
>   // composed of non-printable characters.
>   bytes[i] = (byte) i;
> }
> long started = System.nanoTime();
> for (int i = 0; i < 100; ++i) {
>   Bytes.toStringBinary(bytes);
> }
> System.out.println(TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - 
> started));
> {code}
> - Without the patch: 134176 ms
> - With the patch: 3890 ms
> I made sure that the new version returns the same value as before and 
> simplified the check for non-printable characters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15569) Make Bytes.toStringBinary faster

2016-03-30 Thread Junegunn Choi (JIRA)
Junegunn Choi created HBASE-15569:
-

 Summary: Make Bytes.toStringBinary faster
 Key: HBASE-15569
 URL: https://issues.apache.org/jira/browse/HBASE-15569
 Project: HBase
  Issue Type: Improvement
  Components: Performance
Reporter: Junegunn Choi
Assignee: Junegunn Choi
Priority: Minor


Bytes.toStringBinary is quite expensive due to its use of {{String.format}}. It 
seems to me that {{String.format}} is overkill for the purpose and I could 
actually make the function up to 45-times faster by replacing the part with a 
simpler hand-crafted code.

This is probably a non-issue for HBase server as the function is not used in 
performance-sensitive contexts but I figured it wouldn't hurt to make it faster 
as it's widely used in builtin tools - Shell, {{HFilePrettyPrinter}} with 
{{-p}} option, etc. - and it can be used in clients.

h4. Background:

We have [an HBase monitoring 
tool|https://github.com/kakao/hbase-region-inspector] that periodically 
collects the information of the regions and it calls {{Bytes.toStringBinary}} 
during the process to make some information suitable for display. Profiling 
revealed that a large portion of the processing time was spent in 
{{String.format}}.

h4. Micro-benchmark:

{code}
byte[] bytes = new byte[256];
for (int i = 0; i < bytes.length; ++i) {
  // Mixture of printable and non-printable characters.
  // Maximal performance gain (45x) is observed when the array is solely
  // composed of non-printable characters.
  bytes[i] = (byte) i;
}
long started = System.nanoTime();
for (int i = 0; i < 100; ++i) {
  Bytes.toStringBinary(bytes);
}
System.out.println(TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - started));
{code}

- Without the patch: 134176 ms
- With the patch: 3890 ms

I made sure that the new version returns the same value as before and 
simplified the check for non-printable characters.
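
Not the attached patch, but a sketch of the kind of hand-crafted replacement for {{String.format}} the description refers to, assuming the printable set stays the same (ASCII 0x20-0x7E except backslash) and the escape format stays {{\xNN}} with uppercase hex:
{code}
import org.apache.hadoop.hbase.util.Bytes;

public class ToStringBinarySketch {
  private static final char[] HEX_CHARS = "0123456789ABCDEF".toCharArray();

  /** Printable ASCII (except backslash) is kept as-is; everything else becomes \xNN. */
  public static String toStringBinary(byte[] b, int off, int len) {
    StringBuilder result = new StringBuilder(len);
    for (int i = off; i < off + len; ++i) {
      int ch = b[i] & 0xFF;
      if (ch >= ' ' && ch <= '~' && ch != '\\') {
        result.append((char) ch);
      } else {
        // Two table lookups instead of a String.format call per byte.
        result.append("\\x").append(HEX_CHARS[ch >> 4]).append(HEX_CHARS[ch & 0x0F]);
      }
    }
    return result.toString();
  }

  public static void main(String[] args) {
    byte[] bytes = new byte[256];
    for (int i = 0; i < bytes.length; ++i) {
      bytes[i] = (byte) i;
    }
    // Should print true if the sketch matches the library's output format.
    System.out.println(Bytes.toStringBinary(bytes).equals(
        toStringBinary(bytes, 0, bytes.length)));
  }
}
{code}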



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15537) Add multi WAL support for AsyncFSWAL

2016-03-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219310#comment-15219310
 ] 

stack commented on HBASE-15537:
---

[~busbey] Your opinion appreciated here.

A provider inside a provider sounds right -- it is the proper separation -- but 
I am not sure our APIs as they are are amenable to cascading like this. Given it 
was Sean who did the first cut at multiwal, my guess is that he considered this 
and probably has better input here than I do.

Does this current patch work? If it does, it would be worth trying a compare. 
Async might help better define the case for multiwal.

> Add multi WAL support for AsyncFSWAL
> 
>
> Key: HBASE-15537
> URL: https://issues.apache.org/jira/browse/HBASE-15537
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15537.patch
>
>
> The multi WAL should not be bound with {{FSHLog}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15536) Make AsyncFSWAL as our default WAL

2016-03-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219295#comment-15219295
 ] 

stack commented on HBASE-15536:
---

Running this compare from HBASE-10156

{code}
$ for i in 1 3 5 10 25 50 100 200; do for j in 1; do perf stat ./bin/hbase 
--config /home/stack/conf_hbase 
org.apache.hadoop.hbase.wal.WALPerformanceEvaluation -threads $i   -iterations 
100 -keySize 50 -valueSize 100  &> "/tmp/nopatch2${i}.${j}.txt"; done; done
{code}

Here is a table of completion times (I mangled the recording of 200 threads w/ no patch):

||Threads||Default||Async||Diff||
|1|837|228|3.7x|
|3|647|274|2.4x|
|5|609|310|2x|
|10|916|376|2.5x|
|25|1177|556|2.1x|
|50|1463|828|1.8x|
|100|1902|1382|1.4x|
|200|-|2445|-|

Comparing perf stat for ten threads, you can see the async is doing less work. 
Here is the default WAL provider stat output:
{code}
 Performance counter stats for './hbase/bin/hbase --config /home/stack/conf_hbase org.apache.hadoop.hbase.wal.WALPerformanceEvaluation -threads 10 -iterations 100 -keySize 50 -valueSize 100':

     3473402.908284 task-clock (msec)       #    3.791 CPUs utilized
         79,614,165 context-switches        #    0.023 M/sec
          4,927,049 cpu-migrations          #    0.001 M/sec
          1,390,882 page-faults             #    0.400 K/sec
  7,457,646,572,542 cycles                  #    2.147 GHz
                    stalled-cycles-frontend
                    stalled-cycles-backend
  2,088,450,796,192 instructions            #    0.28  insns per cycle
    340,979,920,761 branches                #   98.169 M/sec
      5,330,989,389 branch-misses           #    1.56% of all branches

      916.241768049 seconds time elapsed
{code}

Here is running with async enabled:
{code}
 Performance counter stats for './hbase/bin/hbase --config /home/stack/conf_hbase org.apache.hadoop.hbase.wal.WALPerformanceEvaluation -threads 10 -iterations 100 -keySize 50 -valueSize 100':

     2624320.161261 task-clock (msec)       #    6.964 CPUs utilized
         13,097,786 context-switches        #    0.005 M/sec
            202,869 cpu-migrations          #    0.077 K/sec
            968,708 page-faults             #    0.369 K/sec
  6,814,056,657,994 cycles                  #    2.597 GHz
                    stalled-cycles-frontend
                    stalled-cycles-backend
  1,577,447,250,139 instructions            #    0.23  insns per cycle
    244,194,003,573 branches                #   93.050 M/sec
      2,927,181,625 branch-misses           #    1.20% of all branches

      376.825234068 seconds time elapsed
{code}

80M context switches vs 13M, 7.5T cycles vs 6.8T. There is loads of work to be 
done in here -- our numbers for insns per cycle are pretty abysmal -- but there 
seems to be a clear benefit to running the async provider, though it doesn't look 
like much at the macro level with a YCSB load with 50 threads.
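
For anyone reproducing the comparison, a hedged sketch of how the provider gets selected, assuming the async provider is registered under the {{asyncfs}} key and the default FSHLog-backed one under {{filesystem}}, as in later 2.0 builds; setting the same key in hbase-site.xml before running WALPerformanceEvaluation has the same effect:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderConfig {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Switch the WAL provider; use "filesystem" to go back to the FSHLog default.
    conf.set("hbase.wal.provider", "asyncfs");
    System.out.println("hbase.wal.provider = " + conf.get("hbase.wal.provider"));
  }
}
{code}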

> Make AsyncFSWAL as our default WAL
> --
>
> Key: HBASE-15536
> URL: https://issues.apache.org/jira/browse/HBASE-15536
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>
> As it should be predicated on passing basic cluster ITBLL



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15324) Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy and trigger unexpected split

2016-03-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219292#comment-15219292
 ] 

Hudson commented on HBASE-15324:


FAILURE: Integrated in HBase-1.3 #629 (See 
[https://builds.apache.org/job/HBase-1.3/629/])
HBASE-15324 Jitter may cause desiredMaxFileSize overflow in (stack: rev 
f8d41f9a2f0e794e2de45debf97b8fa14a8c5d8e)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionSplitPolicy.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ConstantSizeRegionSplitPolicy.java


> Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy 
> and trigger unexpected split
> --
>
> Key: HBASE-15324
> URL: https://issues.apache.org/jira/browse/HBASE-15324
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.1.3
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15324.patch, HBASE-15324_v2.patch, 
> HBASE-15324_v3.patch, HBASE-15324_v3.patch
>
>
> We introduce jitter for region split decision in HBASE-13412, but the 
> following line in {{ConstantSizeRegionSplitPolicy}} may cause long value 
> overflow if MAX_FILESIZE is specified to Long.MAX_VALUE:
> {code}
> this.desiredMaxFileSize += (long)(desiredMaxFileSize * (RANDOM.nextFloat() - 
> 0.5D) * jitter);
> {code}
> In our case we specify MAX_FILESIZE as Long.MAX_VALUE to prevent the target 
> region from splitting.
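
A small self-contained demonstration of the overflow described above (the sampled random value and the jitter fraction are made up for the demo, and the guard shown is an illustration, not the committed fix):
{code}
public class JitterOverflowDemo {
  public static void main(String[] args) {
    long desiredMaxFileSize = Long.MAX_VALUE; // MAX_FILESIZE set to block splits
    double jitter = 0.25D;                    // jitter fraction (made-up value)
    float sampledNextFloat = 0.9f;            // a possible value of RANDOM.nextFloat()

    // The original arithmetic: a positive jitter term pushes the sum past
    // Long.MAX_VALUE, so it wraps around to a large negative value and every
    // store then looks bigger than the "max" size.
    long jitterValue = (long) (desiredMaxFileSize * (sampledNextFloat - 0.5D) * jitter);
    System.out.println("overflowed: " + (desiredMaxFileSize + jitterValue));

    // One possible guard: only apply the jitter when the addition cannot overflow.
    long guarded = (jitterValue > 0 && desiredMaxFileSize > Long.MAX_VALUE - jitterValue)
        ? Long.MAX_VALUE
        : desiredMaxFileSize + jitterValue;
    System.out.println("guarded: " + guarded);
  }
}
{code}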



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15568) Procedure V2 - Remove CreateTableHandler in HBase Apache 2.0 release

2016-03-30 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-15568:
---
Description: 
With the 'create table' and 'clone snapshot' operations moving to a Procedure-V2 
based implementation in the 2.0 release, there is no need to keep the 
handler-based implementation.

Note: this JIRA is for the Apache HBase 2.0 release only.

  was:With create table and clone snapshot moves to Procedure-V2 based 
implementation in 2.0 release.  There is no need to keep the handler-based 
implementation.


> Procedure V2 - Remove CreateTableHandler in HBase Apache 2.0 release
> 
>
> Key: HBASE-15568
> URL: https://issues.apache.org/jira/browse/HBASE-15568
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Affects Versions: 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
> Fix For: 2.0.0
>
>
> With the 'create table' and 'clone snapshot' operations moving to a Procedure-V2 
> based implementation in the 2.0 release, there is no need to keep the 
> handler-based implementation.
> Note: this JIRA is for the Apache HBase 2.0 release only.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15485) Filter.reset() should not be called between batches

2016-03-30 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219270#comment-15219270
 ] 

Phil Yang commented on HBASE-15485:
---

I think it will be BATCH_LIMIT_REACHED in your scenario rather than MORE_VALUES. 
It may cause a bug in resetFilter, and I'll fix it.
And should we change the state to MORE_VALUES in this case?

> Filter.reset() should not be called between batches
> ---
>
> Key: HBASE-15485
> URL: https://issues.apache.org/jira/browse/HBASE-15485
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-15485-v1.patch
>
>
> As discussed in HBASE-15325, now we will resetFilters if partial result not 
> formed, but we should not reset filters when batch limit reached



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-7912) HBase Backup/Restore Based on HBase Snapshot

2016-03-30 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219257#comment-15219257
 ] 

Enis Soztutar edited comment on HBASE-7912 at 3/31/16 3:18 AM:
---

Thanks [~vrodionov] for updating the design doc for layout and backup image 
formats. It helps understanding the patches better. 

We had discussed offline about this, and I thought the plan was to use the 
{{backupId}} as the leading dir name. Instead of 
{code}ROOT/default/test­1459375580152/backup_1459375618126/archive/data/default/test­1459375
580152/543f8c02c388dd931fb9bcd1c38e7372/f/a6ce4789f9b444d89bbc755254afd27d{code},
 I think we should use the same directory layout of the {{hbase.rootdir}} under 
the backup root dir: 

So for full backups it should look like this: 
{code}ROOT/backup_1459375618126/archive/data/default/test­1459375
580152/543f8c02c388dd931fb9bcd1c38e7372/f/a6ce4789f9b444d89bbc755254afd27d{code}.
 And for incremental backups the layout should still follow the same 
{{hbase.rootdir}} layout. So instead of: 
{code}ROOT/WALs/backup_1459378688723/10.22.11.177%2C58809%2C1459378626079.14593786
59124{code}
it should be: 
{code}ROOT/backup_1459378688723/WALs/10.22.11.177,58809,1459378626079/10.22.11.177%2C58809%2C1459378626079.14593786
59124{code}. Notice that there is an extra server-name directory level in the layout as well. 

This structure will allow the rest of the code base (like hfile links, etc) to 
work seamlessly and also will help with a large number of files under WALs. 
Also, all the tables in the table set for a backup will be hosted together, etc. 

You have {{not used}} and {{obsolete}} fields in the PB structures. Since this 
is new work, whatever is not used and not needed should be removed from the 
patches. 

The backup images contain this: {{required string root_dir = 3;}}, which I 
think we should remove. The problem with having the absolute path in the 
manifests is that, it will make the directory un-relocatable. The issue is that 
if the operator does rename, or otherwise changes NN info etc, then it will be 
silent data loss. I think we should make it so that every path in image / 
manifest is relative, and all ancestors are implicitly under the same remote 
backup location. 

This is from the doc: 
bq. There was concern that we can lose data in between incremental backup 
sessions and this why the tracking of already copied WAL files has been added, 
but it turned out that is not necessary to do this because we ALWAYS include 
ALL tables which have at least one backup session into final backup table list 
for incremental backup.
Without this, the issue with HBASE-15442 will still happen, no? With the 
dependency generation algorithm as in the doc, if I have: 
 - full backup t1 with backup_id = bid1
 - incremental backup t2 with backup_id = bid2
 - incremental backup t1 with backup_id = bid3

then bid3 will NOT depend on bid2, so it is data loss still, no? 

{{BackupImage}} itself just duplicates the information that we already have in 
the manifest files. Do we really need that structure at all? Can we instead 
keep a list of backup_ids and read the manifests at the time of restore? 



was (Author: enis):
Thanks [~vrodionov] for updating the design doc for layout and backup image 
formats. It helps understanding the patches better. 

We had discussed offline about this, and I thought the plan was to use the 
{{backupId}} as the leading dir name. Instead of 
{{ROOT/default/test­1459375580152/backup_1459375618126/archive/data/default/test­1459375
580152/543f8c02c388dd931fb9bcd1c38e7372/f/a6ce4789f9b444d89bbc755254afd27d}}, I 
think we should use the same directory layout of the {{hbase.rootdir}} under 
the backup root dir: 

So for full backups it should look like this: 
{{ROOT/backup_1459375618126/archive/data/default/test­1459375
580152/543f8c02c388dd931fb9bcd1c38e7372/f/a6ce4789f9b444d89bbc755254afd27d}}. 
And for incremental backups the layout should still follow the same 
{{hbase.rootdir}} layout. So instead of: 
{{ROOT/WALs/backup_1459378688723/10.22.11.177%2C58809%2C1459378626079.14593786
59124}}
it should be: 
{{ROOT/backup_1459378688723/WALs/10.22.11.177,58809,1459378626079/10.22.11.177%2C58809%2C1459378626079.14593786
59124}}. Notice that there is an extra  in the layout as well. 

This structure will allow the rest of the code base (like hfile links, etc) to 
work seamlessly and also will help with a large number of files under WALs. 
Also, all the table in the table set for a backup will be hosted together etc. 

You have {{not used}} and {{obsolite}} fields in the PB structures. Since this 
is new work, whatever is not used and not needed should be removed from the 
patches. 

The backup images contain this: {{required string root_dir = 3;}}, which I 
think we should remove. The problem with having the absolute path in the 
manifests is that, it will make the directory un-relocatable. The issue is 

[jira] [Commented] (HBASE-7912) HBase Backup/Restore Based on HBase Snapshot

2016-03-30 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219257#comment-15219257
 ] 

Enis Soztutar commented on HBASE-7912:
--

Thanks [~vrodionov] for updating the design doc for layout and backup image 
formats. It helps understanding the patches better. 

We had discussed offline about this, and I thought the plan was to use the 
{{backupId}} as the leading dir name. Instead of 
{{ROOT/default/test­1459375580152/backup_1459375618126/archive/data/default/test­1459375
580152/543f8c02c388dd931fb9bcd1c38e7372/f/a6ce4789f9b444d89bbc755254afd27d}}, I 
think we should use the same directory layout of the {{hbase.rootdir}} under 
the backup root dir: 

So for full backups it should look like this: 
{{ROOT/backup_1459375618126/archive/data/default/test­1459375
580152/543f8c02c388dd931fb9bcd1c38e7372/f/a6ce4789f9b444d89bbc755254afd27d}}. 
And for incremental backups the layout should still follow the same 
{{hbase.rootdir}} layout. So instead of: 
{{ROOT/WALs/backup_1459378688723/10.22.11.177%2C58809%2C1459378626079.14593786
59124}}
it should be: 
{{ROOT/backup_1459378688723/WALs/10.22.11.177,58809,1459378626079/10.22.11.177%2C58809%2C1459378626079.14593786
59124}}. Notice that there is an extra server-name directory level in the layout as well. 

This structure will allow the rest of the code base (like hfile links, etc) to 
work seamlessly and also will help with a large number of files under WALs. 
Also, all the tables in the table set for a backup will be hosted together, etc. 

You have {{not used}} and {{obsolete}} fields in the PB structures. Since this 
is new work, whatever is not used and not needed should be removed from the 
patches. 

The backup images contain this: {{required string root_dir = 3;}}, which I 
think we should remove. The problem with having the absolute path in the 
manifests is that it will make the directory un-relocatable. The issue is that 
if the operator does a rename, or otherwise changes NN info etc., then there will 
be silent data loss. I think we should make it so that every path in image / 
manifest is relative, and all ancestors are implicitly under the same remote 
backup location. 

This is from the doc: 
bq. There was concern that we can lose data in between incremental backup 
sessions and this why the tracking of already copied WAL files has been added, 
but it turned out that is not necessary to do this because we ALWAYS include 
ALL tables which have at least one backup session into final backup table list 
for incremental backup.
Without this, the issue with HBASE-15442 will still happen, no? With the 
dependency generation algorithm as in the doc, if I have: 
 - full backup t1 with backup_id = bid1
 - incremental backup t2 with backup_id = bid2
 - incremental backup t1 with backup_id = bid3

then bid3 will NOT depend on bid2, so it is data loss still, no? 

{{BackupImage}} itself just duplicates the information that we already have in 
the manifest files. Do we really need that structure at all? Can we instead 
keep a list of backup_ids and read the manifests at the time of restore? 


> HBase Backup/Restore Based on HBase Snapshot
> 
>
> Key: HBASE-7912
> URL: https://issues.apache.org/jira/browse/HBASE-7912
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Richard Ding
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: HBaseBackupRestore-Jira-7912-DesignDoc-v1.pdf, 
> HBaseBackupRestore-Jira-7912-DesignDoc-v2.pdf, 
> HBaseBackupRestore-Jira-7912-v4.pdf, HBaseBackupRestore-Jira-7912-v5 .pdf, 
> HBaseBackupRestore-Jira-7912-v6.pdf, HBaseBackupandRestore.pdf, 
> HBase_BackupRestore-Jira-7912-CLI-v1.pdf
>
>
> Finally, we completed the implementation of our backup/restore solution, and 
> would like to share with community through this jira. 
> We are leveraging existing hbase snapshot feature, and provide a general 
> solution to common users. Our full backup is using snapshot to capture 
> metadata locally and using exportsnapshot to move data to another cluster; 
> the incremental backup is using offline-WALplayer to backup HLogs; we also 
> leverage global distribution rolllog and flush to improve performance; other 
> added-on values such as convert, merge, progress report, and CLI commands. So 
> that a common user can backup hbase data without in-depth knowledge of hbase. 
>  Our solution also contains some usability features for enterprise users. 
> The detail design document and CLI command will be attached in this jira. We 
> plan to use 10~12 subtasks to share each of the following features, and 
> document the detail implement in the subtasks: 
> * *Full Backup* : provide local and remote back/restore for a list of tables
> * *offline-WALPlayer* to convert HLog to HFiles offline (for incremental 
> 

[jira] [Updated] (HBASE-15538) Implement secure async protobuf wal writer

2016-03-30 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-15538:
--
Component/s: wal

> Implement secure async protobuf wal writer
> --
>
> Key: HBASE-15538
> URL: https://issues.apache.org/jira/browse/HBASE-15538
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15538-v1.patch, HBASE-15538-v2.patch, 
> HBASE-15538.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15499) Add multiple data type support for increment

2016-03-30 Thread He Liangliang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Liangliang updated HBASE-15499:
--
Attachment: HBASE-15499-V5.diff

> Add multiple data type support for increment
> 
>
> Key: HBASE-15499
> URL: https://issues.apache.org/jira/browse/HBASE-15499
> Project: HBase
>  Issue Type: New Feature
>  Components: API
>Reporter: He Liangliang
>Assignee: He Liangliang
> Attachments: HBASE-15499-V2.diff, HBASE-15499-V3.diff, 
> HBASE-15499-V4.diff, HBASE-15499-V5.diff, HBASE-15499.diff
>
>
> Currently the increment assumes long with byte-wise serialization. It's 
> useful to  support flexible data type/serializations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15538) Implement secure async protobuf wal writer

2016-03-30 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-15538:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
Release Note: 
Add the following configuration to hbase-site.xml if you want to use the secure 
protobuf WAL writer together with AsyncFSWAL:
{code}
<property>
  <name>hbase.regionserver.hlog.async.writer.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureAsyncProtobufLogWriter</value>
</property>
{code}
  Status: Resolved  (was: Patch Available)

Pushed to master. Thanks [~stack] for reviewing.

> Implement secure async protobuf wal writer
> --
>
> Key: HBASE-15538
> URL: https://issues.apache.org/jira/browse/HBASE-15538
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15538-v1.patch, HBASE-15538-v2.patch, 
> HBASE-15538.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15411) Rewrite backup with Procedure V2

2016-03-30 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219215#comment-15219215
 ] 

Ted Yu commented on HBASE-15411:


One of the review comments was to drop the call to Admin#execProcedure().
This boils down to calling MasterProcedureManager#execProcedure().

I want to get confirmation on whether we can use MasterProcedureUtil for this 
purpose or create another helper class in the hbase-server module.

> Rewrite backup with Procedure V2
> 
>
> Key: HBASE-15411
> URL: https://issues.apache.org/jira/browse/HBASE-15411
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15411-v1.txt, 15411-v11.txt, 15411-v12.txt, 
> 15411-v13.txt, 15411-v14.txt, 15411-v15.txt, 15411-v16.txt, 15411-v18.txt, 
> 15411-v22.txt, 15411-v3.txt, 15411-v5.txt, 15411-v6.txt, 15411-v7.txt, 
> 15411-v9.txt, FullTableBackupProcedure.java
>
>
> Currently full / incremental backup is driven by BackupHandler (see call() 
> method for flow).
> This issue is to rewrite the flow using Procedure V2.
> States (enum) for full / incremental backup would be introduced in 
> Backup.proto which correspond to the steps performed in BackupHandler#call().
> executeFromState() would pace the backup based on the current state.
> serializeStateData() / deserializeStateData() would be used to persist state 
> into procedure WAL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Commented] (HBASE-15537) Add multi WAL support for AsyncFSWAL

2016-03-30 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219209#comment-15219209
 ] 

Duo Zhang commented on HBASE-15537:
---

Talked with [~carp84] offline; he had some concerns about the structure where we 
nest WALProviders inside another WALProvider.

Actually we have two separate pieces of logic here. One is how the WALs are 
grouped (one WAL for all, or WALs grouped by region), and the other is how to 
create a specific WAL (FSHLog, AsyncFSWAL, etc.).

In the current architecture, all of this logic lives in WALProvider, so the 
proper way to address this issue is to nest one WALProvider inside another. And 
I think it would be better to separate the two concerns into different classes, 
but that requires a big refactoring.
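
For illustration only, a toy sketch of keeping the two concerns apart, using 
simplified stand-in interfaces rather than the real {{WALProvider}} API:
{code}
import java.io.IOException;
import java.util.Arrays;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

interface Wal { void append(byte[] edit) throws IOException; }

/** Grouping logic only: how a single WAL gets created (an FSHLog-like or
 *  AsyncFSWAL-like writer) is delegated to the factory passed in, so the two
 *  concerns live in separate classes. */
class GroupingWalProvider {
  interface WalFactory { Wal create(String groupName) throws IOException; }

  private final ConcurrentMap<String, Wal> groups = new ConcurrentHashMap<>();
  private final WalFactory factory;
  private final int numGroups;

  GroupingWalProvider(WalFactory factory, int numGroups) {
    this.factory = factory;
    this.numGroups = numGroups;
  }

  Wal getWal(byte[] regionIdentifier) throws IOException {
    String group = "group-" + Math.floorMod(Arrays.hashCode(regionIdentifier), numGroups);
    Wal wal = groups.get(group);
    if (wal == null) {
      wal = factory.create(group);
      Wal raced = groups.putIfAbsent(group, wal);
      if (raced != null) {
        wal = raced; // a real implementation would close the writer that lost the race
      }
    }
    return wal;
  }
}
{code}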

What do you think? [~stack]

Thanks.

> Add multi WAL support for AsyncFSWAL
> 
>
> Key: HBASE-15537
> URL: https://issues.apache.org/jira/browse/HBASE-15537
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15537.patch
>
>
> The multi WAL should not be bound with {{FSHLog}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15324) Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy and trigger unexpected split

2016-03-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219195#comment-15219195
 ] 

Hudson commented on HBASE-15324:


SUCCESS: Integrated in HBase-1.4 #63 (See 
[https://builds.apache.org/job/HBase-1.4/63/])
HBASE-15324 Jitter may cause desiredMaxFileSize overflow in (stack: rev 
407e644607eba96132d9fa27857000ee9cb1dc20)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ConstantSizeRegionSplitPolicy.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionSplitPolicy.java


> Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy 
> and trigger unexpected split
> --
>
> Key: HBASE-15324
> URL: https://issues.apache.org/jira/browse/HBASE-15324
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.1.3
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15324.patch, HBASE-15324_v2.patch, 
> HBASE-15324_v3.patch, HBASE-15324_v3.patch
>
>
> We introduce jitter for region split decision in HBASE-13412, but the 
> following line in {{ConstantSizeRegionSplitPolicy}} may cause long value 
> overflow if MAX_FILESIZE is specified to Long.MAX_VALUE:
> {code}
> this.desiredMaxFileSize += (long)(desiredMaxFileSize * (RANDOM.nextFloat() - 
> 0.5D) * jitter);
> {code}
> In our case we specify MAX_FILESIZE to Long.MAX_VALUE to prevent target 
> region to split.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15535) HBase Reference Guide has an incorrect link for Trafodion

2016-03-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219188#comment-15219188
 ] 

Hadoop QA commented on HBASE-15535:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
1s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 33s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 22s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 26s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 126m 57s 
{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 170m 37s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12796185/HBASE-15535.1.patch |
| JIRA Issue | HBASE-15535 |
| Optional Tests |  asflicense  javac  javadoc  unit  xml  |
| uname | Linux proserpina.apache.org 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 9d56105 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1232/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/1232/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1232/testReport/ |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1232/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> HBase Reference Guide has an incorrect link for Trafodion
> -
>
> Key: HBASE-15535
> URL: https://issues.apache.org/jira/browse/HBASE-15535
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Atanu Mishra
>Priority: Blocker
> Attachments: HBASE-15535.1.patch
>
>
> Appendix F in the HBase Reference Guide available here 
> (https://hbase.apache.org/book.html#sql) has an incorrect link to the site 
> for Trafodion.
> The new Trafodion URL is: http://trafodion.incubator.apache.org/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15563) 'counter' may overflow in BoundedGroupingStrategy

2016-03-30 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219184#comment-15219184
 ] 

Duo Zhang commented on HBASE-15563:
---

Nice.

> 'counter' may overflow in BoundedGroupingStrategy
> -
>
> Key: HBASE-15563
> URL: https://issues.apache.org/jira/browse/HBASE-15563
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Reporter: Duo Zhang
>Priority: Minor
>  Labels: beginner
>
> {code}
> groupName = groupNames[counter.getAndIncrement() % groupNames.length];
> {code}
> Theoretically, counter can overflow and becomes negative then causes an 
> ArrayIndexOutOfBoundsException.
> But in practice, we need 2 billions different identifiers to make this 
> happen, and before the overflow we will run into OOM because of a huge 
> groupNameCache...
> So not sure if it is worth to fix



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-7912) HBase Backup/Restore Based on HBase Snapshot

2016-03-30 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-7912:
-
Attachment: HBaseBackupandRestore.pdf

Updated design doc:

Added description of backup directory layout, backup manifest and backup image.

> HBase Backup/Restore Based on HBase Snapshot
> 
>
> Key: HBASE-7912
> URL: https://issues.apache.org/jira/browse/HBASE-7912
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Richard Ding
>Assignee: Vladimir Rodionov
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: HBaseBackupRestore-Jira-7912-DesignDoc-v1.pdf, 
> HBaseBackupRestore-Jira-7912-DesignDoc-v2.pdf, 
> HBaseBackupRestore-Jira-7912-v4.pdf, HBaseBackupRestore-Jira-7912-v5 .pdf, 
> HBaseBackupRestore-Jira-7912-v6.pdf, HBaseBackupandRestore.pdf, 
> HBase_BackupRestore-Jira-7912-CLI-v1.pdf
>
>
> Finally, we completed the implementation of our backup/restore solution, and 
> would like to share with community through this jira. 
> We are leveraging existing hbase snapshot feature, and provide a general 
> solution to common users. Our full backup is using snapshot to capture 
> metadata locally and using exportsnapshot to move data to another cluster; 
> the incremental backup is using offline-WALplayer to backup HLogs; we also 
> leverage global distribution rolllog and flush to improve performance; other 
> added-on values such as convert, merge, progress report, and CLI commands. So 
> that a common user can backup hbase data without in-depth knowledge of hbase. 
>  Our solution also contains some usability features for enterprise users. 
> The detail design document and CLI command will be attached in this jira. We 
> plan to use 10~12 subtasks to share each of the following features, and 
> document the detail implement in the subtasks: 
> * *Full Backup* : provide local and remote back/restore for a list of tables
> * *offline-WALPlayer* to convert HLog to HFiles offline (for incremental 
> backup)
> * *distributed* Logroll and distributed flush 
> * Backup *Manifest* and history
> * *Incremental* backup: to build on top of full backup as daily/weekly backup 
> * *Convert*  incremental backup WAL files into hfiles
> * *Merge* several backup images into one(like merge weekly into monthly)
> * *add and remove* table to and from Backup image
> * *Cancel* a backup process
> * backup progress *status*
> * full backup based on *existing snapshot*
> *-*
> *Below is the original description, to keep here as the history for the 
> design and discussion back in 2013*
> There have been attempts in the past to come up with a viable HBase 
> backup/restore solution (e.g., HBASE-4618).  Recently, there are many 
> advancements and new features in HBase, for example, FileLink, Snapshot, and 
> Distributed Barrier Procedure. This is a proposal for a backup/restore 
> solution that utilizes these new features to achieve better performance and 
> consistency. 
>  
> A common practice of backup and restore in database is to first take full 
> baseline backup, and then periodically take incremental backup that capture 
> the changes since the full baseline backup. HBase cluster can store massive 
> amount data.  Combination of full backups with incremental backups has 
> tremendous benefit for HBase as well.  The following is a typical scenario 
> for full and incremental backup.
> # The user takes a full backup of a table or a set of tables in HBase. 
> # The user schedules periodical incremental backups to capture the changes 
> from the full backup, or from last incremental backup.
> # The user needs to restore table data to a past point of time.
> # The full backup is restored to the table(s) or to different table name(s).  
> Then the incremental backups that are up to the desired point in time are 
> applied on top of the full backup. 
> We would support the following key features and capabilities.
> * Full backup uses HBase snapshot to capture HFiles.
> * Use HBase WALs to capture incremental changes, but we use bulk load of 
> HFiles for fast incremental restore.
> * Support single table or a set of tables, and column family level backup and 
> restore.
> * Restore to different table names.
> * Support adding additional tables or CF to backup set without interruption 
> of incremental backup schedule.
> * Support rollup/combining of incremental backups into longer period and 
> bigger incremental backups.
> * Unified command line interface for all the above.
> The solution will support HBase backup to FileSystem, either on the same 
> cluster or across clusters.  It has the flexibility to support backup to 
> other devices and servers in the future.  



[jira] [Commented] (HBASE-14983) Create metrics for per block type hit/miss ratios

2016-03-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219130#comment-15219130
 ] 

Hudson commented on HBASE-14983:


FAILURE: Integrated in HBase-Trunk_matrix #816 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/816/])
HBASE-14983 Create metrics for per block type hit/miss ratios (eclark: rev 
a71ce6e7382e8af0c8a005897093a8ab1ac9a492)
* 
hbase-external-blockcache/src/main/java/org/apache/hadoop/hbase/io/hfile/MemcachedBlockCache.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCombinedBlockCache.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapper.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheStats.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceImpl.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheKey.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperStub.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSource.java


> Create metrics for per block type hit/miss ratios
> -
>
> Key: HBASE-14983
> URL: https://issues.apache.org/jira/browse/HBASE-14983
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-14983-branch-1.patch, HBASE-14983-v1.patch, 
> HBASE-14983-v10.patch, HBASE-14983-v2.patch, HBASE-14983-v3.patch, 
> HBASE-14983-v4.patch, HBASE-14983-v5.patch, HBASE-14983-v6.patch, 
> HBASE-14983-v7.patch, HBASE-14983-v8.patch, HBASE-14983-v9.patch, 
> HBASE-14983.patch, Screen Shot 2015-12-15 at 3.33.09 PM.png
>
>
> Missing a root index block is worse than missing a data block. We should know 
> the difference



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15566) Add timeouts on TestMobFlushSnapshotFromClient and TestRegionMergeTransactionOnCluster

2016-03-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219129#comment-15219129
 ] 

Hudson commented on HBASE-15566:


FAILURE: Integrated in HBase-Trunk_matrix #816 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/816/])
HBASE-15566 Add timeouts on TestMobFlushSnapshotFromClient and (stack: rev 
21301a8a956285179b59b309a31d1155c6673e25)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestMobFlushSnapshotFromClient.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestMasterFailoverWithProcedures.java


> Add timeouts on TestMobFlushSnapshotFromClient and 
> TestRegionMergeTransactionOnCluster
> --
>
> Key: HBASE-15566
> URL: https://issues.apache.org/jira/browse/HBASE-15566
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: timeouts.patch
>
>
> Looking at recent timeouts, these two tests are missing timeout or the 
> timeouts are not fit for their category.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15324) Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy and trigger unexpected split

2016-03-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219128#comment-15219128
 ] 

Hudson commented on HBASE-15324:


FAILURE: Integrated in HBase-Trunk_matrix #816 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/816/])
HBASE-15324 Jitter may cause desiredMaxFileSize overflow in (stack: rev 
9d56105eece2d34922ae1c230308193cd0e9b29f)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionSplitPolicy.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ConstantSizeRegionSplitPolicy.java


> Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy 
> and trigger unexpected split
> --
>
> Key: HBASE-15324
> URL: https://issues.apache.org/jira/browse/HBASE-15324
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.1.3
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15324.patch, HBASE-15324_v2.patch, 
> HBASE-15324_v3.patch, HBASE-15324_v3.patch
>
>
> We introduce jitter for region split decision in HBASE-13412, but the 
> following line in {{ConstantSizeRegionSplitPolicy}} may cause long value 
> overflow if MAX_FILESIZE is specified to Long.MAX_VALUE:
> {code}
> this.desiredMaxFileSize += (long)(desiredMaxFileSize * (RANDOM.nextFloat() - 
> 0.5D) * jitter);
> {code}
> In our case we specify MAX_FILESIZE to Long.MAX_VALUE to prevent target 
> region to split.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15559) BaseMasterAndRegionObserver doesn't implement all the methods

2016-03-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219127#comment-15219127
 ] 

Hudson commented on HBASE-15559:


FAILURE: Integrated in HBase-Trunk_matrix #816 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/816/])
HBASE-15559 Fix  BaseMasterAndRegionObserver doesn't implement all the (eclark: 
rev b18de5ef4545bda4558b950c96cd4be79f9567af)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseMasterAndRegionObserver.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java


> BaseMasterAndRegionObserver doesn't implement all the methods
> -
>
> Key: HBASE-15559
> URL: https://issues.apache.org/jira/browse/HBASE-15559
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15559-v1.patch, HBASE-15559-v2.patch, 
> HBASE-15559.patch
>
>
> It's supposed to be a class that allows someone to derive from that class and 
> only need to implement the desired methods. However two methods aren't 
> implemented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15549) Undo HMaster carrying regions in master branch as default

2016-03-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219097#comment-15219097
 ] 

stack commented on HBASE-15549:
---

Balancer gets messed up if master is not carrying regions:

{code}
hbase(main):008:0* balancer force
NameError: undefined local variable or method `force' for #

hbase(main):009:0> balancer 'force'

ERROR: java.io.IOException: 6
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2285)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:137)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:112)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 6
at 
org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer$Cluster.getLocalityOfRegion(BaseLoadBalancer.java:869)
at 
org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer$LocalityCostFunction.cost(StochasticLoadBalancer.java:1186)
at 
org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer.computeCost(StochasticLoadBalancer.java:521)
at 
org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer.balanceCluster(StochasticLoadBalancer.java:335)
at 
org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer.balanceCluster(StochasticLoadBalancer.java:264)
at org.apache.hadoop.hbase.master.HMaster.balance(HMaster.java:1276)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.balance(MasterRpcServices.java:352)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:60465)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2240)
... 4 more
{code}

This is with my setting of tables for master to 'none' (which makes for some nice 
stack traces in the logs complaining about no such table 'none').

> Undo HMaster carrying regions in master branch as default
> -
>
> Key: HBASE-15549
> URL: https://issues.apache.org/jira/browse/HBASE-15549
> Project: HBase
>  Issue Type: Bug
>  Components: Balancer, master
>Reporter: stack
> Attachments: 15549.patch
>
>
> I thought we had an issue to do this but I can't find it so here is a new one.
> Currently, in master branch, HMaster is a RegionServer and carries meta 
> regions such as hbase:meta. Disable this facility by default until better 
> thought through (I think we should undo master ever carrying regions; FBers 
> are thinking we should go forward with notion that master carries meta/system 
> regions. TODO). I want to disable it because this facility is not finished 
> and meantime I want to test new stuff coming in on master branch w/o this new 
> feature getting in the way.
> Parking here a patch that has how to disable master carrying regions and a 
> probably redundant test that ensures all works when master is not carrying 
> meta regions (added because it WASN'T working for me -- but was pilot error).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15567) TestReplicationShell broken by recent replication changes

2016-03-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219061#comment-15219061
 ] 

Hadoop QA commented on HBASE-15567:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 3s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 3s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
39s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 57s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 7s 
{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
8s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 39s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12796179/HBASE-15567.patch |
| JIRA Issue | HBASE-15567 |
| Optional Tests |  asflicense  javac  javadoc  unit  rubocop  ruby_lint  |
| uname | Linux proserpina.apache.org 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 9d56105 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1231/testReport/ |
| modules | C: hbase-shell U: hbase-shell |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1231/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> TestReplicationShell broken by recent replication changes
> -
>
> Key: HBASE-15567
> URL: https://issues.apache.org/jira/browse/HBASE-15567
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, shell
>Affects Versions: 2.0.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Minor
> Attachments: HBASE-15567.patch
>
>
> Recent changes to the Ruby shell's add_peer method in HBASE-11393 have broken 
> TestReplicationShell, which went unnoticed because it's currently Ignored as 
> flaky. This test is useful when developing extensions to the replication 
> shell commands, and should be kept working (and hopefully re-enabled in the 
> near future.) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14983) Create metrics for per block type hit/miss ratios

2016-03-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219032#comment-15219032
 ] 

Hadoop QA commented on HBASE-14983:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 35s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
24s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 30s 
{color} | {color:green} branch-1 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
35s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
49s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
46s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 24s 
{color} | {color:green} branch-1 passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 5m 28s {color} 
| {color:red} hbase-server-jdk1.8.0 with JDK v1.8.0 generated 6 new + 6 
unchanged - 0 fixed = 12 total (was 6) {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 5m 28s {color} 
| {color:red} hbase-server-jdk1.8.0 with JDK v1.8.0 generated 6 new + 6 
unchanged - 0 fixed = 12 total (was 6) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 7m 25s {color} 
| {color:red} hbase-server-jdk1.7.0_79 with JDK v1.7.0_79 generated 6 new + 6 
unchanged - 0 fixed = 12 total (was 6) {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 7m 25s {color} 
| {color:red} hbase-server-jdk1.7.0_79 with JDK v1.7.0_79 generated 6 new + 6 
unchanged - 0 fixed = 12 total (was 6) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
31s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
12m 50s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 
33s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 3m 27s 
{color} | {color:red} hbase-hadoop2-compat-jdk1.8.0 with JDK v1.8.0 generated 1 
new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 3m 27s 
{color} | {color:red} hbase-hadoop2-compat-jdk1.8.0 with JDK v1.8.0 generated 1 
new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
| 

[jira] [Commented] (HBASE-15324) Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy and trigger unexpected split

2016-03-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219027#comment-15219027
 ] 

Hudson commented on HBASE-15324:


SUCCESS: Integrated in HBase-1.3-IT #590 (See 
[https://builds.apache.org/job/HBase-1.3-IT/590/])
HBASE-15324 Jitter may cause desiredMaxFileSize overflow in (stack: rev 
407e644607eba96132d9fa27857000ee9cb1dc20)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionSplitPolicy.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ConstantSizeRegionSplitPolicy.java


> Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy 
> and trigger unexpected split
> --
>
> Key: HBASE-15324
> URL: https://issues.apache.org/jira/browse/HBASE-15324
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.1.3
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15324.patch, HBASE-15324_v2.patch, 
> HBASE-15324_v3.patch, HBASE-15324_v3.patch
>
>
> We introduce jitter for region split decision in HBASE-13412, but the 
> following line in {{ConstantSizeRegionSplitPolicy}} may cause long value 
> overflow if MAX_FILESIZE is specified to Long.MAX_VALUE:
> {code}
> this.desiredMaxFileSize += (long)(desiredMaxFileSize * (RANDOM.nextFloat() - 
> 0.5D) * jitter);
> {code}
> In our case we specify MAX_FILESIZE to Long.MAX_VALUE to prevent target 
> region to split.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14983) Create metrics for per block type hit/miss ratios

2016-03-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219028#comment-15219028
 ] 

Hudson commented on HBASE-14983:


SUCCESS: Integrated in HBase-1.3-IT #590 (See 
[https://builds.apache.org/job/HBase-1.3-IT/590/])
HBASE-14983 Create metrics for per block type hit/miss ratios (eclark: rev 
75d46e46975ece130f621a8c7baf246ab58d824f)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheKey.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheStats.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCombinedBlockCache.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
* 
hbase-external-blockcache/src/main/java/org/apache/hadoop/hbase/io/hfile/MemcachedBlockCache.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperStub.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapper.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceImpl.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSource.java


> Create metrics for per block type hit/miss ratios
> -
>
> Key: HBASE-14983
> URL: https://issues.apache.org/jira/browse/HBASE-14983
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-14983-branch-1.patch, HBASE-14983-v1.patch, 
> HBASE-14983-v10.patch, HBASE-14983-v2.patch, HBASE-14983-v3.patch, 
> HBASE-14983-v4.patch, HBASE-14983-v5.patch, HBASE-14983-v6.patch, 
> HBASE-14983-v7.patch, HBASE-14983-v8.patch, HBASE-14983-v9.patch, 
> HBASE-14983.patch, Screen Shot 2015-12-15 at 3.33.09 PM.png
>
>
> Missing a root index block is worse than missing a data block. We should know 
> the difference



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15529) Override needBalance in StochasticLoadBalancer

2016-03-30 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219015#comment-15219015
 ] 

Ted Yu commented on HBASE-15529:


{code}
124   private float minCostNeedBalance = 0.05f;
{code}
How is the default of 0.05 determined?

Did you put a tarball patched with this change on a real cluster?
What do you observe?

Thanks

> Override needBalance in StochasticLoadBalancer
> --
>
> Key: HBASE-15529
> URL: https://issues.apache.org/jira/browse/HBASE-15529
> Project: HBase
>  Issue Type: Improvement
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Attachments: HBASE-15529-v1.patch, HBASE-15529.patch
>
>
> StochasticLoadBalancer includes cost functions to compute the cost of region 
> rount, r/w qps, table load, region locality, memstore size, and storefile 
> size. Every cost function returns a number between 0 and 1 inclusive and the 
> computed costs are scaled by their respective multipliers. The bigger 
> multiplier means that the respective cost function have the bigger weight. 
> But needBalance decide whether to balance only by region count and doesn't 
> consider r/w qps, locality even you config these cost function with bigger 
> multiplier. StochasticLoadBalancer should override needBalance and decide 
> whether to balance by it's configs of cost functions.
> Add one new config hbase.master.balancer.stochastic.minCostNeedBalance, 
> cluster need balance when (total cost / sum multiplier) > minCostNeedBalance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15535) HBase Reference Guide has an incorrect link for Trafodion

2016-03-30 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HBASE-15535:
-
Release Note: HBASE-15535 Correct link to Trafodion
  Status: Patch Available  (was: Open)

> HBase Reference Guide has an incorrect link for Trafodion
> -
>
> Key: HBASE-15535
> URL: https://issues.apache.org/jira/browse/HBASE-15535
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Atanu Mishra
>Priority: Blocker
> Attachments: HBASE-15535.1.patch
>
>
> Appendix F in the HBase Reference Guide available here 
> (https://hbase.apache.org/book.html#sql) has an incorrect link to the site 
> for Trafodion.
> The new Trafodion URL is: http://trafodion.incubator.apache.org/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15535) HBase Reference Guide has an incorrect link for Trafodion

2016-03-30 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HBASE-15535:
-
Attachment: HBASE-15535.1.patch

> HBase Reference Guide has an incorrect link for Trafodion
> -
>
> Key: HBASE-15535
> URL: https://issues.apache.org/jira/browse/HBASE-15535
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Atanu Mishra
>Priority: Blocker
> Attachments: HBASE-15535.1.patch
>
>
> Appendix F in the HBase Reference Guide available here 
> (https://hbase.apache.org/book.html#sql) has an incorrect link to the site 
> for Trafodion.
> The new Trafodion URL is: http://trafodion.incubator.apache.org/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15327) Canary will always invoke admin.balancer() in each sniffing period when writeSniffing is enabled

2016-03-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219010#comment-15219010
 ] 

Hudson commented on HBASE-15327:


FAILURE: Integrated in HBase-1.4 #62 (See 
[https://builds.apache.org/job/HBase-1.4/62/])
HBASE-15327 Canary will always invoke admin.balancer() in each sniffing (tedyu: 
rev e339bec3f15707fc4a2d464befeac485c08ad21d)
* hbase-server/src/main/java/org/apache/hadoop/hbase/tool/Canary.java


> Canary will always invoke admin.balancer() in each sniffing period when 
> writeSniffing is enabled
> 
>
> Key: HBASE-15327
> URL: https://issues.apache.org/jira/browse/HBASE-15327
> Project: HBase
>  Issue Type: Bug
>  Components: canary
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15327-branch-1-v1.patch, HBASE-15327-trunk.patch, 
> HBASE-15327-trunk.patch, HBASE-15327-v1.patch
>
>
> When Canary#writeSniffing is enabled, Canary#checkWriteTableDistribution will 
> make sure the regions of write table distributed on all region servers as:
> {code}
>   int numberOfServers = admin.getClusterStatus().getServers().size();
>   ..
>   int numberOfCoveredServers = serverSet.size();
>   if (numberOfCoveredServers < numberOfServers) {
> admin.balancer();
>   }
> {code}
> The master will also work as a regionserver, so that ClusterStatus#getServers 
> will contain the master. On the other hand, write table of Canary will not be 
> assigned to master, making numberOfCoveredServers always smaller than 
> numberOfServers and admin.balancer always be invoked in each sniffing period. 
> This may cause frequent region moves. A simple fix is excluding master from 
> numberOfServers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14983) Create metrics for per block type hit/miss ratios

2016-03-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219011#comment-15219011
 ] 

Hudson commented on HBASE-14983:


FAILURE: Integrated in HBase-1.4 #62 (See 
[https://builds.apache.org/job/HBase-1.4/62/])
HBASE-14983 Create metrics for per block type hit/miss ratios (eclark: rev 
75d46e46975ece130f621a8c7baf246ab58d824f)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperStub.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSource.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapper.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java
* 
hbase-external-blockcache/src/main/java/org/apache/hadoop/hbase/io/hfile/MemcachedBlockCache.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheStats.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheKey.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCombinedBlockCache.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceImpl.java


> Create metrics for per block type hit/miss ratios
> -
>
> Key: HBASE-14983
> URL: https://issues.apache.org/jira/browse/HBASE-14983
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-14983-branch-1.patch, HBASE-14983-v1.patch, 
> HBASE-14983-v10.patch, HBASE-14983-v2.patch, HBASE-14983-v3.patch, 
> HBASE-14983-v4.patch, HBASE-14983-v5.patch, HBASE-14983-v6.patch, 
> HBASE-14983-v7.patch, HBASE-14983-v8.patch, HBASE-14983-v9.patch, 
> HBASE-14983.patch, Screen Shot 2015-12-15 at 3.33.09 PM.png
>
>
> Missing a root index block is worse than missing a data block. We should know 
> the difference



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15559) BaseMasterAndRegionObserver doesn't implement all the methods

2016-03-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219009#comment-15219009
 ] 

Hudson commented on HBASE-15559:


FAILURE: Integrated in HBase-1.4 #62 (See 
[https://builds.apache.org/job/HBase-1.4/62/])
HBASE-15559 Fix  BaseMasterAndRegionObserver doesn't implement all the (eclark: 
rev fee0212da02ee8016f81983619733401b7097336)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseMasterAndRegionObserver.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java


> BaseMasterAndRegionObserver doesn't implement all the methods
> -
>
> Key: HBASE-15559
> URL: https://issues.apache.org/jira/browse/HBASE-15559
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15559-v1.patch, HBASE-15559-v2.patch, 
> HBASE-15559.patch
>
>
> It's supposed to be a class that allows someone to derive from that class and 
> only need to implement the desired methods. However two methods aren't 
> implemented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15559) BaseMasterAndRegionObserver doesn't implement all the methods

2016-03-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15219007#comment-15219007
 ] 

Hudson commented on HBASE-15559:


FAILURE: Integrated in HBase-1.3 #628 (See 
[https://builds.apache.org/job/HBase-1.3/628/])
HBASE-15559 Fix  BaseMasterAndRegionObserver doesn't implement all the (eclark: 
rev b310b02232d64ab07bd44639fe305e7ed035136d)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseMasterAndRegionObserver.java


> BaseMasterAndRegionObserver doesn't implement all the methods
> -
>
> Key: HBASE-15559
> URL: https://issues.apache.org/jira/browse/HBASE-15559
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15559-v1.patch, HBASE-15559-v2.patch, 
> HBASE-15559.patch
>
>
> It's supposed to be a class that allows someone to derive from that class and 
> only need to implement the desired methods. However two methods aren't 
> implemented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15567) TestReplicationShell broken by recent replication changes

2016-03-30 Thread Geoffrey Jacoby (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated HBASE-15567:

Attachment: HBASE-15567.patch

> TestReplicationShell broken by recent replication changes
> -
>
> Key: HBASE-15567
> URL: https://issues.apache.org/jira/browse/HBASE-15567
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, shell
>Affects Versions: 2.0.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Minor
> Attachments: HBASE-15567.patch
>
>
> Recent changes to the Ruby shell's add_peer method in HBASE-11393 have broken 
> TestReplicationShell, which went unnoticed because it's currently Ignored as 
> flaky. This test is useful when developing extensions to the replication 
> shell commands, and should be kept working (and hopefully re-enabled in the 
> near future.) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15567) TestReplicationShell broken by recent replication changes

2016-03-30 Thread Geoffrey Jacoby (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated HBASE-15567:

Status: Patch Available  (was: Open)

This patch fixes several broken tests in replication_admin_test created by 
HBASE-11393's refactoring of the Ruby API for adding a peer, and also removes 
an unused HBase table created in the test that may have been a source of this 
test's prior flakiness. 

I held off on re-enabling TestReplicationShell, but verified that the test 
passes consistently locally. 

[~chenheng], [~enis], FYI. 

> TestReplicationShell broken by recent replication changes
> -
>
> Key: HBASE-15567
> URL: https://issues.apache.org/jira/browse/HBASE-15567
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, shell
>Affects Versions: 2.0.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Minor
> Attachments: HBASE-15567.patch
>
>
> Recent changes to the Ruby shell's add_peer method in HBASE-11393 have broken 
> TestReplicationShell, which went unnoticed because it's currently Ignored as 
> flaky. This test is useful when developing extensions to the replication 
> shell commands, and should be kept working (and hopefully re-enabled in the 
> near future.) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15568) Procedure V2 - Remove CreateTableHandler in HBase Apache 2.0 release

2016-03-30 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-15568:
---
Summary: Procedure V2 - Remove CreateTableHandler in HBase Apache 2.0 
release  (was: Procedure V2 - Remove CreateTableHandler in HBASE Apache 2.0)

> Procedure V2 - Remove CreateTableHandler in HBase Apache 2.0 release
> 
>
> Key: HBASE-15568
> URL: https://issues.apache.org/jira/browse/HBASE-15568
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Affects Versions: 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
> Fix For: 2.0.0
>
>
> With create table and clone snapshot moves to Procedure-V2 based 
> implementation in 2.0 release.  There is no need to keep the handler-based 
> implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15333) Enhance the filter to handle short, integer, long, float and double

2016-03-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218990#comment-15218990
 ] 

Hadoop QA commented on HBASE-15333:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 27s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 51s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 0m 
47s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} scalac {color} | {color:green} 2m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} scalac {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
48m 0s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 5s 
{color} | {color:green} hbase-spark in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
12s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 59s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12796160/HBASE-15333-5.patch |
| JIRA Issue | HBASE-15333 |
| Optional Tests |  asflicense  scalac  scaladoc  unit  compile  |
| uname | Linux priapus.apache.org 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT 
Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 9d56105 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1230/testReport/ |
| modules | C: hbase-spark U: hbase-spark |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1230/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Enhance the filter to handle short, integer, long, float and double
> ---
>
> Key: HBASE-15333
> URL: https://issues.apache.org/jira/browse/HBASE-15333
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zhan Zhang
>Assignee: Zhan Zhang
> Attachments: HBASE-15333-1.patch, HBASE-15333-2.patch, 
> HBASE-15333-3.patch, HBASE-15333-4.patch, HBASE-15333-5.patch
>
>
> Currently, the range filter is based on the order of bytes. But for Java 
> primitive types, such as short, int, long, double, float, etc., their numeric 
> order is not consistent with their byte order, so extra manipulation has to be 
> in place to handle them correctly.
> For example, for the integer range (-100, 100) and the filter <= 1, the current 
> filter will return only 0 and 1, while the right return value should be (-100, 1].
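
For readers skimming the thread, a minimal, self-contained sketch of the ordering 
problem described above (illustrative only; it is not taken from the attached 
patches, and the class and method names are made up):

{code}
import java.nio.ByteBuffer;

// Why raw big-endian int bytes do not sort numerically, and the common
// order-preserving trick of flipping the sign bit before encoding.
public class SignedOrderDemo {
  static byte[] raw(int v) {
    return ByteBuffer.allocate(4).putInt(v).array();
  }

  // Same encoding, but with the sign bit flipped so negative values sort first.
  static byte[] orderPreserving(int v) {
    return ByteBuffer.allocate(4).putInt(v ^ Integer.MIN_VALUE).array();
  }

  // Unsigned lexicographic comparison, as a purely byte-based filter would do.
  static int compareBytes(byte[] a, byte[] b) {
    for (int i = 0; i < a.length; i++) {
      int d = (a[i] & 0xff) - (b[i] & 0xff);
      if (d != 0) return d;
    }
    return 0;
  }

  public static void main(String[] args) {
    // Raw encoding: -100 compares GREATER than 1, so a byte-level "<= 1" filter drops it.
    System.out.println(compareBytes(raw(-100), raw(1)) > 0); // true
    // Sign-bit-flipped encoding: byte order matches numeric order, so (-100, 1] works.
    System.out.println(compareBytes(orderPreserving(-100), orderPreserving(1)) < 0); // true
  }
}
{code}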



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15568) Procedure V2 - Remove CreateTableHandler in HBASE Apache 2.0

2016-03-30 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-15568:
---
Description: With create table and clone snapshot moved to Procedure-V2-based 
implementations in the 2.0 release, there is no need to keep the 
handler-based implementation.

> Procedure V2 - Remove CreateTableHandler in HBASE Apache 2.0
> 
>
> Key: HBASE-15568
> URL: https://issues.apache.org/jira/browse/HBASE-15568
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Affects Versions: 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
> Fix For: 2.0.0
>
>
> With create table and clone snapshot moved to Procedure-V2-based 
> implementations in the 2.0 release, there is no need to keep the handler-based 
> implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15568) Procedure V2 - Remove CreateTableHandler in HBASE Apache 2.0

2016-03-30 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-15568:
---
Fix Version/s: 2.0.0

> Procedure V2 - Remove CreateTableHandler in HBASE Apache 2.0
> 
>
> Key: HBASE-15568
> URL: https://issues.apache.org/jira/browse/HBASE-15568
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Affects Versions: 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
> Fix For: 2.0.0
>
>
> With create table and clone snapshot moved to Procedure-V2-based 
> implementations in the 2.0 release, there is no need to keep the handler-based 
> implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15568) Procedure V2 - Remove CreateTableHandler in HBASE Apache 2.0

2016-03-30 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-15568:
---
Affects Version/s: 2.0.0

> Procedure V2 - Remove CreateTableHandler in HBASE Apache 2.0
> 
>
> Key: HBASE-15568
> URL: https://issues.apache.org/jira/browse/HBASE-15568
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Affects Versions: 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
> Fix For: 2.0.0
>
>
> With create table and clone snapshot moved to Procedure-V2-based 
> implementations in the 2.0 release, there is no need to keep the handler-based 
> implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-15568) Procedure V2 - Remove CreateTableHandler in HBASE Apache 2.0

2016-03-30 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang reassigned HBASE-15568:
--

Assignee: Stephen Yuan Jiang

> Procedure V2 - Remove CreateTableHandler in HBASE Apache 2.0
> 
>
> Key: HBASE-15568
> URL: https://issues.apache.org/jira/browse/HBASE-15568
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15568) Procedure V2 - Remove CreateTableHandler in HBASE Apache 2.0

2016-03-30 Thread Stephen Yuan Jiang (JIRA)
Stephen Yuan Jiang created HBASE-15568:
--

 Summary: Procedure V2 - Remove CreateTableHandler in HBASE Apache 
2.0
 Key: HBASE-15568
 URL: https://issues.apache.org/jira/browse/HBASE-15568
 Project: HBase
  Issue Type: Sub-task
Reporter: Stephen Yuan Jiang






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15567) TestReplicationShell broken by recent replication changes

2016-03-30 Thread Geoffrey Jacoby (JIRA)
Geoffrey Jacoby created HBASE-15567:
---

 Summary: TestReplicationShell broken by recent replication changes
 Key: HBASE-15567
 URL: https://issues.apache.org/jira/browse/HBASE-15567
 Project: HBase
  Issue Type: Bug
  Components: Replication, shell
Affects Versions: 2.0.0
Reporter: Geoffrey Jacoby
Assignee: Geoffrey Jacoby
Priority: Minor


Recent changes to the Ruby shell's add_peer method in HBASE-11393 have broken 
TestReplicationShell, which went unnoticed because it's currently Ignored as 
flaky. This test is useful when developing extensions to the replication shell 
commands, and should be kept working (and hopefully re-enabled in the near 
future.) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-15564) HashTable job supposedly succeeded but manifest.tmp remain

2016-03-30 Thread Dave Latham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Latham resolved HBASE-15564.
-
Resolution: Invalid

> HashTable job supposedly succeeded but manifest.tmp remain
> --
>
> Key: HBASE-15564
> URL: https://issues.apache.org/jira/browse/HBASE-15564
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, Replication
>Affects Versions: 1.2.0
> Environment: Ubuntu 12.04
>Reporter: My Ho
>
> I'm using org.apache.hadoop.hbase.mapreduce.HashTable to create hashes for 
> use in SyncTable.  Occasionally, the job page in jobhistory will say the job 
> succeeded, but in my filesystem, I see "manifest.tmp" instead of the expected 
> "manifest".  According to the code[1], the job must have failed, but I don't 
> see failure anywhere.  
> [1]https://github.com/apache/hbase/blob/ad3feaa44800f10d102255a240c38ccf23a82d49/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HashTable.java#L739-L741



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15564) HashTable job supposedly succeeded but manifest.tmp remain

2016-03-30 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218956#comment-15218956
 ] 

Dave Latham commented on HBASE-15564:
-

It's probably best to start on the mailing list with problems you're having, 
and if we discover an actual bug or improvement to be done, then make a JIRA.

In this case, if the HashTable MapReduce job completed but the manifest was 
never renamed from manifest.tmp, it seems most likely that the HashTable main 
tool runner process died, was killed, or lost connection to the cluster and so 
was unable to perform that rename step.  Be sure to leave it running until 
after completion.  Since all it does after the job succeeds is rename the 
manifest.tmp file, you can do that yourself if you like.
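
If you do need to finish that last step by hand, a minimal sketch using the Hadoop 
FileSystem API (the output path below is a placeholder for whatever directory you 
passed to HashTable):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Finishes the rename that HashTable would have done after a successful job.
// The output directory here is a placeholder; substitute your own.
public class FinishManifestRename {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path outputDir = new Path("hdfs:///hashes/my-table");
    FileSystem fs = outputDir.getFileSystem(conf);
    Path tmp = new Path(outputDir, "manifest.tmp");
    Path manifest = new Path(outputDir, "manifest");
    // Only rename if the job really left a manifest.tmp behind.
    if (fs.exists(tmp) && !fs.exists(manifest)) {
      fs.rename(tmp, manifest);
    }
  }
}
{code}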

> HashTable job supposedly succeeded but manifest.tmp remain
> --
>
> Key: HBASE-15564
> URL: https://issues.apache.org/jira/browse/HBASE-15564
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, Replication
>Affects Versions: 1.2.0
> Environment: Ubuntu 12.04
>Reporter: My Ho
>
> I'm using org.apache.hadoop.hbase.mapreduce.HashTable to create hashes for 
> use in SyncTable.  Occasionally, the job page in jobhistory will say the job 
> succeeded, but in my filesystem, I see "manifest.tmp" instead of the expected 
> "manifest".  According to the code[1], the job must have failed, but I don't 
> see failure anywhere.  
> [1]https://github.com/apache/hbase/blob/ad3feaa44800f10d102255a240c38ccf23a82d49/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HashTable.java#L739-L741



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13639) SyncTable - rsync for HBase tables

2016-03-30 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218943#comment-15218943
 ] 

Dave Latham commented on HBASE-13639:
-

Sounds reasonable to me to add support for configuration to do all the steps in 
one, but you would need to be able to specify not only which clusters to access 
for the tables but also where to run the MR job, so presumably the YARN RMs.  If 
someone wants to work that out, +1 from me.

> SyncTable - rsync for HBase tables
> --
>
> Key: HBASE-13639
> URL: https://issues.apache.org/jira/browse/HBASE-13639
> Project: HBase
>  Issue Type: New Feature
>  Components: mapreduce, Operability, tooling
>Reporter: Dave Latham
>Assignee: Dave Latham
>  Labels: tooling
> Fix For: 2.0.0, 0.98.14, 1.2.0
>
> Attachments: HBASE-13639-0.98-addendum-hadoop-1.patch, 
> HBASE-13639-0.98.patch, HBASE-13639-v1.patch, HBASE-13639-v2.patch, 
> HBASE-13639-v3-0.98.patch, HBASE-13639-v3.patch, HBASE-13639.patch
>
>
> Given HBase tables in remote clusters with similar but not identical data, 
> efficiently update a target table such that the data in question is identical 
> to a source table.  Efficiency in this context means using far less network 
> traffic than would be required to ship all the data from one cluster to the 
> other.  Takes inspiration from rsync.
> Design doc: 
> https://docs.google.com/document/d/1-2c9kJEWNrXf5V4q_wBcoIXfdchN7Pxvxv1IO6PW0-U/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11393) Replication TableCfs should be a PB object rather than a string

2016-03-30 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218905#comment-15218905
 ] 

Enis Soztutar commented on HBASE-11393:
---

bq. the Ruby method that takes a string rather than a dictionary is still the 
"official" way to create a peer in the HBase shell in the HBase book on 
hbase.apache.org. If nothing else the doc should probably be updated.
Makes sense to also deprecate / remove those ruby methods in favor of the new 
way. +1 for the documentation. 

> Replication TableCfs should be a PB object rather than a string
> ---
>
> Key: HBASE-11393
> URL: https://issues.apache.org/jira/browse/HBASE-11393
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Heng Chen
> Fix For: 2.0.0
>
> Attachments: HBASE-11393.patch, HBASE-11393_v1.patch, 
> HBASE-11393_v10.patch, HBASE-11393_v11.patch, HBASE-11393_v12.patch, 
> HBASE-11393_v14.patch, HBASE-11393_v15.patch, HBASE-11393_v16.patch, 
> HBASE-11393_v2.patch, HBASE-11393_v3.patch, HBASE-11393_v4.patch, 
> HBASE-11393_v5.patch, HBASE-11393_v6.patch, HBASE-11393_v7.patch, 
> HBASE-11393_v8.patch, HBASE-11393_v9.patch
>
>
> We concatenate the list of tables and column families in format  
> "table1:cf1,cf2;table2:cfA,cfB" in zookeeper for table-cf to replication peer 
> mapping. 
> This results in ugly parsing code. We should do this a PB object. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15537) Add multi WAL support for AsyncFSWAL

2016-03-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218899#comment-15218899
 ] 

stack commented on HBASE-15537:
---

When would there ever be more than one provider? In RegionGroupingProvider, I 
see we had the FSHLog provider only, but now you iterate over providers... in 
places like shutdown.

Is there a test for multi using async?

Otherwise, looks great.

> Add multi WAL support for AsyncFSWAL
> 
>
> Key: HBASE-15537
> URL: https://issues.apache.org/jira/browse/HBASE-15537
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15537.patch
>
>
> The multi WAL should not be bound with {{FSHLog}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15564) HashTable job supposedly succeeded but manifest.tmp remain

2016-03-30 Thread My Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

My Ho updated HBASE-15564:
--
Description: 
I'm using org.apache.hadoop.hbase.mapreduce.HashTable to create hashes for use 
in SyncTable.  Occasionally, the job page in jobhistory will say the job 
succeeded, but in my filesystem, I see "manifest.tmp" instead of the expected 
"manifest".  According to the code[1], the job must have failed, but I don't 
see failure anywhere.  

[1]https://github.com/apache/hbase/blob/ad3feaa44800f10d102255a240c38ccf23a82d49/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HashTable.java#L739-L741

  was:
I'm using org.apache.hadoop.hbase.mapreduce.HashTable to create hashses for use 
in SyncTable.  Occasionally, the job page in jobhistory will say the job 
succeeded, but in my filesystem, I see "manifest.tmp" instead of the expected 
"manifest".  According to the code[1], the job must have failed, but I don't 
see failure anywhere.  

[1]https://github.com/apache/hbase/blob/ad3feaa44800f10d102255a240c38ccf23a82d49/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HashTable.java#L739-L741


> HashTable job supposedly succeeded but manifest.tmp remain
> --
>
> Key: HBASE-15564
> URL: https://issues.apache.org/jira/browse/HBASE-15564
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce, Replication
>Affects Versions: 1.2.0
> Environment: Ubuntu 12.04
>Reporter: My Ho
>
> I'm using org.apache.hadoop.hbase.mapreduce.HashTable to create hashes for 
> use in SyncTable.  Occasionally, the job page in jobhistory will say the job 
> succeeded, but in my filesystem, I see "manifest.tmp" instead of the expected 
> "manifest".  According to the code[1], the job must have failed, but I don't 
> see failure anywhere.  
> [1]https://github.com/apache/hbase/blob/ad3feaa44800f10d102255a240c38ccf23a82d49/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HashTable.java#L739-L741



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15333) Enhance the filter to handle short, integer, long, float and double

2016-03-30 Thread Zhan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhan Zhang updated HBASE-15333:
---
Attachment: HBASE-15333-5.patch

> Enhance the filter to handle short, integer, long, float and double
> ---
>
> Key: HBASE-15333
> URL: https://issues.apache.org/jira/browse/HBASE-15333
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zhan Zhang
>Assignee: Zhan Zhang
> Attachments: HBASE-15333-1.patch, HBASE-15333-2.patch, 
> HBASE-15333-3.patch, HBASE-15333-4.patch, HBASE-15333-5.patch
>
>
> Currently, the range filter is based on the order of bytes. But for Java 
> primitive types, such as short, int, long, double, float, etc., their numeric 
> order is not consistent with their byte order, so extra manipulation has to be 
> in place to handle them correctly.
> For example, for the integer range (-100, 100) and the filter <= 1, the current 
> filter will return only 0 and 1, while the right return value should be (-100, 1].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15513) hbase.hregion.memstore.chunkpool.maxsize is 0.0 by default

2016-03-30 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218861#comment-15218861
 ] 

Vladimir Rodionov commented on HBASE-15513:
---

According to this comment

https://issues.apache.org/jira/browse/HBASE-15180?focusedCommentId=15126122=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15126122

MSLAB with chunk pool on is the best performer even with G1GC. 

> hbase.hregion.memstore.chunkpool.maxsize is 0.0 by default
> --
>
> Key: HBASE-15513
> URL: https://issues.apache.org/jira/browse/HBASE-15513
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
>
> That results in excessive MemStoreLAB chunk allocations because we cannot 
> reuse them. Not sure why it has been disabled by default. Maybe the code 
> has not been tested well?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11393) Replication TableCfs should be a PB object rather than a string

2016-03-30 Thread Geoffrey Jacoby (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218837#comment-15218837
 ] 

Geoffrey Jacoby commented on HBASE-11393:
-

Thanks for this [~chenheng]; the tracker on the replication peer config is 
already proving useful. 

Has the decision been made to remove "add_peer peer_id, cluster_key" from the 
Ruby shell in addition to the equivalent methods on ReplicationAdmin, as this 
JIRA does? While the ReplicationAdmin methods are marked as Deprecated in 1.0, 
and thus seem fair game to remove in 2.0, the Ruby method that takes a string 
rather than a dictionary is still the "official" way to create a peer in the 
HBase shell in the HBase book on hbase.apache.org. If nothing else the doc 
should probably be updated. 

Its removal as part of this JIRA is also breaking TestReplicationShell 
(currently set as Ignore because it was apparently flaky in the past), which 
I've been using locally to verify some shell changes I'm working on for 
HBASE-15507. I can fix the test, but want to confirm that it's the test that 
needs to change. 

> Replication TableCfs should be a PB object rather than a string
> ---
>
> Key: HBASE-11393
> URL: https://issues.apache.org/jira/browse/HBASE-11393
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Heng Chen
> Fix For: 2.0.0
>
> Attachments: HBASE-11393.patch, HBASE-11393_v1.patch, 
> HBASE-11393_v10.patch, HBASE-11393_v11.patch, HBASE-11393_v12.patch, 
> HBASE-11393_v14.patch, HBASE-11393_v15.patch, HBASE-11393_v16.patch, 
> HBASE-11393_v2.patch, HBASE-11393_v3.patch, HBASE-11393_v4.patch, 
> HBASE-11393_v5.patch, HBASE-11393_v6.patch, HBASE-11393_v7.patch, 
> HBASE-11393_v8.patch, HBASE-11393_v9.patch
>
>
> We concatenate the list of tables and column families in format  
> "table1:cf1,cf2;table2:cfA,cfB" in zookeeper for table-cf to replication peer 
> mapping. 
> This results in ugly parsing code. We should do this a PB object. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15536) Make AsyncFSWAL as our default WAL

2016-03-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218822#comment-15218822
 ] 

stack commented on HBASE-15536:
---

I did a rough comparison where I ran a ycsb load against a single server instance 
running over HDFS. Numbers are for hbase 2.0 tip with asyncwal disabled/enabled:

{code}
18096 2016-03-30 13:07:21:600 4330 sec: 61688560 operations; 14370.85 current 
ops/sec; est completion in 7 hours 10 minutes [INSERT: Count=143767, 
Max=201727, Min=916, Avg=3465.51, 90=4615, 99=13639, 99.9=78207, 99.99=200703]
18097 2016-03-30 13:07:31:596 4340 sec: 61833035 operations; 14453.28 current 
ops/sec; est completion in 7 hours 10 minutes [INSERT: Count=144469, 
Max=267007, Min=923, Avg=3463.2, 90=4603, 99=6875, 99.9=152319, 99.99=264959]
18098 2016-03-30 13:07:41:596 4350 sec: 61995299 operations; 16226.4 current 
ops/sec; est completion in 7 hours 9 minutes [INSERT: Count=162271, Max=64063, 
Min=850, Avg=3076.64, 90=4535, 99=6427, 99.9=45983, 99.99=62719]
18099 2016-03-30 13:07:51:596 4360 sec: 62137855 operations; 14255.6 current 
ops/sec; est completion in 7 hours 9 minutes [INSERT: Count=142547, Max=226303, 
Min=880, Avg=3502.47, 90=4571, 99=7707, 99.9=192639, 99.99=225279]
18100 2016-03-30 13:08:01:596 4370 sec: 62290243 operations; 15238.8 current 
ops/sec; est completion in 7 hours 9 minutes [INSERT: Count=152393, Max=231423, 
Min=919, Avg=3275.69, 90=4523, 99=6831, 99.9=88447, 99.99=229503]
18101 2016-03-30 13:08:11:596 4380 sec: 62432949 operations; 14270.6 current 
ops/sec; est completion in 7 hours 9 minutes [INSERT: Count=142698, Max=205311, 
Min=910, Avg=3499.68, 90=4631, 99=17071, 99.9=146687, 99.99=202495]
18102 2016-03-30 13:08:21:596 4390 sec: 62574524 operations; 14157.5 current 
ops/sec; est completion in 7 hours 9 minutes [INSERT: Count=141581, Max=259071, 
Min=918, Avg=3526.68, 90=4443, 99=6627, 99.9=189055, 99.99=257151]
18103 2016-03-30 13:08:31:596 4400 sec: 62710962 operations; 13643.8 current 
ops/sec; est completion in 7 hours 8 minutes [INSERT: Count=136431, Max=220671, 
Min=877, Avg=3606.32, 90=4547, 99=26495, 99.9=194303, 99.99=217855]
18104 2016-03-30 13:08:41:596 4410 sec: 62859312 operations; 14835 current 
ops/sec; est completion in 7 hours 8 minutes [INSERT: Count=148350, Max=258303, 
Min=957, Avg=3413.23, 90=4543, 99=7787, 99.9=121663, 99.99=256255]
18105 2016-03-30 13:08:51:596 4420 sec: 63003208 operations; 14389.6 current 
ops/sec; est completion in 7 hours 8 minutes [INSERT: Count=143896, Max=226559, 
Min=957, Avg=3468.59, 90=4631, 99=7127, 99.9=126783, 99.99=225535]
18106 2016-03-30 13:09:01:596 4430 sec: 63147394 operations; 14418.6 current 
ops/sec; est completion in 7 hours 8 minutes [INSERT: Count=144193, Max=202751, 
Min=931, Avg=3465.31, 90=4443, 99=15271, 99.9=122879, 99.99=200959]
18107 2016-03-30 13:09:11:596 4440 sec: 63283125 operations; 13573.1 current 
ops/sec; est completion in 7 hours 8 minutes [INSERT: Count=135739, Max=527871, 
Min=919, Avg=3678.7, 90=4567, 99=9375, 99.9=159615, 99.99=525311]
{code}

{code}
2016-03-30 13:35:26:261 1130 sec: 18067105 operations; 16813.1 current ops/sec; 
est completion in 7 hours 8 minutes [INSERT: Count=168133, Max=52863, Min=437, 
Avg=2970.54, 90=5899, 99=10199, 99.9=35903, 99.99=50623]
2016-03-30 13:35:36:261 1140 sec: 18223814 operations; 15670.9 current ops/sec; 
est completion in 7 hours 8 minutes [INSERT: Count=156711, Max=175103, Min=426, 
Avg=3185.56, 90=5767, 99=19727, 99.9=80639, 99.99=173055]
2016-03-30 13:35:46:261 1150 sec: 18398146 operations; 17433.2 current ops/sec; 
est completion in 7 hours 8 minutes [INSERT: Count=174328, Max=54815, Min=452, 
Avg=2862.38, 90=5575, 99=8079, 99.9=43615, 99.99=53759]
2016-03-30 13:35:56:262 1160 sec: 18561746 operations; 16360 current ops/sec; 
est completion in 7 hours 8 minutes [INSERT: Count=163605, Max=95423, Min=434, 
Avg=3051.5, 90=5775, 99=19503, 99.9=47967, 99.99=94015]
2016-03-30 13:36:06:261 1170 sec: 18725215 operations; 16346.9 current ops/sec; 
est completion in 7 hours 7 minutes [INSERT: Count=163471, Max=83327, Min=443, 
Avg=3053.24, 90=5787, 99=14263, 99.9=47615, 99.99=80767]
2016-03-30 13:36:16:261 1180 sec: 18875810 operations; 15059.5 current ops/sec; 
est completion in 7 hours 7 minutes [INSERT: Count=150615, Max=452863, Min=434, 
Avg=3314.69, 90=5867, 99=13527, 99.9=83135, 99.99=451327]
2016-03-30 13:36:26:262 1190 sec: 19038377 operations; 16256.7 current ops/sec; 
est completion in 7 hours 7 minutes [INSERT: Count=162549, Max=155775, Min=409, 
Avg=3069.92, 90=5731, 99=13287, 99.9=79935, 99.99=153087]
2016-03-30 13:36:36:261 1200 sec: 19172700 operations; 13432.3 current ops/sec; 
est completion in 7 hours 8 minutes [INSERT: Count=134331, Max=452863, Min=395, 
Avg=3716.84, 90=5907, 99=19503, 99.9=224767, 99.99=447999]
2016-03-30 13:36:46:261 1210 sec: 19335308 operations; 16260.8 current ops/sec; 
est completion in 7 hours 7 minutes [INSERT: 
{code}

[jira] [Updated] (HBASE-15324) Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy and trigger unexpected split

2016-03-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15324:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.4.0
   1.3.0
   2.0.0
   Status: Resolved  (was: Patch Available)

Pushed to 1.3+

Thanks for the patch [~carp84]. On the timed-out tests: I've added timeouts to the 
two that were missing them or were misconfigured. They have failed in the past. 
Will try and work on them in another issue.

> Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy 
> and trigger unexpected split
> --
>
> Key: HBASE-15324
> URL: https://issues.apache.org/jira/browse/HBASE-15324
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.1.3
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15324.patch, HBASE-15324_v2.patch, 
> HBASE-15324_v3.patch, HBASE-15324_v3.patch
>
>
> We introduced jitter for the region split decision in HBASE-13412, but the 
> following line in {{ConstantSizeRegionSplitPolicy}} may cause a long value 
> overflow if MAX_FILESIZE is set to Long.MAX_VALUE:
> {code}
> this.desiredMaxFileSize += (long)(desiredMaxFileSize * (RANDOM.nextFloat() - 
> 0.5D) * jitter);
> {code}
> In our case we set MAX_FILESIZE to Long.MAX_VALUE to prevent the target 
> region from splitting.
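
For illustration only (this is not the committed patch): one overflow-safe way to 
apply the jitter is to compute the jitter delta separately and clamp before adding, 
e.g.:

{code}
import java.util.Random;

// Sketch of applying split-size jitter without wrapping around Long.MAX_VALUE.
public class JitterOverflowDemo {
  private static final Random RANDOM = new Random();

  static long applyJitter(long desiredMaxFileSize, float jitter) {
    long delta = (long) (desiredMaxFileSize * (RANDOM.nextFloat() - 0.5D) * jitter);
    // Adding a positive delta to a value near Long.MAX_VALUE overflows to a small
    // (even negative) size, which is what triggers the unexpected splits.
    if (delta > 0 && desiredMaxFileSize > Long.MAX_VALUE - delta) {
      return Long.MAX_VALUE;
    }
    return desiredMaxFileSize + delta;
  }

  public static void main(String[] args) {
    // With MAX_FILESIZE set to Long.MAX_VALUE the result stays positive
    // instead of wrapping negative.
    System.out.println(applyJitter(Long.MAX_VALUE, 0.25f) > 0); // true
  }
}
{code}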



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15411) Rewrite backup with Procedure V2

2016-03-30 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218782#comment-15218782
 ] 

Ted Yu commented on HBASE-15411:


All comments addressed except for more robust failure handling which would be 
done in phase 2.

> Rewrite backup with Procedure V2
> 
>
> Key: HBASE-15411
> URL: https://issues.apache.org/jira/browse/HBASE-15411
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15411-v1.txt, 15411-v11.txt, 15411-v12.txt, 
> 15411-v13.txt, 15411-v14.txt, 15411-v15.txt, 15411-v16.txt, 15411-v18.txt, 
> 15411-v22.txt, 15411-v3.txt, 15411-v5.txt, 15411-v6.txt, 15411-v7.txt, 
> 15411-v9.txt, FullTableBackupProcedure.java
>
>
> Currently full / incremental backup is driven by BackupHandler (see call() 
> method for flow).
> This issue is to rewrite the flow using Procedure V2.
> States (enum) for full / incremental backup would be introduced in 
> Backup.proto which correspond to the steps performed in BackupHandler#call().
> executeFromState() would pace the backup based on the current state.
> serializeStateData() / deserializeStateData() would be used to persist state 
> into procedure WAL.
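
For readers unfamiliar with the proc-v2 pattern, a stripped-down, framework-free 
illustration of the state-machine idea sketched above (the real implementation 
extends HBase's procedure2 classes; the state names and helpers below are invented 
for the example):

{code}
// Framework-free illustration of a state-machine-driven backup procedure.
// State names and steps are invented; they only mirror the shape of the design.
public class BackupStateMachineSketch {
  enum BackupState { PREPARE, SNAPSHOT_TABLES, EXPORT_SNAPSHOTS, COMPLETE }

  private BackupState state = BackupState.PREPARE;

  // Analogous to executeFromState(): do the work for the current state,
  // persist progress, then move to the next state.
  boolean executeOneStep() {
    switch (state) {
      case PREPARE:          /* e.g. record the backup request */        state = BackupState.SNAPSHOT_TABLES; break;
      case SNAPSHOT_TABLES:  /* e.g. snapshot each table in the set */   state = BackupState.EXPORT_SNAPSHOTS; break;
      case EXPORT_SNAPSHOTS: /* e.g. copy snapshot files to the target */ state = BackupState.COMPLETE; break;
      case COMPLETE:         return false; // nothing left to do
    }
    persistState(); // analogous to serializeStateData() into the procedure WAL
    return true;
  }

  private void persistState() {
    // In the real procedure this state would go to the procedure WAL so a
    // master restart can resume from the last persisted state.
    System.out.println("persisted state: " + state);
  }

  public static void main(String[] args) {
    BackupStateMachineSketch p = new BackupStateMachineSketch();
    while (p.executeOneStep()) { /* the framework would drive this loop */ }
  }
}
{code}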



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-15566) Add timeouts on TestMobFlushSnapshotFromClient and TestRegionMergeTransactionOnCluster

2016-03-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-15566.
---
   Resolution: Fixed
 Assignee: stack
Fix Version/s: 2.0.0

Pushed to master

> Add timeouts on TestMobFlushSnapshotFromClient and 
> TestRegionMergeTransactionOnCluster
> --
>
> Key: HBASE-15566
> URL: https://issues.apache.org/jira/browse/HBASE-15566
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: timeouts.patch
>
>
> Looking at recent timeouts, these two tests are missing timeouts or the 
> timeouts are not fit for their category.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15411) Rewrite backup with Procedure V2

2016-03-30 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218777#comment-15218777
 ] 

Enis Soztutar commented on HBASE-15411:
---

I'll take a look. Did you address all the review comments in RB? 

> Rewrite backup with Procedure V2
> 
>
> Key: HBASE-15411
> URL: https://issues.apache.org/jira/browse/HBASE-15411
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15411-v1.txt, 15411-v11.txt, 15411-v12.txt, 
> 15411-v13.txt, 15411-v14.txt, 15411-v15.txt, 15411-v16.txt, 15411-v18.txt, 
> 15411-v22.txt, 15411-v3.txt, 15411-v5.txt, 15411-v6.txt, 15411-v7.txt, 
> 15411-v9.txt, FullTableBackupProcedure.java
>
>
> Currently full / incremental backup is driven by BackupHandler (see call() 
> method for flow).
> This issue is to rewrite the flow using Procedure V2.
> States (enum) for full / incremental backup would be introduced in 
> Backup.proto which correspond to the steps performed in BackupHandler#call().
> executeFromState() would pace the backup based on the current state.
> serializeStateData() / deserializeStateData() would be used to persist state 
> into procedure WAL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15566) Add timeouts on TestMobFlushSnapshotFromClient and TestRegionMergeTransactionOnCluster

2016-03-30 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15566:
--
Attachment: timeouts.patch

Let me push this.

> Add timeouts on TestMobFlushSnapshotFromClient and 
> TestRegionMergeTransactionOnCluster
> --
>
> Key: HBASE-15566
> URL: https://issues.apache.org/jira/browse/HBASE-15566
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
> Attachments: timeouts.patch
>
>
> Looking at recent timeouts, these two tests are missing timeouts or the 
> timeouts are not fit for their category.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15566) Add timeouts on TestMobFlushSnapshotFromClient and TestRegionMergeTransactionOnCluster

2016-03-30 Thread stack (JIRA)
stack created HBASE-15566:
-

 Summary: Add timeouts on TestMobFlushSnapshotFromClient and 
TestRegionMergeTransactionOnCluster
 Key: HBASE-15566
 URL: https://issues.apache.org/jira/browse/HBASE-15566
 Project: HBase
  Issue Type: Bug
Reporter: stack


Looking at recent timeouts, these two tests are missing timeouts or the timeouts 
are not fit for their category.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14983) Create metrics for per block type hit/miss ratios

2016-03-30 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14983:
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.0)
   1.4.0
   Status: Resolved  (was: Patch Available)

> Create metrics for per block type hit/miss ratios
> -
>
> Key: HBASE-14983
> URL: https://issues.apache.org/jira/browse/HBASE-14983
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-14983-branch-1.patch, HBASE-14983-v1.patch, 
> HBASE-14983-v10.patch, HBASE-14983-v2.patch, HBASE-14983-v3.patch, 
> HBASE-14983-v4.patch, HBASE-14983-v5.patch, HBASE-14983-v6.patch, 
> HBASE-14983-v7.patch, HBASE-14983-v8.patch, HBASE-14983-v9.patch, 
> HBASE-14983.patch, Screen Shot 2015-12-15 at 3.33.09 PM.png
>
>
> Missing a root index block is worse than missing a data block. We should know 
> the difference



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15411) Rewrite backup with Procedure V2

2016-03-30 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218716#comment-15218716
 ] 

Ted Yu commented on HBASE-15411:


Planning to check this into the HBASE-7921 branch tomorrow morning if there are no 
further review comments.

> Rewrite backup with Procedure V2
> 
>
> Key: HBASE-15411
> URL: https://issues.apache.org/jira/browse/HBASE-15411
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15411-v1.txt, 15411-v11.txt, 15411-v12.txt, 
> 15411-v13.txt, 15411-v14.txt, 15411-v15.txt, 15411-v16.txt, 15411-v18.txt, 
> 15411-v22.txt, 15411-v3.txt, 15411-v5.txt, 15411-v6.txt, 15411-v7.txt, 
> 15411-v9.txt, FullTableBackupProcedure.java
>
>
> Currently full / incremental backup is driven by BackupHandler (see call() 
> method for flow).
> This issue is to rewrite the flow using Procedure V2.
> States (enum) for full / incremental backup would be introduced in 
> Backup.proto which correspond to the steps performed in BackupHandler#call().
> executeFromState() would pace the backup based on the current state.
> serializeStateData() / deserializeStateData() would be used to persist state 
> into procedure WAL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15265) Implement an asynchronous FSHLog

2016-03-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218712#comment-15218712
 ] 

stack commented on HBASE-15265:
---

There I go asking for the same stuff again... Thanks for filing HBASE-15536. I've 
been filling it in. Reviewed the prerequisite issues. Will finish the perf test. 

> Implement an asynchronous FSHLog
> 
>
> Key: HBASE-15265
> URL: https://issues.apache.org/jira/browse/HBASE-15265
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15265-v1.patch, HBASE-15265-v2.patch, 
> HBASE-15265-v3.patch, HBASE-15265-v4.patch, HBASE-15265-v5.patch, 
> HBASE-15265-v6.patch, HBASE-15265-v7.patch, HBASE-15265-v8.patch, 
> HBASE-15265-v8.patch, HBASE-15265-v8.patch, HBASE-15265-v8.patch, 
> HBASE-15265-v8.patch, HBASE-15265-v8.patch, HBASE-15265-v8.patch, 
> HBASE-15265-v8.patch, HBASE-15265.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15565) Rewrite restore with Procedure V2

2016-03-30 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15565:
---
Attachment: 15565-v1.txt

Patch v1 is based on Vlad's work in HBASE-14123

> Rewrite restore with Procedure V2
> -
>
> Key: HBASE-15565
> URL: https://issues.apache.org/jira/browse/HBASE-15565
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15565-v1.txt
>
>
> Currently restore is driven by RestoreClientImpl#restore().
> This issue rewrites the flow using Procedure V2.
> RestoreTablesProcedure would replace RestoreClientImpl.
> Main logic would be driven by executeFromState() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15327) Canary will always invoke admin.balancer() in each sniffing period when writeSniffing is enabled

2016-03-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218709#comment-15218709
 ] 

Hudson commented on HBASE-15327:


FAILURE: Integrated in HBase-Trunk_matrix #815 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/815/])
HBASE-15327 Canary will always invoke admin.balancer() in each sniffing (tedyu: 
rev 31aee19f28e56070e128c1bada2d87b53e1fd656)
* hbase-server/src/main/java/org/apache/hadoop/hbase/tool/Canary.java


> Canary will always invoke admin.balancer() in each sniffing period when 
> writeSniffing is enabled
> 
>
> Key: HBASE-15327
> URL: https://issues.apache.org/jira/browse/HBASE-15327
> Project: HBase
>  Issue Type: Bug
>  Components: canary
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15327-branch-1-v1.patch, HBASE-15327-trunk.patch, 
> HBASE-15327-trunk.patch, HBASE-15327-v1.patch
>
>
> When Canary#writeSniffing is enabled, Canary#checkWriteTableDistribution will 
> make sure the regions of the write table are distributed on all region servers as:
> {code}
>   int numberOfServers = admin.getClusterStatus().getServers().size();
>   ..
>   int numberOfCoveredServers = serverSet.size();
>   if (numberOfCoveredServers < numberOfServers) {
> admin.balancer();
>   }
> {code}
> The master will also work as a regionserver, so ClusterStatus#getServers 
> will contain the master. On the other hand, the Canary write table will not be 
> assigned to the master, making numberOfCoveredServers always smaller than 
> numberOfServers and admin.balancer() always invoked in each sniffing period. 
> This may cause frequent region moves. A simple fix is excluding the master from 
> numberOfServers.
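
A sketch of the fix direction mentioned in the last sentence (illustrative only, 
not necessarily the committed patch):

{code}
import java.util.Collection;
import org.apache.hadoop.hbase.ClusterStatus;
import org.apache.hadoop.hbase.ServerName;

// Count only the servers that can actually host the Canary write table,
// i.e. exclude the master from the comparison against numberOfCoveredServers.
public final class CanaryServerCount {
  static int writeTargetServerCount(ClusterStatus status) {
    Collection<ServerName> servers = status.getServers();
    int numberOfServers = servers.size();
    ServerName master = status.getMaster();
    if (master != null && servers.contains(master)) {
      numberOfServers--; // the master will not be assigned Canary write regions
    }
    return numberOfServers;
  }
}
{code}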



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15565) Rewrite restore with Procedure V2

2016-03-30 Thread Ted Yu (JIRA)
Ted Yu created HBASE-15565:
--

 Summary: Rewrite restore with Procedure V2
 Key: HBASE-15565
 URL: https://issues.apache.org/jira/browse/HBASE-15565
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Ted Yu


Currently restore is driven by RestoreClientImpl#restore().

This issue rewrites the flow using Procedure V2.

RestoreTablesProcedure would replace RestoreClientImpl.
Main logic would be driven by executeFromState() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15536) Make AsyncFSWAL as our default WAL

2016-03-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218703#comment-15218703
 ] 

stack commented on HBASE-15536:
---

I reviewed HBASE-15407 and HBASE-15538. They seem good to go. Before we can cut 
over, we should do a secure deploy to ensure it all basically works 
(though the bundled tests are good).

> Make AsyncFSWAL as our default WAL
> --
>
> Key: HBASE-15536
> URL: https://issues.apache.org/jira/browse/HBASE-15536
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>
> As it should be predicated on passing basic cluster ITBLL



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15324) Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy and trigger unexpected split

2016-03-30 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218637#comment-15218637
 ] 

Elliott Clark commented on HBASE-15324:
---

+1

> Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy 
> and trigger unexpected split
> --
>
> Key: HBASE-15324
> URL: https://issues.apache.org/jira/browse/HBASE-15324
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.1.3
>Reporter: Yu Li
>Assignee: Yu Li
> Attachments: HBASE-15324.patch, HBASE-15324_v2.patch, 
> HBASE-15324_v3.patch, HBASE-15324_v3.patch
>
>
> We introduced jitter for the region split decision in HBASE-13412, but the 
> following line in {{ConstantSizeRegionSplitPolicy}} may cause a long value 
> overflow if MAX_FILESIZE is set to Long.MAX_VALUE:
> {code}
> this.desiredMaxFileSize += (long)(desiredMaxFileSize * (RANDOM.nextFloat() - 
> 0.5D) * jitter);
> {code}
> In our case we set MAX_FILESIZE to Long.MAX_VALUE to prevent the target 
> region from splitting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15324) Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy and trigger unexpected split

2016-03-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218614#comment-15218614
 ] 

Hadoop QA commented on HBASE-15324:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
26s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 3s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 
52s {color} | {color:green} hbase-server: patch generated 0 new + 0 unchanged - 
2 fixed = 0 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
38m 35s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 158m 49s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 233m 35s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures |
|   | org.apache.hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster |
|   | org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12796089/HBASE-15324_v3.patch |
| JIRA Issue | HBASE-15324 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf910.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HBASE-15525) OutOfMemory could occur when using BoundedByteBufferPool during RPC bursts

2016-03-30 Thread deepankar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218608#comment-15218608
 ] 

deepankar commented on HBASE-15525:
---

Sure happy to help

> OutOfMemory could occur when using BoundedByteBufferPool during RPC bursts
> --
>
> Key: HBASE-15525
> URL: https://issues.apache.org/jira/browse/HBASE-15525
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Reporter: deepankar
>Assignee: Anoop Sam John
>Priority: Critical
> Attachments: WIP.patch
>
>
> After HBASE-13819 the system sometimes runs out of direct memory whenever 
> there is some network congestion or some client-side issue.
> This was because of pending RPCs in the RPCServer$Connection.responseQueue: 
> since all the responses in this queue hold a buffer for the cellblock from 
> BoundedByteBufferPool, this could take up a lot of memory if the 
> BoundedByteBufferPool's moving average settles down towards a higher value. 
> See the discussion here: 
> [HBASE-13819-comment|https://issues.apache.org/jira/browse/HBASE-13819?focusedCommentId=15207822=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15207822]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15437) Response size calculated in RPCServer for warning tooLarge responses does NOT count CellScanner payload

2016-03-30 Thread deepankar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218605#comment-15218605
 ] 

deepankar commented on HBASE-15437:
---

This means that the responseTime warning will not contain the responseSize, and the 
responseSize warning will not contain information about responseTimes, right?

> Response size calculated in RPCServer for warning tooLarge responses does NOT 
> count CellScanner payload
> ---
>
> Key: HBASE-15437
> URL: https://issues.apache.org/jira/browse/HBASE-15437
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC
>Reporter: deepankar
>
> After HBASE-13158, where we respond back to RPCs with cells in the payload, 
> the protobuf response will just have the count of the cells to read from the 
> payload. There is a set of features where we log a warning in RPCServer 
> whenever the response is tooLarge, but this size now does not consider the 
> sizes of the cells in the PayloadCellScanner. Code from RPCServer:
> {code}
>   long responseSize = result.getSerializedSize();
>   // log any RPC responses that are slower than the configured warn
>   // response time or larger than configured warning size
>   boolean tooSlow = (processingTime > warnResponseTime && 
> warnResponseTime > -1);
>   boolean tooLarge = (responseSize > warnResponseSize && warnResponseSize 
> > -1);
>   if (tooSlow || tooLarge) {
> // when tagging, we let TooLarge trump TooSmall to keep output simple
> // note that large responses will often also be slow.
> logResponse(new Object[]{param},
> md.getName(), md.getName() + "(" + param.getClass().getName() + 
> ")",
> (tooLarge ? "TooLarge" : "TooSlow"),
> status.getClient(), startTime, processingTime, qTime,
> responseSize);
>   }
> {code}
> Should this feature no longer be supported, or should we add a method to 
> CellScanner or a new interface which returns the serialized size (though this 
> might not account for the compression codecs which might be used during the 
> response)? Any other idea how this could be fixed?
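
One possible direction, sketched only (it assumes a second pass over the cells, 
which in RpcServer would really have to happen while the cell block is being 
encoded, since a CellScanner can only be walked once):

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellScanner;

// Rough payload-size estimate from the cell components. It ignores tags,
// per-cell framing and any codec/compression applied on the wire.
public final class ResponseSizeEstimator {
  static long estimatePayloadSize(CellScanner cells) throws IOException {
    if (cells == null) {
      return 0;
    }
    long size = 0;
    while (cells.advance()) {
      Cell c = cells.current();
      size += c.getRowLength() + c.getFamilyLength()
            + c.getQualifierLength() + c.getValueLength();
    }
    return size;
  }
}
{code}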



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14983) Create metrics for per block type hit/miss ratios

2016-03-30 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14983:
--
Attachment: HBASE-14983-branch-1.patch

Branch-1 patch.

> Create metrics for per block type hit/miss ratios
> -
>
> Key: HBASE-14983
> URL: https://issues.apache.org/jira/browse/HBASE-14983
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14983-branch-1.patch, HBASE-14983-v1.patch, 
> HBASE-14983-v10.patch, HBASE-14983-v2.patch, HBASE-14983-v3.patch, 
> HBASE-14983-v4.patch, HBASE-14983-v5.patch, HBASE-14983-v6.patch, 
> HBASE-14983-v7.patch, HBASE-14983-v8.patch, HBASE-14983-v9.patch, 
> HBASE-14983.patch, Screen Shot 2015-12-15 at 3.33.09 PM.png
>
>
> Missing a root index block is worse than missing a data block. We should know 
> the difference



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15327) Canary will always invoke admin.balancer() in each sniffing period when writeSniffing is enabled

2016-03-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218598#comment-15218598
 ] 

Hudson commented on HBASE-15327:


SUCCESS: Integrated in HBase-1.3-IT #589 (See 
[https://builds.apache.org/job/HBase-1.3-IT/589/])
HBASE-15327 Canary will always invoke admin.balancer() in each sniffing (tedyu: 
rev e339bec3f15707fc4a2d464befeac485c08ad21d)
* hbase-server/src/main/java/org/apache/hadoop/hbase/tool/Canary.java


> Canary will always invoke admin.balancer() in each sniffing period when 
> writeSniffing is enabled
> 
>
> Key: HBASE-15327
> URL: https://issues.apache.org/jira/browse/HBASE-15327
> Project: HBase
>  Issue Type: Bug
>  Components: canary
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15327-branch-1-v1.patch, HBASE-15327-trunk.patch, 
> HBASE-15327-trunk.patch, HBASE-15327-v1.patch
>
>
> When Canary#writeSniffing is enabled, Canary#checkWriteTableDistribution will 
> make sure the regions of the write table are distributed on all region servers as:
> {code}
>   int numberOfServers = admin.getClusterStatus().getServers().size();
>   ..
>   int numberOfCoveredServers = serverSet.size();
>   if (numberOfCoveredServers < numberOfServers) {
> admin.balancer();
>   }
> {code}
> The master will also work as a regionserver, so ClusterStatus#getServers 
> will contain the master. On the other hand, the Canary write table will not be 
> assigned to the master, making numberOfCoveredServers always smaller than 
> numberOfServers and admin.balancer() always invoked in each sniffing period. 
> This may cause frequent region moves. A simple fix is excluding the master from 
> numberOfServers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15559) BaseMasterAndRegionObserver doesn't implement all the methods

2016-03-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218596#comment-15218596
 ] 

Hudson commented on HBASE-15559:


SUCCESS: Integrated in HBase-1.3-IT #589 (See 
[https://builds.apache.org/job/HBase-1.3-IT/589/])
HBASE-15559 Fix  BaseMasterAndRegionObserver doesn't implement all the (eclark: 
rev fee0212da02ee8016f81983619733401b7097336)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseMasterAndRegionObserver.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java


> BaseMasterAndRegionObserver doesn't implement all the methods
> -
>
> Key: HBASE-15559
> URL: https://issues.apache.org/jira/browse/HBASE-15559
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15559-v1.patch, HBASE-15559-v2.patch, 
> HBASE-15559.patch
>
>
> It's supposed to be a class that someone can derive from, implementing only 
> the desired methods. However, two methods aren't implemented.
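
The pattern in question is the usual no-op adapter base class. A minimal 
generic sketch, using a hypothetical Observer interface and method names (not 
the actual coprocessor API), of why a missing default matters:
{code}
// Hypothetical interface, for illustration only.
public interface Observer {
  void preOpen();
  void postOpen();
  void preClose();
}

// The base class supplies an empty default for every method, so subclasses
// override only what they care about. If the base class forgets one method
// (the bug here), every subclass is forced to implement it itself.
public abstract class BaseObserver implements Observer {
  @Override public void preOpen() {}
  @Override public void postOpen() {}
  @Override public void preClose() {}
}

// Example subclass that only cares about postOpen.
public class PostOpenOnlyObserver extends BaseObserver {
  @Override public void postOpen() {
    // custom logic
  }
}
{code}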



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15554) StoreFile$Writer.appendGeneralBloomFilter generates extra KV

2016-03-30 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-15554:
--
Description: Accounts for 10% memory allocation in compaction thread when 
BloomFilterType is ROWCOL.  (was: Accounts for 10% memory allocation when 
BloomFilterType is ROWCOL.)

> StoreFile$Writer.appendGeneralBloomFilter generates extra KV
> 
>
> Key: HBASE-15554
> URL: https://issues.apache.org/jira/browse/HBASE-15554
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Reporter: Vladimir Rodionov
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
>
> Accounts for 10% memory allocation in compaction thread when BloomFilterType 
> is ROWCOL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13639) SyncTable - rsync for HBase tables

2016-03-30 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218582#comment-15218582
 ] 

Enis Soztutar commented on HBASE-13639:
---

[~davelatham] this is a great explanation. Do you think we can simplify usage 
so that the tool runs these multiple steps (hashtable + distcp + synctable) 
automatically? The tool could take all the arguments needed and coordinate 
running the steps, making it easier to use and harder to make mistakes. Shall 
we open a follow-up issue?
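
A rough sketch of what such a wrapper could look like, assuming HashTable and 
SyncTable expose the standard Hadoop Tool interface and accept the positional 
arguments shown in their usage output (both are assumptions to verify against 
the actual classes):
{code}
// Sketch only -- constructor signatures and positional arguments are assumed,
// not verified against the real classes.
Configuration conf = HBaseConfiguration.create();

// 1. Hash the source table into a directory on the source cluster.
int rc = ToolRunner.run(conf, new HashTable(conf),
    new String[] { "sourceTable", "/hashes/sourceTable" });

// 2. Ship the hash directory to the target cluster (e.g. with distcp) --
//    omitted here.

// 3. Run SyncTable on the target, pointing at the copied hash directory.
if (rc == 0) {
  rc = ToolRunner.run(conf, new SyncTable(conf),
      new String[] { "hdfs://target-nn/hashes/sourceTable", "sourceTable", "targetTable" });
}
{code}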

> SyncTable - rsync for HBase tables
> --
>
> Key: HBASE-13639
> URL: https://issues.apache.org/jira/browse/HBASE-13639
> Project: HBase
>  Issue Type: New Feature
>  Components: mapreduce, Operability, tooling
>Reporter: Dave Latham
>Assignee: Dave Latham
>  Labels: tooling
> Fix For: 2.0.0, 0.98.14, 1.2.0
>
> Attachments: HBASE-13639-0.98-addendum-hadoop-1.patch, 
> HBASE-13639-0.98.patch, HBASE-13639-v1.patch, HBASE-13639-v2.patch, 
> HBASE-13639-v3-0.98.patch, HBASE-13639-v3.patch, HBASE-13639.patch
>
>
> Given HBase tables in remote clusters with similar but not identical data, 
> efficiently update a target table such that the data in question is identical 
> to a source table.  Efficiency in this context means using far less network 
> traffic than would be required to ship all the data from one cluster to the 
> other.  Takes inspiration from rsync.
> Design doc: 
> https://docs.google.com/document/d/1-2c9kJEWNrXf5V4q_wBcoIXfdchN7Pxvxv1IO6PW0-U/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15506) FSDataOutputStream.write() allocates new byte buffer on each operation

2016-03-30 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218586#comment-15218586
 ] 

Vladimir Rodionov commented on HBASE-15506:
---

[~lhofhansl]

Reusing objects (memory) is a standard approach in high-performance servers.
http://natsys-lab.blogspot.com/2015/09/fast-memory-pool-allocators-boost-nginx.html
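
As a generic illustration of the pattern (not the DFSOutputStream or HBase 
code), a per-thread reusable buffer avoids allocating a fresh byte[] on every 
write:
{code}
// Generic sketch of buffer reuse, for illustration only.
private static final ThreadLocal<byte[]> PACKET_BUF =
    new ThreadLocal<byte[]>() {
      @Override protected byte[] initialValue() {
        return new byte[64 * 1024]; // sized for the largest expected packet
      }
    };

void writePacket(OutputStream out, int len) throws IOException {
  byte[] buf = PACKET_BUF.get(); // reused across calls on this thread
  // ... fill buf[0..len) with packet bytes ...
  out.write(buf, 0, len);        // no per-call allocation
}
{code}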
 

> FSDataOutputStream.write() allocates new byte buffer on each operation
> --
>
> Key: HBASE-15506
> URL: https://issues.apache.org/jira/browse/HBASE-15506
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>
> The allocation happens deep in the stack, in DFSOutputStream.createPacket.
> This should be opened in HDFS; this JIRA is to track the HDFS work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15564) HashTable job supposedly succeeded but manifest.tmp remain

2016-03-30 Thread My Ho (JIRA)
My Ho created HBASE-15564:
-

 Summary: HashTable job supposedly succeeded but manifest.tmp remain
 Key: HBASE-15564
 URL: https://issues.apache.org/jira/browse/HBASE-15564
 Project: HBase
  Issue Type: Bug
  Components: mapreduce, Replication
Affects Versions: 1.2.0
 Environment: Ubuntu 12.04
Reporter: My Ho


I'm using org.apache.hadoop.hbase.mapreduce.HashTable to create hashes for use 
in SyncTable.  Occasionally, the job page in jobhistory will say the job 
succeeded, but in my filesystem I see "manifest.tmp" instead of the expected 
"manifest".  According to the code[1], the job must have failed, but I don't 
see a failure anywhere.

[1]https://github.com/apache/hbase/blob/ad3feaa44800f10d102255a240c38ccf23a82d49/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HashTable.java#L739-L741
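
For context, the usual reason a leftover *.tmp file like this appears is a 
write-then-rename pattern where the finalize step never ran, even though the 
MapReduce job itself reported success. A generic sketch of that pattern 
(illustrative only, not the code at the link above; fs and outputDir are 
assumed to be in scope):
{code}
// Write the manifest to a temporary name, then promote it on success.
Path tmp = new Path(outputDir, "manifest.tmp");
Path done = new Path(outputDir, "manifest");

FSDataOutputStream out = fs.create(tmp, true);
try {
  // ... write the manifest contents ...
} finally {
  out.close();
}
// Only a fully successful run renames the temporary file into place;
// a surviving manifest.tmp means this step never happened.
if (!fs.rename(tmp, done)) {
  throw new IOException("Failed to finalize " + done);
}
{code}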



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15559) BaseMasterAndRegionObserver doesn't implement all the methods

2016-03-30 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-15559:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> BaseMasterAndRegionObserver doesn't implement all the methods
> -
>
> Key: HBASE-15559
> URL: https://issues.apache.org/jira/browse/HBASE-15559
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15559-v1.patch, HBASE-15559-v2.patch, 
> HBASE-15559.patch
>
>
> It's supposed to be a class that someone can derive from, implementing only 
> the desired methods. However, two methods aren't implemented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15499) Add multiple data type support for increment

2016-03-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218561#comment-15218561
 ] 

Hadoop QA commented on HBASE-15499:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 1s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.2.0/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 48s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
26s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 51s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 25s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 
54s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 1s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 5s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 33s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 48s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 21s 
{color} | {color:red} hbase-common: patch generated 60 new + 0 unchanged - 0 
fixed = 60 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 14s 
{color} | {color:red} hbase-common: patch generated 60 new + 0 unchanged - 0 
fixed = 60 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 13s 
{color} | {color:red} hbase-client: patch generated 60 new + 0 unchanged - 0 
fixed = 60 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 13s 
{color} | {color:red} hbase-client: patch generated 60 new + 0 unchanged - 0 
fixed = 60 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 15s 
{color} | {color:red} hbase-server: patch generated 60 new + 0 unchanged - 0 
fixed = 60 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 12s 
{color} | {color:red} hbase-server: patch generated 60 new + 0 unchanged - 0 
fixed = 60 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
30s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 28s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 
1s {color} | {color:green} the patch passed 

[jira] [Commented] (HBASE-15536) Make AsyncFSWAL as our default WAL

2016-03-30 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15218558#comment-15218558
 ] 

stack commented on HBASE-15536:
---

An overnight run passed (10B ITBLL with chaos monkeys on an 8-node cluster), so 
this provider is probably as good/bad as the one we currently have (10 hours). 
An earlier two-hour test also passed. Let me run a perf compare next.

> Make AsyncFSWAL as our default WAL
> --
>
> Key: HBASE-15536
> URL: https://issues.apache.org/jira/browse/HBASE-15536
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>
> Making it the default should be predicated on passing a basic cluster ITBLL run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

