[jira] [Commented] (HBASE-16689) Durability == ASYNC_WAL means no SYNC

2016-09-22 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515511#comment-15515511
 ] 

Yu Li commented on HBASE-16689:
---

+1, we also found this during benchmark testing weeks ago but didn't find a 
good way to fix it, and since there will always be some SYNC calls online, we 
marked this as relatively low priority on our side. Do you already have some 
idea about how to fix this, sir? [~stack]

btw, we also found that because we currently use one disruptor per WAL 
and the disruptor is consumed sequentially, we've seen more contention among 
different regions under high write pressure. I have a patch at hand and will 
open another JIRA with more details soon, JFYI.
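For illustration only (a toy model, not HBase or LMAX Disruptor code; all names here are hypothetical): with one sequentially consumed queue per WAL, appends from every region on the RS funnel through the same single consumer, so writers to unrelated regions still serialize behind each other.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of a per-WAL queue with a single sequential consumer.
class SingleWalQueue {
    private final Queue<String> ring = new ArrayDeque<>();

    // Every region on the RS publishes into the same queue, so writers
    // to unrelated regions still contend on this one structure.
    void append(String regionEdit) {
        ring.add(regionEdit);
    }

    // The lone consumer drains strictly in arrival order; an edit for
    // regionB cannot be synced ahead of an earlier edit for regionA.
    String drainInOrder() {
        StringBuilder order = new StringBuilder();
        while (!ring.isEmpty()) {
            if (order.length() > 0) order.append(' ');
            order.append(ring.poll());
        }
        return order.toString();
    }
}
```

Splitting work across regions (the direction Yu Li's patch presumably takes) would mean more than one such queue or consumer, trading strict global ordering for less cross-region contention.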

> Durability == ASYNC_WAL means no SYNC
> -
>
> Key: HBASE-16689
> URL: https://issues.apache.org/jira/browse/HBASE-16689
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.0.3, 1.1.6, 1.2.3
>Reporter: stack
>Assignee: stack
>Priority: Critical
>
> Setting DURABILITY=ASYNC_WAL on a Table suspends all syncs for all Table 
> appends. If all tables on a cluster have this setting, data is flushed 
> from the RS to the DN at some arbitrary time and a bunch may just hang out in 
> DFSClient buffers on the RS-side indefinitely if writes are sporadic, at 
> least until there is a WAL roll -- a log roll sends a sync through the write 
> pipeline to flush out any outstanding appends -- or a region close which does 
> similar, or we crash and drop the data in the RS's buffers.
> This is probably not what a user expects when they set ASYNC_WAL (We don't 
> doc anywhere that I could find clearly what ASYNC_WAL means). Worse, old-time 
> users probably associate ASYNC_WAL and DEFERRED_FLUSH, an old 
> HTableDescriptor config that was deprecated and replaced by ASYNC_WAL. 
> DEFERRED_FLUSH ran a background thread -- LogSyncer -- that on a configurable 
> interval, sent a sync down the write pipeline so any outstanding appends 
> since the last interval start get pushed out to the DN.  ASYNC_WAL doesn't 
> do this (see below for history on how we let go of the LogSyncer feature).
> Of note, we always sync meta edits. You can't turn this off. Also, given WALs 
> are per regionserver, if other regions on the RS are from tables that have 
> sync set, these writes will push out to the DN any appends done on tables 
> that have DEFERRED/ASYNC_WAL set.
> To fix, we could do a few things:
>  * Simple and comprehensive would be always queuing a sync, even if ASYNC_WAL 
> is set but we let go of Handlers as soon as we write the memstore -- we don't 
> wait on the sync to complete as we do with the default setting of 
> Durability=SYNC_WAL.
>  * Be like a 'real' database and add in a sync after N bytes of data have 
> been appended (configurable) or after M milliseconds have passed, whichever 
> threshold happens first. The size check would be easy. The sync-every-M-millis 
> would mean another thread.
>  * Doc what ASYNC_WAL means (and other durability options)
> Let me take a look and report back. Will file a bit of history on how we got 
> here in next comment.
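The second option above -- sync after N bytes appended or after M milliseconds, whichever threshold trips first -- could look roughly like the sketch below. This is an illustrative policy object with hypothetical names (SyncPolicy, onAppend), not HBase code; as the description notes, the time-based side would also need a thread to drive it.

```java
// Hypothetical sketch of a size-or-time sync trigger (not HBase code).
class SyncPolicy {
    private final long maxBytes;   // N: sync after this many appended bytes
    private final long maxMillis;  // M: or after this much elapsed time
    private long pendingBytes = 0;
    private long lastSyncTime;

    SyncPolicy(long maxBytes, long maxMillis, long now) {
        this.maxBytes = maxBytes;
        this.maxMillis = maxMillis;
        this.lastSyncTime = now;
    }

    // Record an append; return true if a sync should be queued now.
    // Resets the counters whenever either threshold fires.
    boolean onAppend(long bytes, long now) {
        pendingBytes += bytes;
        if (pendingBytes >= maxBytes || now - lastSyncTime >= maxMillis) {
            pendingBytes = 0;
            lastSyncTime = now;
            return true;
        }
        return false;
    }
}
```

A WAL append path would call onAppend(...) per edit and queue a sync whenever it returns true; in practice the M-millis check would be driven by a background thread rather than piggybacking on appends.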



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16643) Reverse scanner heap creation may not allow MSLAB closure due to improper ref counting of segments

2016-09-22 Thread ramkrishna vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515508#comment-15515508
 ] 

ramkrishna vasudevan commented on HBASE-16643:
--

Hi
Are you describing a new issue, @sunyu?

Regards
Ram




> Reverse scanner heap creation may not allow MSLAB closure due to improper ref 
> counting of segments
> --
>
> Key: HBASE-16643
> URL: https://issues.apache.org/jira/browse/HBASE-16643
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-16643.patch, HBASE-16643_1.patch, 
> HBASE-16643_2.patch, HBASE-16643_3.patch, HBASE-16643_4.patch, 
> HBASE-16643_5.patch
>
>
> In the reverse scanner case,
> While doing 'initBackwardHeapIfNeeded' in MemstoreScanner for setting the 
> backward heap, we do a 
> {code}
> if ((backwardHeap == null) && (forwardHeap != null)) {
> forwardHeap.close();
> forwardHeap = null;
> // before building the heap seek for the relevant key on the scanners,
> // for the heap to be built from the scanners correctly
> for (KeyValueScanner scan : scanners) {
>   if (toLast) {
> res |= scan.seekToLastRow();
>   } else {
> res |= scan.backwardSeek(cell);
>   }
> }
> {code}
> The forwardHeap.close() call internally decrements the MSLAB ref counter 
> for the current active segment and snapshot segment.
> When the scan is actually closed we call close() again, and that decrements 
> the count once more. The count can therefore go negative, so the actual 
> MSLAB closure, which checks for refCount==0, will fail.
> Apart from this, if the refCount reaches 0 after the first close and any 
> other thread then requests to close the segment, we end up with a corrupted 
> segment because the segment could be put back into the MSLAB pool. 
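For illustration only, one common way to prevent this kind of double decrement is an idempotent close guard on the scanner. The sketch below uses hypothetical names (RefCountedSegment, GuardedScanner) and is not the HBASE-16643 patch; it just shows the at-most-once-decrement idea.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// A segment whose lifetime is tracked by a shared ref count
// (hypothetical stand-in for an MSLAB-backed segment).
class RefCountedSegment {
    final AtomicInteger refCount = new AtomicInteger(0);
    void incScannerCount() { refCount.incrementAndGet(); }
    void decScannerCount() { refCount.decrementAndGet(); }
}

// A scanner that decrements the segment's count at most once,
// no matter how many times close() is invoked.
class GuardedScanner {
    private final RefCountedSegment segment;
    private final AtomicBoolean closed = new AtomicBoolean(false);

    GuardedScanner(RefCountedSegment segment) {
        this.segment = segment;
        segment.incScannerCount();
    }

    // Safe to call repeatedly; only the first call decrements.
    void close() {
        if (closed.compareAndSet(false, true)) {
            segment.decScannerCount();
        }
    }
}
```

With such a guard, closing the forward heap inside initBackwardHeapIfNeeded and closing the scan again later would leave the ref count balanced at zero instead of negative.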



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16678) MapReduce jobs do not update counters from ScanMetrics

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515463#comment-15515463
 ] 

Hadoop QA commented on HBASE-16678:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 5s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 36m 7s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 127m 7s {color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s {color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 179m 40s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestRegionServerMetrics |
| Timed out junit tests | org.apache.hadoop.hbase.client.TestReplicasClient |
|   | org.apache.hadoop.hbase.client.TestClientScannerRPCTimeout |
|   | org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient |
|   | org.apache.hadoop.hbase.client.TestBlockEvictionFromClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.0 Server=1.12.0 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12829973/hbase-16678_v1.patch |
| JIRA Issue | HBASE-16678 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux eb8476a875b3 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 07ed155 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HBASE-Build/3677/artifact/patchprocess/patch-unit-hbase-server.txt |
| unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/3677/artifact/patchprocess/patch-unit-hbase-server.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/3677/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/3677/console |

Re: [jira] [Updated] (HBASE-16643) Reverse scanner heap creation may not allow MSLAB closure due to improper ref counting of segments

2016-09-22 Thread sunyu1949
        String msg = "Results sent from server=" + noOfResults + ". But only got "
            + i + " results completely at client. Resetting the scanner to scan again.";
        LOG.error(msg);
        throw new DoNotRetryIOException(msg);
      }
    } catch (IOException ioe) {
      // We are getting IOE while retrieving the cells for Results.
      // We have to scan for the same results again. Throwing DNRIOE as a client
      // retry on the same scanner will result in OutOfOrderScannerNextException
      LOG.error("Exception while reading cells from result."
          + "Resetting the scanner to scan again.", ioe);
      throw new DoNotRetryIOException("Resetting the scanner.", ioe);
    }
    cells.add(cellScanner.current());
    System.out.println("ResponseConverter.getResults(1)--cells == " + cells
        + "--j == " + j + "---noOfCells == " + noOfCells);
  }
  results[i] = Result.create(cells, null, response.getStale(), isPartial);
  System.out.println("ResponseConverter.getResults(2)--cells == " + cells
      + "--results[i] == " + results[i] + "---noOfCells == " + noOfCells);
} else {
  // Result is pure pb.
  results[i] = ProtobufUtil.toResult(response.getResults(i));
  System.out.println("ResponseConverter.getResults(3)--results == " + results[i]
      + "--i == " + i);
}
}
return results;

Thank you very much, Anoop. Do you want to see this code? The exception should occur on the red-marked line.
----- Original Message -----
From: "ramkrishna.s.vasudevan (JIRA)" 
To: issues@hbase.apache.org
Subject: [jira] [Updated] (HBASE-16643) Reverse scanner heap creation may not allow 
MSLAB closure due to improper ref counting of segments
Date: 2016-09-23 13:25


 [ 
https://issues.apache.org/jira/browse/HBASE-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]
ramkrishna.s.vasudevan updated HBASE-16643:
---
Status: Open  (was: Patch Available)
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16643) Reverse scanner heap creation may not allow MSLAB closure due to improper ref counting of segments

2016-09-22 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-16643:
---
Status: Open  (was: Patch Available)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16643) Reverse scanner heap creation may not allow MSLAB closure due to improper ref counting of segments

2016-09-22 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-16643:
---
Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16643) Reverse scanner heap creation may not allow MSLAB closure due to improper ref counting of segments

2016-09-22 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-16643:
---
Attachment: HBASE-16643_5.patch

Updated patch based on RB comments. I have left some replies in the RB for all 
the comments and questions. The SegmentScanner change is needed is what I 
believe. If that is not needed in this JIRA I can remove it and raise another 
JIRA for discussion.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16643) Reverse scanner heap creation may not allow MSLAB closure due to improper ref counting of segments

2016-09-22 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-16643:
---
Status: Open  (was: Patch Available)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16680) Reduce garbage in BufferChain

2016-09-22 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515426#comment-15515426
 ] 

Ashish Singhi commented on HBASE-16680:
---

My bad :(
Thanks for noticing and correcting it.

> Reduce garbage in BufferChain
> -
>
> Key: HBASE-16680
> URL: https://issues.apache.org/jira/browse/HBASE-16680
> Project: HBase
>  Issue Type: Improvement
>Reporter: binlijin
>Assignee: binlijin
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16680-master.patch, HBASE-16680-master_v2.patch
>
>
> BufferChain accepts a List and then converts it to a ByteBuffer[]; we 
> can directly produce the ByteBuffer[] and hand it to BufferChain, eliminating 
> the intermediate List object.
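The change reads roughly as below. Names are hypothetical and this is not the actual patch, just a sketch of replacing the List-then-toArray step with direct array construction so the intermediate List is never allocated.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

class Buffers {
    // Before: collect buffers into a List, then convert it to an array.
    // Allocates a List, its backing array, and the final ByteBuffer[].
    static ByteBuffer[] viaList(ByteBuffer header, ByteBuffer body) {
        List<ByteBuffer> list = new ArrayList<>();
        list.add(header);
        list.add(body);
        return list.toArray(new ByteBuffer[0]);
    }

    // After: produce the ByteBuffer[] directly; no List in between,
    // so only the final array is allocated.
    static ByteBuffer[] direct(ByteBuffer header, ByteBuffer body) {
        return new ByteBuffer[] { header, body };
    }
}
```

Both paths yield the same array contents; the second simply skips the garbage, which matters on a hot RPC response path.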



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16689) Durability == ASYNC_WAL means no SYNC

2016-09-22 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16689:
--
Description: 
Setting DURABILITY=ASYNC_WAL on a Table suspends all syncs for all Table 
appends. If all tables on a cluster have this setting, data is flushed from the 
RS to the DN at some arbitrary time and a bunch may just hang out in DFSClient 
buffers on the RS-side indefinitely if writes are sporadic, at least until 
there is a WAL roll -- a log roll sends a sync through the write pipeline to 
flush out any outstanding appends -- or a region close which does similar 
or we crash and drop the data in the RS's buffers.

This is probably not what a user expects when they set ASYNC_WAL (We don't doc 
anywhere that I could find clearly what ASYNC_WAL means). Worse, old-time users 
probably associate ASYNC_WAL and DEFERRED_FLUSH, an old HTableDescriptor config 
that was deprecated and replaced by ASYNC_WAL. DEFERRED_FLUSH ran a background 
thread -- LogSyncer -- that on a configurable interval, sent a sync down the 
write pipeline so any outstanding appends since the last interval start get 
pushed out to the DN.  ASYNC_WAL doesn't do this (see below for history on how 
we let go of the LogSyncer feature).

Of note, we always sync meta edits. You can't turn this off. Also, given WALs 
are per regionserver, if other regions on the RS are from tables that have sync 
set, these writes will push out to the DN any appends done on tables that have 
DEFERRED/ASYNC_WAL set.

To fix, we could do a few things:

 * Simple and comprehensive would be always queuing a sync, even if ASYNC_WAL 
is set but we let go of Handlers as soon as we write the memstore -- we don't 
wait on the sync to complete as we do with the default setting of 
Durability=SYNC_WAL.
 * Be like a 'real' database and add in a sync after N bytes of data have been 
appended (configurable) or after M milliseconds have passed, whichever 
threshold happens first. The size check would be easy. The sync-every-M-millis 
would mean another thread.
 * Doc what ASYNC_WAL means (and other durability options)

Let me take a look and report back. Will file a bit of history on how we got 
here in next comment.

  was:
Setting DURABILITY=ASYNC_WAL on a Table suspends all syncs for all Table 
appends. If all tables on a cluster have this setting, data is flushed from the 
RS to the DN at some arbitrary time and a bunch may just hang out in DFSClient 
buffers on the RS-side indefinitely if writes are sporadic, at least until 
there is a WAL roll -- a log roll sends a sync through the write pipeline to 
flush out any outstanding appends -- or a region close which does similar 
or we crash and drop the data in the RS's buffers.

This is probably not what a user expects when they set ASYNC_WAL (We don't doc 
anywhere that I could find clearly what ASYNC_WAL means). Worse, old-time users 
probably associate ASYNC_WAL and DEFERRED_FLUSH, an old HTableDescriptor config 
that was deprecated and replaced by ASYNC_WAL. DEFERRED_FLUSH ran a background 
thread -- LogSyncer -- that on a configurable interval, sent a sync down the 
write pipeline so any outstanding appends since the last interval start get 
pushed out to the DN.  ASYNC_WAL doesn't do this (see below for history on how 
we let go of the LogSyncer feature).

Of note, we always sync meta edits. You can't turn this off. Also, given WALs 
are per regionserver, if other regions on the RS are from tables that have sync 
set, these writes will push out to the DN any appends done on tables that have 
DEFERRED/ASYNC_WAL set.

To fix, we could do a few things:

 * Simple and comprehensive would be always queuing a sync, even if ASYNC_WAL 
is set but we let go of Handlers as soon as we write the memstore -- we don't 
wait on the sync to complete as we do with the default setting of 
Durability=SYNC_WAL.
 * Be like a 'real' database and add in a sync after N bytes of data have been 
appended (configurable) or after M milliseconds have passed, whichever 
threshold happens first. The size check would be easy. The sync-every-M-millis 
would mean another thread.

Let me take a look and report back. Will file a bit of history on how we got 
here in next comment.


> Durability == ASYNC_WAL means no SYNC
> -
>
> Key: HBASE-16689
> URL: https://issues.apache.org/jira/browse/HBASE-16689
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.0.3, 1.1.6, 1.2.3
>Reporter: stack
>Assignee: stack
>Priority: Critical
>

[jira] [Commented] (HBASE-16682) Fix Shell tests failure. NoClassDefFoundError for MiniKdc

2016-09-22 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515386#comment-15515386
 ] 

Appy commented on HBASE-16682:
--

It bothers me not knowing why such a thing is happening. I spent 45 min trying 
to get it working by playing with deps in hbase-testing-util, but I still don't 
understand why hbase-shell isn't getting it transitively.
So I am fine with adding it to hbase-shell/pom for now to fix the broken tests.
This change probably belongs here, since it's linked with HBASE-14734, rather 
than in HBASE-1 which has to do with replication.
Please +1 the v1 patch (the QA failure was because of unmodified tests).

> Fix Shell tests failure. NoClassDefFoundError for MiniKdc
> -
>
> Key: HBASE-16682
> URL: https://issues.apache.org/jira/browse/HBASE-16682
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-16682.master.001.patch, 
> HBASE-16682.master.002.patch
>
>
> Stacktrace
> {noformat}
> java.lang.NoClassDefFoundError: org/apache/hadoop/minikdc/MiniKdc
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
>   at java.lang.Class.getDeclaredMethods(Class.java:1975)
>   at org.jruby.javasupport.JavaClass.getMethods(JavaClass.java:2110)
>   at org.jruby.javasupport.JavaClass.setupClassMethods(JavaClass.java:955)
>   at org.jruby.javasupport.JavaClass.access$700(JavaClass.java:99)
>   at 
> org.jruby.javasupport.JavaClass$ClassInitializer.initialize(JavaClass.java:650)
>   at org.jruby.javasupport.JavaClass.setupProxy(JavaClass.java:689)
>   at org.jruby.javasupport.Java.createProxyClass(Java.java:526)
>   at org.jruby.javasupport.Java.getProxyClass(Java.java:455)
>   at org.jruby.javasupport.Java.getInstance(Java.java:364)
>   at org.jruby.javasupport.JavaUtil.convertJavaToUsableRubyObject(JavaUtil.java:166)
>   at org.jruby.javasupport.JavaEmbedUtils.javaToRuby(JavaEmbedUtils.java:291)
>   at org.jruby.embed.variable.AbstractVariable.updateByJavaObject(AbstractVariable.java:81)
>   at org.jruby.embed.variable.GlobalVariable.<init>(GlobalVariable.java:69)
>   at org.jruby.embed.variable.GlobalVariable.getInstance(GlobalVariable.java:60)
>   at org.jruby.embed.variable.VariableInterceptor.getVariableInstance(VariableInterceptor.java:97)
>   at org.jruby.embed.internal.BiVariableMap.put(BiVariableMap.java:321)
>   at org.jruby.embed.ScriptingContainer.put(ScriptingContainer.java:1123)
>   at org.apache.hadoop.hbase.client.AbstractTestShell.setUpBeforeClass(AbstractTestShell.java:61)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>   at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:108)
>   at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:78)
>   at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:54)
>   at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:144)
>   at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 

[jira] [Commented] (HBASE-16689) Durability == ASYNC_WAL means no SYNC

2016-09-22 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515385#comment-15515385
 ] 

stack commented on HBASE-16689:
---

h1. History

We committed the below in time for 0.98.0:

Author: Michael Stack 
Date:   Fri Dec 13 17:32:09 2013 +

HBASE-8755 A new write thread model for HLog to improve the overall HBase 
write throughput

git-svn-id: https://svn.apache.org/repos/asf/hbase/trunk@1550778 
13f79535-47bb-0310-9956-ffa450edef68

It removed the LogSyncer thread because it didn't fit the new model. From 
comments in above issue:

+   * 6). No LogSyncer thread any more (since there is always 
AsyncWriter/AsyncFlusher threads
+   * do the same job it does)

And from reviews:

{code}
[stack] How does deferred log flush still work when you remove stuff like 
optionalFlushInterval? You say '...don't pend on HLog.syncer() waiting for its 
txid to be sync-ed' but that is another behavior than what we had here 
previously.
===> When say 'still support deferred log flush' I mean for 'deferred log 
flush' it can still response write success to client without wait/pend on 
syncer(txid),
in this sense, the AsyncWriter/AsyncSyncer do what the previous LogSyncer does 
from the point view of the write handler threads: clients don't wait for the 
write persist before get reponse success.
{code}

The above got further clarification over in HBASE-10324:

{code}
"By the new write thread model introduced by HBASE-8755, some 
deferred-log-flush/Durability API/code/names should be change accordingly:
1. no timer-triggered deferred-log-flush since flush is always done by async 
threads, so configuration 'hbase.regionserver.optionallogflushinterval' is no 
longer needed
2. the async writer-syncer-notifier threads will always be triggered 
implicitly, this semantic is that it always holds that 
'hbase.regionserver.optionallogflushinterval' > 0, so deferredLogSyncDisabled 
in HRegion.java which affects durability behavior should always be false
3. what HTableDescriptor.isDeferredLogFlush really means is the write can 
return without waiting for the sync is done, so the interface name should be 
changed to isAsyncLogFlush/setAsyncLogFlush to reflect their real meaning"
{code}

Reading the patch, we just always did sync. There was no support for deferred.

In 1.0.0, a new WAL refactor was brought in by HBASE-10156. It removed all 
vestiges of deferred. They weren't working anyways. But it also changed the 
model. It added support for durability with a variety of actions dependent on 
how durability is set. ASYNC_WAL became a pass through for sync.

> Durability == ASYNC_WAL means no SYNC
> -
>
> Key: HBASE-16689
> URL: https://issues.apache.org/jira/browse/HBASE-16689
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.0.3, 1.1.6, 1.2.3
>Reporter: stack
>Assignee: stack
>Priority: Critical
>
> Setting DURABILITY=ASYNC_WAL on a Table suspends all syncs for all Table 
> appends. If all tables on a cluster have this setting, data is flushed 
> from the RS to the DN at some arbitrary time and a bunch may just hang out in 
> DFSClient buffers on the RS-side indefinitely if writes are sporadic, at 
> least until there is a WAL roll -- a log roll sends a sync through the write 
> pipeline to flush out any outstanding appends -- or a region close, which does 
> similar, or we crash and drop the data in the RS buffers.
> This is probably not what a user expects when they set ASYNC_WAL (We don't 
> doc anywhere that I could find clearly what ASYNC_WAL means). Worse, old-time 
> users probably associate ASYNC_WAL and DEFERRED_FLUSH, an old 
> HTableDescriptor config that was deprecated and replaced by ASYNC_WAL. 
> DEFERRED_FLUSH ran a background thread -- LogSyncer -- that, on a configurable 
> interval, sent a sync down the write pipeline so any outstanding appends 
> since the last interval start get pushed out to the DN. ASYNC_WAL doesn't 
> do this (see below for history on how we let go of the LogSyncer feature).
> Of note, we always sync meta edits. You can't turn this off. Also, given WALs 
> are per regionserver, if other regions on the RS are from tables that have 
> sync set, these writes will push out to the DN any appends done on tables 
> that have DEFERRED/ASYNC_WAL set.
> To fix, we could do a few things:
>  * Simple and comprehensive would be always queuing a sync, even if ASYNC_WAL 
> is set but we let go of Handlers as soon as we write the memstore -- we don't 
> wait on the sync to complete as we do with the default setting of 
> Durability=SYNC_WAL.
>  * Be like a 'real' database and add in a sync after N bytes of data have 
> been appended (configurable) or after M milliseconds have passed, which ever 
> threshold happens first. The size check would be easy. The sync-every-M-millis 
> would mean another thread.

[jira] [Commented] (HBASE-16671) Split TestExportSnapshot

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515363#comment-15515363
 ] 

Hudson commented on HBASE-16671:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #1656 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1656/])
HBASE-16671 Split TestExportSnapshot (matteo.bertozzi: rev 
07ed15598bcc618df74f51c3c3950c543daea788)
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshotHelpers.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshotNoCluster.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestMobExportSnapshot.java


> Split TestExportSnapshot
> 
>
> Key: HBASE-16671
> URL: https://issues.apache.org/jira/browse/HBASE-16671
> Project: HBase
>  Issue Type: Test
>  Components: snapshots, test
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16671-v0.patch
>
>
> TestExportSnapshot contains 3 types of tests:
>  - MiniCluster: creating a table, taking a snapshot, and running export
>  - Mocked snapshot running export
>  - tool helper tests
> Since everything is currently packed into a single test, types 2 and 3 ended 
> up with a before and after that create a table and take a snapshot which are 
> never used. Move those tests out and cut some time from the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16662) Fix open POODLE vulnerabilities

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515364#comment-15515364
 ] 

Hudson commented on HBASE-16662:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #1656 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1656/])
HBASE-16662 Fix open POODLE vulnerabilities (apurtell: rev 
4b05f40984f06ccd094b5680177ff760c88f81ea)
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/SslRMIClientSocketFactorySecure.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/JMXListener.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/jetty/SslSelectChannelConnectorSecure.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/SslRMIServerSocketFactorySecure.java
* (edit) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
* (edit) hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java


> Fix open POODLE vulnerabilities
> ---
>
> Key: HBASE-16662
> URL: https://issues.apache.org/jira/browse/HBASE-16662
> Project: HBase
>  Issue Type: Bug
>  Components: REST, Thrift
>Reporter: Ben Lau
>Assignee: Ben Lau
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 0.98.23, 1.2.4
>
> Attachments: HBASE-16662-master.patch
>
>
> We recently found a security issue in our HBase REST servers.  The issue is a 
> variant of the POODLE vulnerability (https://en.wikipedia.org/wiki/POODLE) 
> and is present in the HBase Thrift server as well.  It also appears to affect 
> the JMXListener coprocessor.  The vulnerabilities probably affect all 
> versions of HBase that have the affected services.  (If you don't use the 
> affected services with SSL then this ticket probably doesn't affect you).
> Included is a patch to fix the known POODLE vulnerabilities in master.  Let 
> us know if we missed any.  From our end we only personally encountered the 
> HBase REST vulnerability.  We do not use the Thrift server or JMXListener 
> coprocessor but discovered those problems after discussing the issue with 
> some of the HBase PMCs.
> Coincidentally, Hadoop recently committed a SslSelectChannelConnectorSecure 
> which is more or less the same as one of the fixes in this patch.  Hadoop 
> wasn't originally affected by the vulnerability in the 
> SslSelectChannelConnector, but about a month ago they committed HADOOP-12765 
> which does use that class, so they added a SslSelectChannelConnectorSecure 
> class similar to this patch.  Since this class is present in Hadoop 2.7.4+ 
> which hasn't been released yet, we will for now just include our own version 
> instead of depending on the Hadoop version.
> After the patch is approved for master we can backport as necessary to older 
> versions of HBase.
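The general shape of the fix in the patch above is to stop accepting the SSLv3 handshake that POODLE abuses, allowing only TLS on the listening socket. A hedged illustration of the same idea using the Python standard library (not the Java classes in the patch):

```python
import ssl

# Build a server-side TLS context. PROTOCOL_TLS_SERVER already excludes
# SSLv2/SSLv3 on modern Python; pinning minimum_version makes the intent
# explicit, mirroring what the *Secure socket-factory classes do for Java.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3 (and old TLS)

print(ctx.minimum_version.name)  # TLSv1_2
```

Any client attempting an SSLv3 handshake against a socket wrapped with this context will fail during negotiation rather than fall back to the vulnerable protocol.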



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16684) The get() requests does not see locally buffered put() requests when autoflush is disabled

2016-09-22 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515361#comment-15515361
 ] 

ramkrishna.s.vasudevan commented on HBASE-16684:


Is this a bug or is it a valid case? In the case of puts that get buffered, if 
the client dies, those puts are not considered successful (as autoflush is 
false). In that case any get() won't return those results either, correct?

> The get() requests does not see locally buffered put() requests when 
> autoflush is disabled
> --
>
> Key: HBASE-16684
> URL: https://issues.apache.org/jira/browse/HBASE-16684
> Project: HBase
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Priority: Minor
>
> When autoflush is disabled the put() requests are buffered locally.
> Subsequent get() requests on the same host will always go to the network and 
> will not see the updates that are buffered locally.
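The behavior in the description can be modeled with a minimal sketch (hypothetical names, not the real HBase client API): with autoflush disabled the put lands in a client-side buffer, while get always goes over the network to the server, which has not yet seen the buffered mutation.

```python
class ToyServer:
    def __init__(self):
        self.store = {}

class ToyTable:
    """Client with a local write buffer, as when autoflush is disabled."""
    def __init__(self, server):
        self.server = server
        self.write_buffer = {}

    def put(self, row, value):
        self.write_buffer[row] = value  # buffered locally, not sent yet

    def get(self, row):
        # Always goes to the "network": the local buffer is never consulted.
        return self.server.store.get(row)

    def flush(self):
        self.server.store.update(self.write_buffer)
        self.write_buffer.clear()

server = ToyServer()
table = ToyTable(server)
table.put("r1", "v1")
print(table.get("r1"))  # None -- the put is still only in the local buffer
table.flush()
print(table.get("r1"))  # v1 -- visible once flushed to the server
```

This also illustrates ramkrishna's point: until flush() succeeds, the buffered put is not durable anywhere, so a get() that misses it is arguably consistent with the write not yet being acknowledged.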



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-16689) Durability == ASYNC_WAL means no SYNC

2016-09-22 Thread stack (JIRA)
stack created HBASE-16689:
-

 Summary: Durability == ASYNC_WAL means no SYNC
 Key: HBASE-16689
 URL: https://issues.apache.org/jira/browse/HBASE-16689
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 1.2.3, 1.1.6, 1.0.3
Reporter: stack
Assignee: stack
Priority: Critical


Setting DURABILITY=ASYNC_WAL on a Table suspends all syncs for all Table 
appends. If all tables on a cluster have this setting, data is flushed from the 
RS to the DN at some arbitrary time and a bunch may just hang out in DFSClient 
buffers on the RS-side indefinitely if writes are sporadic, at least until 
there is a WAL roll -- a log roll sends a sync through the write pipeline to 
flush out any outstanding appends -- or a region close, which does similar, 
or we crash and drop the data in the RS buffers.

This is probably not what a user expects when they set ASYNC_WAL (We don't doc 
anywhere that I could find clearly what ASYNC_WAL means). Worse, old-time users 
probably associate ASYNC_WAL and DEFERRED_FLUSH, an old HTableDescriptor config 
that was deprecated and replaced by ASYNC_WAL. DEFERRED_FLUSH ran a background 
thread -- LogSyncer -- that, on a configurable interval, sent a sync down the 
write pipeline so any outstanding appends since the last interval start get 
pushed out to the DN.  ASYNC_WAL doesn't do this (see below for history on how 
we let go of the LogSyncer feature).

Of note, we always sync meta edits. You can't turn this off. Also, given WALs 
are per regionserver, if other regions on the RS are from tables that have sync 
set, these writes will push out to the DN any appends done on tables that have 
DEFERRED/ASYNC_WAL set.

To fix, we could do a few things:

 * Simple and comprehensive would be always queuing a sync, even if ASYNC_WAL 
is set but we let go of Handlers as soon as we write the memstore -- we don't 
wait on the sync to complete as we do with the default setting of 
Durability=SYNC_WAL.
 * Be like a 'real' database and add in a sync after N bytes of data have been 
appended (configurable) or after M milliseconds have passed, whichever 
threshold happens first. The size check would be easy. The sync-every-M-millis 
would mean another thread.
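The second option can be sketched as a simple trigger. This is illustrative only, not from any HBase patch, and it enforces the time bound only on the next append; as noted above, a real implementation would need a background thread to sync when writes stop arriving.

```python
import time

class ThresholdSyncer:
    """Queue a sync once N bytes have been appended or M ms have elapsed
    since the last sync -- a sketch of the size-or-time proposal above."""
    def __init__(self, sync_fn, max_bytes=64 * 1024, max_ms=1000):
        self.sync_fn = sync_fn
        self.max_bytes = max_bytes
        self.max_ms = max_ms
        self.pending_bytes = 0
        self.last_sync = time.monotonic()

    def on_append(self, nbytes):
        self.pending_bytes += nbytes
        elapsed_ms = (time.monotonic() - self.last_sync) * 1000
        if self.pending_bytes >= self.max_bytes or elapsed_ms >= self.max_ms:
            self.sync_fn()                 # push buffered edits to the DN
            self.pending_bytes = 0
            self.last_sync = time.monotonic()

syncs = []
s = ThresholdSyncer(lambda: syncs.append("sync"), max_bytes=100, max_ms=10_000)
s.on_append(60)    # below both thresholds: no sync yet
s.on_append(60)    # 120 pending bytes >= 100: a sync is queued
print(len(syncs))  # 1
```

This bounds the window of unsynced data for ASYNC_WAL tables to roughly max_bytes or max_ms, rather than leaving it open until the next WAL roll.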

Let me take a look and report back. Will file a bit of history on how we got 
here in next comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16662) Fix open POODLE vulnerabilities

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515327#comment-15515327
 ] 

Hudson commented on HBASE-16662:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK7 #19 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/19/])
HBASE-16662 Fix open POODLE vulnerabilities (apurtell: rev 
73d4edbfa122c3d6c592a43171bacb61a7c69ca8)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/JMXListener.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/SslRMIClientSocketFactorySecure.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/SslRMIServerSocketFactorySecure.java
* (edit) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
* (edit) hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/jetty/SslSelectChannelConnectorSecure.java


> Fix open POODLE vulnerabilities
> ---
>
> Key: HBASE-16662
> URL: https://issues.apache.org/jira/browse/HBASE-16662
> Project: HBase
>  Issue Type: Bug
>  Components: REST, Thrift
>Reporter: Ben Lau
>Assignee: Ben Lau
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 0.98.23, 1.2.4
>
> Attachments: HBASE-16662-master.patch
>
>
> We recently found a security issue in our HBase REST servers.  The issue is a 
> variant of the POODLE vulnerability (https://en.wikipedia.org/wiki/POODLE) 
> and is present in the HBase Thrift server as well.  It also appears to affect 
> the JMXListener coprocessor.  The vulnerabilities probably affect all 
> versions of HBase that have the affected services.  (If you don't use the 
> affected services with SSL then this ticket probably doesn't affect you).
> Included is a patch to fix the known POODLE vulnerabilities in master.  Let 
> us know if we missed any.  From our end we only personally encountered the 
> HBase REST vulnerability.  We do not use the Thrift server or JMXListener 
> coprocessor but discovered those problems after discussing the issue with 
> some of the HBase PMCs.
> Coincidentally, Hadoop recently committed a SslSelectChannelConnectorSecure 
> which is more or less the same as one of the fixes in this patch.  Hadoop 
> wasn't originally affected by the vulnerability in the 
> SslSelectChannelConnector, but about a month ago they committed HADOOP-12765 
> which does use that class, so they added a SslSelectChannelConnectorSecure 
> class similar to this patch.  Since this class is present in Hadoop 2.7.4+ 
> which hasn't been released yet, we will for now just include our own version 
> instead of depending on the Hadoop version.
> After the patch is approved for master we can backport as necessary to older 
> versions of HBase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16423) Add re-compare option to VerifyReplication to avoid occasional inconsistent rows

2016-09-22 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16423:
---
 Assignee: Jianwei Cui
 Hadoop Flags: Reviewed
Fix Version/s: 1.4.0
   2.0.0

Test failures were not related to the patch.

> Add re-compare option to VerifyReplication to avoid occasional inconsistent 
> rows
> 
>
> Key: HBASE-16423
> URL: https://issues.apache.org/jira/browse/HBASE-16423
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Assignee: Jianwei Cui
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16423-v1.patch, HBASE-16423-v2.patch
>
>
> Because replication is eventually consistent, VerifyReplication may 
> report inconsistent rows if data is being written to the source or peer 
> clusters during scanning. These occasionally inconsistent rows will have the 
> same data if we do the comparison again after a short period. It is not easy 
> to find the genuinely inconsistent rows if VerifyReplication reports a large 
> number of such occasional inconsistencies. To avoid this, we can add an 
> option to make VerifyReplication read the inconsistent rows again after 
> sleeping a few seconds and re-compare them during scanning. This behavior 
> follows the eventual consistency of HBase's replication. Suggestions and 
> discussion are welcome.
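The proposed option amounts to: on a mismatch, sleep briefly and compare once more before reporting. A sketch with hypothetical helper names (the real patch works inside VerifyReplication's MapReduce verifier):

```python
import time

def rows_match(fetch_source, fetch_peer, row, recompare_sleep_s=0.0):
    """Compare one row across clusters; on mismatch, optionally sleep and
    re-fetch once so transiently inconsistent rows are not reported."""
    if fetch_source(row) == fetch_peer(row):
        return True
    if recompare_sleep_s > 0:
        time.sleep(recompare_sleep_s)
        return fetch_source(row) == fetch_peer(row)
    return False

# Simulate a row that is behind on the peer but arrives before the re-compare.
source = {"r1": "v1"}
calls = {"n": 0}
def fetch_peer(row):
    calls["n"] += 1
    if calls["n"] == 1:
        return None            # first read: replication hasn't caught up
    return source.get(row)     # later reads: the row has been replicated

result = rows_match(source.get, fetch_peer, "r1", recompare_sleep_s=0.01)
print(result)  # True -- the transient mismatch is not reported
```

Without the re-compare (recompare_sleep_s=0) the same row would be flagged as inconsistent even though replication simply had not caught up yet.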



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16604) Scanner retries on IOException can cause the scans to miss data

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515306#comment-15515306
 ] 

Hudson commented on HBASE-16604:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK8 #21 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/21/])
HBASE-16604 Scanner retries on IOException can cause the scans to miss (enis: 
rev d600e8b70e10281ec19e3316ca0fd461d824a018)
* (edit) 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSource.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java
* (add) 
hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/ScannerResetException.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduceBase.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServer.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatTestBase.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallable.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTableSnapshotScanner.java
* (edit) 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultithreadedTableMapper.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/UnknownScannerException.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/DelegatingKeyValueScanner.java


> Scanner retries on IOException can cause the scans to miss data 
> 
>
> Key: HBASE-16604
> URL: https://issues.apache.org/jira/browse/HBASE-16604
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Scanners
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>
> Attachments: hbase-16604_v1.patch, hbase-16604_v2.patch, 
> hbase-16604_v3.branch-1.patch, hbase-16604_v3.patch
>
>
> Debugging an ITBLL failure, where the Verify did not "see" all the data in 
> the cluster, I've noticed that if we end up getting a generic IOException 
> from the HFileReader level, we may end up missing the rest of the data in the 
> region. I was able to manually test this, and this stack trace helps to 
> understand what is going on: 
> {code}
> 2016-09-09 16:27:15,633 INFO  [hconnection-0x71ad3d8a-shared--pool21-t9] 
> client.ScannerCallable(376): Open scanner=1 for 
> scan={"loadColumnFamiliesOnDemand":null,"startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":1,"maxResultSize":2097152,"families":{"testFamily":["testFamily"]},"caching":100,"maxVersions":1,"timeRange":[0,9223372036854775807]}
>  on region 
> region=testScanThrowsException,,1473463632707.b2adfb618e5d0fe225c1dc40c0eabfee.,
>  hostname=hw10676,51833,1473463626529, seqNum=2
> 2016-09-09 16:27:15,634 INFO  
> [B.fifo.QRpcServer.handler=5,queue=0,port=51833] 
> regionserver.RSRpcServices(2196): scan request:scanner_id: 1 number_of_rows: 
> 100 close_scanner: false next_call_seq: 0 client_handles_partials: true 
> client_handles_heartbeats: true renew: false
> 2016-09-09 16:27:15,635 INFO  
> [B.fifo.QRpcServer.handler=5,queue=0,port=51833] 
> regionserver.RSRpcServices(2510): Rolling back next call seqId
> 2016-09-09 16:27:15,635 INFO  
> [B.fifo.QRpcServer.handler=5,queue=0,port=51833] 
> regionserver.RSRpcServices(2565): Throwing new 
> ServiceExceptionjava.io.IOException: Could not reseek 
> StoreFileScanner[HFileScanner for reader 
> reader=hdfs://localhost:51795/user/enis/test-data/d6fb1c70-93c1-4099-acb7-5723fc05a737/data/default/testScanThrowsException/b2adfb618e5d0fe225c1dc40c0eabfee/testFamily/5a213cc23b714e5e8e1a140ebbe72f2c,
>  compression=none, cacheConf=blockCache=LruBlockCache{blockCount=0, 
> currentSize=1567264, freeSize=1525578848, maxSize=1527146112, 
> heapSize=1567264, minSize=1450788736, minFactor=0.95, multiSize=725394368, 
> multiFactor=0.5, singleSize=362697184, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false, firstKey=aaa/testFamily:testFamily/1473463633859/Put, 
> lastKey=zzz/testFamily:testFamily/1473463634271/Put, avgKeyLen=35, 
> avgValueLen=3, entries=17576, length=866998, 
> 

[jira] [Commented] (HBASE-15921) Add first AsyncTable impl and create TableImpl based on it

2016-09-22 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515277#comment-15515277
 ] 

Duo Zhang commented on HBASE-15921:
---

And I think the refactoring work on zk could be done in another issue, and also 
we should fix the findbugs warnings. We should not write to a static field in a 
non-static method.

> Add first AsyncTable impl and create TableImpl based on it
> --
>
> Key: HBASE-15921
> URL: https://issues.apache.org/jira/browse/HBASE-15921
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Jurriaan Mous
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15921-v2.patch, HBASE-15921.demo.patch, 
> HBASE-15921.patch, HBASE-15921.v1.patch
>
>
> First we create an AsyncTable interface with implementation without the Scan 
> functionality. Those will land in a separate patch since they need a refactor 
> of existing scans.
> Also added is a new TableImpl to replace HTable. It uses the AsyncTableImpl 
> internally and should be a bit faster because it does jump through less hoops 
> to do ProtoBuf transportation. This way we can run all existing tests on the 
> AsyncTableImpl to guarantee its quality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15921) Add first AsyncTable impl and create TableImpl based on it

2016-09-22 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515258#comment-15515258
 ] 

Duo Zhang commented on HBASE-15921:
---

Let's talk about the retry and timeout logic.

In general, I think we should only do retry at one place, which means, we 
always restart from the beginning. We can config different timeout values for 
different stages, but the actual timeout when executing the operation will also 
be limited by the whole operation timeout config. This could simplify our 
logic. For example, we do not need to implement retry logic in rpc(Yeah I have 
just removed the reconnect logic in NettyRpcConnection). And it is also much 
easier for us to implement the backoff logic.

What do you think? [~stack] [~carp84] [~chenheng].

Thanks.
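The proposed model -- retry only at the top, restarting from the beginning, with each stage's timeout capped by the time remaining for the whole operation, plus backoff between attempts -- can be sketched like this (illustrative only, not the AsyncTable code):

```python
import time

class OperationTimeoutError(Exception):
    pass

def call_with_retries(stages, operation_timeout_s, pause_s=0.01):
    """Run all stages from the beginning on every attempt. Each stage gets
    min(its own timeout, time left for the whole operation)."""
    deadline = time.monotonic() + operation_timeout_s
    attempt = 0
    while True:
        try:
            result = None
            for stage_fn, stage_timeout_s in stages:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    raise OperationTimeoutError("operation timeout exceeded")
                result = stage_fn(timeout=min(stage_timeout_s, remaining))
            return result
        except OperationTimeoutError:
            raise
        except Exception:
            attempt += 1
            if time.monotonic() + pause_s * attempt > deadline:
                raise OperationTimeoutError("no time left to retry")
            time.sleep(pause_s * attempt)   # simple linear backoff

# A locate-then-rpc pipeline whose rpc fails once, then succeeds:
state = {"fails": 1}
def locate(timeout):
    return "region@rs1"
def rpc(timeout):
    if state["fails"] > 0:
        state["fails"] -= 1
        raise ConnectionError("rs died")
    return "row-value"

r = call_with_retries([(locate, 1.0), (rpc, 1.0)], operation_timeout_s=5)
print(r)  # row-value
```

Because the retry loop re-runs locate as well as rpc, a stale region location is naturally refreshed on each attempt, and no inner layer needs its own reconnect logic.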

> Add first AsyncTable impl and create TableImpl based on it
> --
>
> Key: HBASE-15921
> URL: https://issues.apache.org/jira/browse/HBASE-15921
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Jurriaan Mous
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-15921-v2.patch, HBASE-15921.demo.patch, 
> HBASE-15921.patch, HBASE-15921.v1.patch
>
>
> First we create an AsyncTable interface with implementation without the Scan 
> functionality. Those will land in a separate patch since they need a refactor 
> of existing scans.
> Also added is a new TableImpl to replace HTable. It uses the AsyncTableImpl 
> internally and should be a bit faster because it does jump through less hoops 
> to do ProtoBuf transportation. This way we can run all existing tests on the 
> AsyncTableImpl to guarantee its quality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16677) Add table size (total store file size) to table page

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515249#comment-15515249
 ] 

Hadoop QA commented on HBASE-16677:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 49s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 24s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 133m 43s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestBlockEvictionFromClient |
| Timed out junit tests | 
org.apache.hadoop.hbase.master.procedure.TestCreateTableProcedure |
|   | org.apache.hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures 
|
|   | org.apache.hadoop.hbase.master.snapshot.TestSnapshotFileCache |
|   | 
org.apache.hadoop.hbase.replication.regionserver.TestReplicationWALReaderManager
 |
|   | org.apache.hadoop.hbase.master.procedure.TestRestoreSnapshotProcedure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829966/HBASE-16677_v3.patch |
| JIRA Issue | HBASE-16677 |
| Optional Tests |  asflicense  javac  javadoc  unit  |
| uname | Linux 589aeb7283bf 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 07ed155 |
| Default Java | 1.8.0_101 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3676/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/3676/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3676/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3676/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Add table size (total store file size) to table page
> 
>
> Key: HBASE-16677
> URL: https://issues.apache.org/jira/browse/HBASE-16677
> Project: HBase
>  Issue Type: New Feature
>  Components: website
>Reporter: Guang Yang
>Priority: Minor
> Attachments: HBASE-16677_v0.patch, HBASE-16677_v1.patch, 
> 

[jira] [Commented] (HBASE-16677) Add table size (total store file size) to table page

2016-09-22 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515236#comment-15515236
 ] 

Heng Chen commented on HBASE-16677:
---

+1 for patch v3. I don't see the hidden-regions notice in your uploaded pic?
{code}
+  This table has <%= numRegions %> regions in total, in order to 
improve the page load time,
+ only <%= numRegionsRendered %> regions are displayed here, click
+ here to see all regions.
+<% } %>
{code}
I mean this hint in your pic.

> Add table size (total store file size) to table page
> 
>
> Key: HBASE-16677
> URL: https://issues.apache.org/jira/browse/HBASE-16677
> Project: HBase
>  Issue Type: New Feature
>  Components: website
>Reporter: Guang Yang
>Priority: Minor
> Attachments: HBASE-16677_v0.patch, HBASE-16677_v1.patch, 
> HBASE-16677_v2.patch, HBASE-16677_v3.patch, mini_cluster_master.png, 
> prod_cluster_partial.png
>
>
> Currently there is no easy way to get the table size from the web UI; though 
> we have the region sizes on the page, it would still be convenient to have a 
> stat for the total table size.
> Another pain point is that when a table grows large, with tens of thousands 
> of regions, the page takes an extremely long time to load, even though 
> sometimes we don't want to check all the regions. An optimization could be to 
> accept a query parameter specifying the number of regions to render.
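The proposed query-parameter optimization can be sketched in plain Java. This is an illustrative toy, not the actual table page template code; all names here are hypothetical:

```java
import java.util.Arrays;
import java.util.List;

public class Main {
    // numRequested == null models the query parameter being absent,
    // in which case every region is rendered as before.
    static List<String> regionsToRender(List<String> regions, Integer numRequested) {
        int n = (numRequested == null)
                ? regions.size()
                : Math.min(numRequested, regions.size());
        return regions.subList(0, n);
    }

    public static void main(String[] args) {
        List<String> regions = Arrays.asList("r1", "r2", "r3", "r4");
        System.out.println(regionsToRender(regions, 2));    // capped rendering
        System.out.println(regionsToRender(regions, null)); // default: all regions
    }
}
```

The page would then display a "showing N of M regions" notice whenever the cap is in effect.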



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16680) Reduce garbage in BufferChain

2016-09-22 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515224#comment-15515224
 ] 

Yu Li commented on HBASE-16680:
---

[~ashish singhi] I happened to find a typo in the JIRA number of the commit 
message (it should be 16680 rather than 16880), so I just reverted and 
resubmitted it, FYI :-)
{noformat}
commit ce493642c0e295a08701cdcfe3ddc6755cdd7718
Author: Ashish Singhi 
Date:   Thu Sep 22 13:59:18 2016 +0530

HBASE-16880 Reduce garbage in BufferChain (binlijin)
{noformat}

> Reduce garbage in BufferChain
> -
>
> Key: HBASE-16680
> URL: https://issues.apache.org/jira/browse/HBASE-16680
> Project: HBase
>  Issue Type: Improvement
>Reporter: binlijin
>Assignee: binlijin
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16680-master.patch, HBASE-16680-master_v2.patch
>
>
> BufferChain accepts a List and then converts it to a ByteBuffer[]; we can 
> produce the ByteBuffer[] directly and hand it to BufferChain, eliminating 
> the intermediate List object.
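The garbage saving can be sketched in plain Java. This is an illustrative toy, not the actual HBase BufferChain code; method names are hypothetical:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Before: callers build a List of buffers that is only ever copied
    // into an array -- one throwaway ArrayList per RPC response.
    static ByteBuffer[] viaList(ByteBuffer header, ByteBuffer body) {
        List<ByteBuffer> buffers = new ArrayList<>();
        buffers.add(header);
        buffers.add(body);
        return buffers.toArray(new ByteBuffer[0]);
    }

    // After: produce the ByteBuffer[] directly, no intermediate List.
    static ByteBuffer[] direct(ByteBuffer header, ByteBuffer body) {
        return new ByteBuffer[] { header, body };
    }

    public static void main(String[] args) {
        ByteBuffer h = ByteBuffer.allocate(8);
        ByteBuffer b = ByteBuffer.allocate(32);
        // Both paths yield the same array; only the allocation differs.
        System.out.println(viaList(h, b).length == 2 && direct(h, b).length == 2);
    }
}
```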





[jira] [Commented] (HBASE-16682) Fix Shell tests failure. NoClassDefFoundError for MiniKdc

2016-09-22 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515178#comment-15515178
 ] 

Guanghao Zhang commented on HBASE-16682:


After HBASE-14734, HBaseTestingUtility uses the MiniKdc class, which is also 
used in the hbase-rsgroup and hbase-thrift packages. But the unit tests in 
hbase-rsgroup and hbase-thrift didn't have this problem, so it is not a common 
problem; maybe we can fix it only in the hbase-shell pom? 

> Fix Shell tests failure. NoClassDefFoundError for MiniKdc
> -
>
> Key: HBASE-16682
> URL: https://issues.apache.org/jira/browse/HBASE-16682
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-16682.master.001.patch, 
> HBASE-16682.master.002.patch
>
>
> Stacktrace
> {noformat}
> java.lang.NoClassDefFoundError: org/apache/hadoop/minikdc/MiniKdc
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
>   at java.lang.Class.getDeclaredMethods(Class.java:1975)
>   at org.jruby.javasupport.JavaClass.getMethods(JavaClass.java:2110)
>   at org.jruby.javasupport.JavaClass.setupClassMethods(JavaClass.java:955)
>   at org.jruby.javasupport.JavaClass.access$700(JavaClass.java:99)
>   at 
> org.jruby.javasupport.JavaClass$ClassInitializer.initialize(JavaClass.java:650)
>   at org.jruby.javasupport.JavaClass.setupProxy(JavaClass.java:689)
>   at org.jruby.javasupport.Java.createProxyClass(Java.java:526)
>   at org.jruby.javasupport.Java.getProxyClass(Java.java:455)
>   at org.jruby.javasupport.Java.getInstance(Java.java:364)
>   at 
> org.jruby.javasupport.JavaUtil.convertJavaToUsableRubyObject(JavaUtil.java:166)
>   at 
> org.jruby.javasupport.JavaEmbedUtils.javaToRuby(JavaEmbedUtils.java:291)
>   at 
> org.jruby.embed.variable.AbstractVariable.updateByJavaObject(AbstractVariable.java:81)
>   at 
> org.jruby.embed.variable.GlobalVariable.(GlobalVariable.java:69)
>   at 
> org.jruby.embed.variable.GlobalVariable.getInstance(GlobalVariable.java:60)
>   at 
> org.jruby.embed.variable.VariableInterceptor.getVariableInstance(VariableInterceptor.java:97)
>   at org.jruby.embed.internal.BiVariableMap.put(BiVariableMap.java:321)
>   at org.jruby.embed.ScriptingContainer.put(ScriptingContainer.java:1123)
>   at 
> org.apache.hadoop.hbase.client.AbstractTestShell.setUpBeforeClass(AbstractTestShell.java:61)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:108)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:78)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:54)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:144)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.ClassNotFoundException: 

[jira] [Commented] (HBASE-16662) Fix open POODLE vulnerabilities

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515164#comment-15515164
 ] 

Hudson commented on HBASE-16662:


FAILURE: Integrated in Jenkins build HBase-1.4 #426 (See 
[https://builds.apache.org/job/HBase-1.4/426/])
HBASE-16662 Fix open POODLE vulnerabilities (apurtell: rev 
69733040263d48a195d179c2390325a69416200f)
* (edit) hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/jetty/SslSelectChannelConnectorSecure.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/SslRMIClientSocketFactorySecure.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/JMXListener.java
* (edit) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/SslRMIServerSocketFactorySecure.java


> Fix open POODLE vulnerabilities
> ---
>
> Key: HBASE-16662
> URL: https://issues.apache.org/jira/browse/HBASE-16662
> Project: HBase
>  Issue Type: Bug
>  Components: REST, Thrift
>Reporter: Ben Lau
>Assignee: Ben Lau
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 0.98.23, 1.2.4
>
> Attachments: HBASE-16662-master.patch
>
>
> We recently found a security issue in our HBase REST servers.  The issue is a 
> variant of the POODLE vulnerability (https://en.wikipedia.org/wiki/POODLE) 
> and is present in the HBase Thrift server as well.  It also appears to affect 
> the JMXListener coprocessor.  The vulnerabilities probably affect all 
> versions of HBase that have the affected services.  (If you don't use the 
> affected services with SSL then this ticket probably doesn't affect you).
> Included is a patch to fix the known POODLE vulnerabilities in master.  Let 
> us know if we missed any.  From our end we only personally encountered the 
> HBase REST vulnerability.  We do not use the Thrift server or JMXListener 
> coprocessor but discovered those problems after discussing the issue with 
> some of the HBase PMCs.
> Coincidentally, Hadoop recently committed a SslSelectChannelConnectorSecure 
> which is more or less the same as one of the fixes in this patch.  Hadoop 
> wasn't originally affected by the vulnerability in the 
> SslSelectChannelConnector, but about a month ago they committed HADOOP-12765 
> which does use that class, so they added a SslSelectChannelConnectorSecure 
> class similar to this patch.  Since this class is present in Hadoop 2.7.4+ 
> which hasn't been released yet, we will for now just include our own version 
> instead of depending on the Hadoop version.
> After the patch is approved for master we can backport as necessary to older 
> versions of HBase.
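The fix pattern behind these "Secure" subclasses is to strip SSLv3 (and the SSLv2Hello handshake) from the protocols the engine enables by default. A minimal, self-contained sketch of that filtering step follows; in the real classes the resulting array would be handed back via SSLEngine.setEnabledProtocols(...), and the class and method names here are illustrative, not the actual patch code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Main {
    // Drop the POODLE-vulnerable protocols from the default set.
    static String[] stripInsecureProtocols(String[] enabled) {
        List<String> kept = new ArrayList<>();
        for (String p : enabled) {
            if (!"SSLv3".equals(p) && !"SSLv2Hello".equals(p)) {
                kept.add(p);
            }
        }
        return kept.toArray(new String[0]);
    }

    public static void main(String[] args) {
        // A typical pre-filter protocol list from an older JDK.
        String[] defaults = { "SSLv2Hello", "SSLv3", "TLSv1", "TLSv1.1", "TLSv1.2" };
        System.out.println(Arrays.toString(stripInsecureProtocols(defaults)));
    }
}
```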





[jira] [Commented] (HBASE-15315) Remove always set super user call as high priority

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515149#comment-15515149
 ] 

Hudson commented on HBASE-15315:


FAILURE: Integrated in Jenkins build HBase-1.2-JDK7 #30 (See 
[https://builds.apache.org/job/HBase-1.2-JDK7/30/])
HBASE-15315 Remove always set super user call as high priority (Yong (apurtell: 
rev 32a7f2c4b80b4d5f05889f1669aa1538450b6d7a)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/AnnotationReadingPriorityFunction.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestPriorityRpc.java


> Remove always set super user call as high priority
> --
>
> Key: HBASE-15315
> URL: https://issues.apache.org/jira/browse/HBASE-15315
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.4
>
> Attachments: HBASE-15315.001.patch
>
>
> The current implementation sets superuser calls as ADMIN_QOS, but we have 
> many customers that use the superuser to do normal table operations such as 
> putting/getting data. If a client puts a lot of data during region 
> assignment, RPCs from the HMaster may time out because no handler is 
> available, so it is better to stop always setting superuser calls as high 
> priority. 
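The change can be sketched as: derive the priority from the method itself rather than from the caller's identity. This is an illustrative toy, not the actual AnnotationReadingPriorityFunction code; names and the admin-method check are hypothetical:

```java
public class Main {
    static final int ADMIN_QOS = 100;
    static final int NORMAL_QOS = 0;

    // Before: any call from a superuser was forced to ADMIN_QOS.
    // After: only genuinely administrative methods get the high priority,
    // so a superuser doing bulk puts competes like an ordinary client
    // and cannot starve master RPCs of handlers.
    static int getPriority(String methodName, boolean callerIsSuperuser) {
        boolean adminMethod = methodName.startsWith("assign")
                || methodName.startsWith("unassign");
        return adminMethod ? ADMIN_QOS : NORMAL_QOS;
    }

    public static void main(String[] args) {
        System.out.println(getPriority("put", true));           // superuser put -> normal
        System.out.println(getPriority("assignRegion", false)); // admin op -> high
    }
}
```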





[jira] [Commented] (HBASE-16662) Fix open POODLE vulnerabilities

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515150#comment-15515150
 ] 

Hudson commented on HBASE-16662:


FAILURE: Integrated in Jenkins build HBase-1.2-JDK7 #30 (See 
[https://builds.apache.org/job/HBase-1.2-JDK7/30/])
HBASE-16662 Fix open POODLE vulnerabilities (apurtell: rev 
e382b2c9f48cd896d525025c3965fa252f344e08)
* (edit) hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/SslRMIServerSocketFactorySecure.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/jetty/SslSelectChannelConnectorSecure.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/JMXListener.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/SslRMIClientSocketFactorySecure.java
* (edit) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java


> Fix open POODLE vulnerabilities
> ---
>
> Key: HBASE-16662
> URL: https://issues.apache.org/jira/browse/HBASE-16662
> Project: HBase
>  Issue Type: Bug
>  Components: REST, Thrift
>Reporter: Ben Lau
>Assignee: Ben Lau
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 0.98.23, 1.2.4
>
> Attachments: HBASE-16662-master.patch
>
>





[jira] [Updated] (HBASE-14123) HBase Backup/Restore Phase 2

2016-09-22 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14123:
---
Attachment: 14123-master.v25.txt

Patch v25 syncs up to commit 9952b134fc26b3e4745b2a93afaddb12565975a7

> HBase Backup/Restore Phase 2
> 
>
> Key: HBASE-14123
> URL: https://issues.apache.org/jira/browse/HBASE-14123
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: 14123-master.v14.txt, 14123-master.v15.txt, 
> 14123-master.v16.txt, 14123-master.v17.txt, 14123-master.v18.txt, 
> 14123-master.v19.txt, 14123-master.v2.txt, 14123-master.v20.txt, 
> 14123-master.v21.txt, 14123-master.v24.txt, 14123-master.v25.txt, 
> 14123-master.v3.txt, 14123-master.v5.txt, 14123-master.v6.txt, 
> 14123-master.v7.txt, 14123-master.v8.txt, 14123-master.v9.txt, 14123-v14.txt, 
> HBASE-14123-for-7912-v1.patch, HBASE-14123-for-7912-v6.patch, 
> HBASE-14123-v1.patch, HBASE-14123-v10.patch, HBASE-14123-v11.patch, 
> HBASE-14123-v12.patch, HBASE-14123-v13.patch, HBASE-14123-v15.patch, 
> HBASE-14123-v16.patch, HBASE-14123-v2.patch, HBASE-14123-v3.patch, 
> HBASE-14123-v4.patch, HBASE-14123-v5.patch, HBASE-14123-v6.patch, 
> HBASE-14123-v7.patch, HBASE-14123-v9.patch
>
>
> Phase 2 umbrella JIRA. See HBASE-7912 for design document and description. 





[jira] [Commented] (HBASE-14123) HBase Backup/Restore Phase 2

2016-09-22 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515119#comment-15515119
 ] 

Ted Yu commented on HBASE-14123:


Ran the following tests with patch v24 locally:
{code}
Running org.apache.hadoop.hbase.regionserver.TestCompactionState
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 89.159 sec - in 
org.apache.hadoop.hbase.regionserver.TestCompactionState
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
support was removed in 8.0
Running 
org.apache.hadoop.hbase.regionserver.TestHRegionServerBulkLoadWithOldClient
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.288 sec - 
in org.apache.hadoop.hbase.regionserver.TestHRegionServerBulkLoadWithOldClient
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
support was removed in 8.0
Running org.apache.hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush
Tests run: 104, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.693 sec - 
in org.apache.hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush
Running org.apache.hadoop.hbase.regionserver.TestRegionReplicaFailover
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 88.745 sec - in 
org.apache.hadoop.hbase.regionserver.TestRegionReplicaFailover
{code}

> HBase Backup/Restore Phase 2
> 
>
> Key: HBASE-14123
> URL: https://issues.apache.org/jira/browse/HBASE-14123
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: 14123-master.v14.txt, 14123-master.v15.txt, 
> 14123-master.v16.txt, 14123-master.v17.txt, 14123-master.v18.txt, 
> 14123-master.v19.txt, 14123-master.v2.txt, 14123-master.v20.txt, 
> 14123-master.v21.txt, 14123-master.v24.txt, 14123-master.v3.txt, 
> 14123-master.v5.txt, 14123-master.v6.txt, 14123-master.v7.txt, 
> 14123-master.v8.txt, 14123-master.v9.txt, 14123-v14.txt, 
> HBASE-14123-for-7912-v1.patch, HBASE-14123-for-7912-v6.patch, 
> HBASE-14123-v1.patch, HBASE-14123-v10.patch, HBASE-14123-v11.patch, 
> HBASE-14123-v12.patch, HBASE-14123-v13.patch, HBASE-14123-v15.patch, 
> HBASE-14123-v16.patch, HBASE-14123-v2.patch, HBASE-14123-v3.patch, 
> HBASE-14123-v4.patch, HBASE-14123-v5.patch, HBASE-14123-v6.patch, 
> HBASE-14123-v7.patch, HBASE-14123-v9.patch
>
>
> Phase 2 umbrella JIRA. See HBASE-7912 for design document and description. 





[jira] [Updated] (HBASE-16678) MapReduce jobs do not update counters from ScanMetrics

2016-09-22 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-16678:
--
Status: Patch Available  (was: Open)

> MapReduce jobs do not update counters from ScanMetrics
> --
>
> Key: HBASE-16678
> URL: https://issues.apache.org/jira/browse/HBASE-16678
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>
> Attachments: hbase-16678_v1.patch
>
>
> I was inspecting a perf issue where we needed the scanner metrics as counters 
> for an MR job. It turns out that the HBase scan counters no longer work in 
> 1.0+; I think they were broken by HBASE-13030. 
> These are the counters:
> {code}
>   HBase Counters
>   BYTES_IN_REMOTE_RESULTS=0
>   BYTES_IN_RESULTS=280
>   MILLIS_BETWEEN_NEXTS=11
>   NOT_SERVING_REGION_EXCEPTION=0
>   NUM_SCANNER_RESTARTS=0
>   NUM_SCAN_RESULTS_STALE=0
>   REGIONS_SCANNED=1
>   REMOTE_RPC_CALLS=0
>   REMOTE_RPC_RETRIES=0
>   RPC_CALLS=3
>   RPC_RETRIES=0
> {code}
>  





[jira] [Updated] (HBASE-16678) MapReduce jobs do not update counters from ScanMetrics

2016-09-22 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-16678:
--
Attachment: hbase-16678_v1.patch

v1 patch. 

> MapReduce jobs do not update counters from ScanMetrics
> --
>
> Key: HBASE-16678
> URL: https://issues.apache.org/jira/browse/HBASE-16678
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>
> Attachments: hbase-16678_v1.patch
>
>





[jira] [Updated] (HBASE-16604) Scanner retries on IOException can cause the scans to miss data

2016-09-22 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-16604:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to the rest of 1.1+. The test failures are flaky. 

> Scanner retries on IOException can cause the scans to miss data 
> 
>
> Key: HBASE-16604
> URL: https://issues.apache.org/jira/browse/HBASE-16604
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Scanners
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>
> Attachments: hbase-16604_v1.patch, hbase-16604_v2.patch, 
> hbase-16604_v3.branch-1.patch, hbase-16604_v3.patch
>
>
> Debugging an ITBLL failure, where the Verify did not "see" all the data in 
> the cluster, I've noticed that if we end up getting a generic IOException 
> from the HFileReader level, we may end up missing the rest of the data in the 
> region. I was able to manually test this, and this stack trace helps to 
> understand what is going on: 
> {code}
> 2016-09-09 16:27:15,633 INFO  [hconnection-0x71ad3d8a-shared--pool21-t9] 
> client.ScannerCallable(376): Open scanner=1 for 
> scan={"loadColumnFamiliesOnDemand":null,"startRow":"","stopRow":"","batch":-1,"cacheBlocks":true,"totalColumns":1,"maxResultSize":2097152,"families":{"testFamily":["testFamily"]},"caching":100,"maxVersions":1,"timeRange":[0,9223372036854775807]}
>  on region 
> region=testScanThrowsException,,1473463632707.b2adfb618e5d0fe225c1dc40c0eabfee.,
>  hostname=hw10676,51833,1473463626529, seqNum=2
> 2016-09-09 16:27:15,634 INFO  
> [B.fifo.QRpcServer.handler=5,queue=0,port=51833] 
> regionserver.RSRpcServices(2196): scan request:scanner_id: 1 number_of_rows: 
> 100 close_scanner: false next_call_seq: 0 client_handles_partials: true 
> client_handles_heartbeats: true renew: false
> 2016-09-09 16:27:15,635 INFO  
> [B.fifo.QRpcServer.handler=5,queue=0,port=51833] 
> regionserver.RSRpcServices(2510): Rolling back next call seqId
> 2016-09-09 16:27:15,635 INFO  
> [B.fifo.QRpcServer.handler=5,queue=0,port=51833] 
> regionserver.RSRpcServices(2565): Throwing new 
> ServiceExceptionjava.io.IOException: Could not reseek 
> StoreFileScanner[HFileScanner for reader 
> reader=hdfs://localhost:51795/user/enis/test-data/d6fb1c70-93c1-4099-acb7-5723fc05a737/data/default/testScanThrowsException/b2adfb618e5d0fe225c1dc40c0eabfee/testFamily/5a213cc23b714e5e8e1a140ebbe72f2c,
>  compression=none, cacheConf=blockCache=LruBlockCache{blockCount=0, 
> currentSize=1567264, freeSize=1525578848, maxSize=1527146112, 
> heapSize=1567264, minSize=1450788736, minFactor=0.95, multiSize=725394368, 
> multiFactor=0.5, singleSize=362697184, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false, firstKey=aaa/testFamily:testFamily/1473463633859/Put, 
> lastKey=zzz/testFamily:testFamily/1473463634271/Put, avgKeyLen=35, 
> avgValueLen=3, entries=17576, length=866998, 
> cur=/testFamily:/OLDEST_TIMESTAMP/Minimum/vlen=0/seqid=0] to key 
> /testFamily:testFamily/LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0
> 2016-09-09 16:27:15,635 DEBUG 
> [B.fifo.QRpcServer.handler=5,queue=0,port=51833] ipc.CallRunner(110): 
> B.fifo.QRpcServer.handler=5,queue=0,port=51833: callId: 26 service: 
> ClientService methodName: Scan size: 26 connection: 192.168.42.75:51903
> java.io.IOException: Could not reseek StoreFileScanner[HFileScanner for 
> reader 
> reader=hdfs://localhost:51795/user/enis/test-data/d6fb1c70-93c1-4099-acb7-5723fc05a737/data/default/testScanThrowsException/b2adfb618e5d0fe225c1dc40c0eabfee/testFamily/5a213cc23b714e5e8e1a140ebbe72f2c,
>  compression=none, cacheConf=blockCache=LruBlockCache{blockCount=0, 
> currentSize=1567264, freeSize=1525578848, maxSize=1527146112, 
> heapSize=1567264, minSize=1450788736, minFactor=0.95, multiSize=725394368, 
> multiFactor=0.5, singleSize=362697184, singleFactor=0.25}, 
> cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
> cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
> prefetchOnOpen=false, firstKey=aaa/testFamily:testFamily/1473463633859/Put, 
> lastKey=zzz/testFamily:testFamily/1473463634271/Put, avgKeyLen=35, 
> avgValueLen=3, entries=17576, length=866998, 
> cur=/testFamily:/OLDEST_TIMESTAMP/Minimum/vlen=0/seqid=0] to key 
> /testFamily:testFamily/LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:224)
>   at 
> org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
>

[jira] [Commented] (HBASE-16662) Fix open POODLE vulnerabilities

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515046#comment-15515046
 ] 

Hudson commented on HBASE-16662:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK8 #20 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/20/])
HBASE-16662 Fix open POODLE vulnerabilities (apurtell: rev 
73d4edbfa122c3d6c592a43171bacb61a7c69ca8)
* (edit) hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
* (edit) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/SslRMIServerSocketFactorySecure.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/JMXListener.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/SslRMIClientSocketFactorySecure.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/jetty/SslSelectChannelConnectorSecure.java


> Fix open POODLE vulnerabilities
> ---
>
> Key: HBASE-16662
> URL: https://issues.apache.org/jira/browse/HBASE-16662
> Project: HBase
>  Issue Type: Bug
>  Components: REST, Thrift
>Reporter: Ben Lau
>Assignee: Ben Lau
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 0.98.23, 1.2.4
>
> Attachments: HBASE-16662-master.patch
>
>





[jira] [Commented] (HBASE-16662) Fix open POODLE vulnerabilities

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515037#comment-15515037
 ] 

Hudson commented on HBASE-16662:


FAILURE: Integrated in Jenkins build HBase-1.2-JDK8 #27 (See 
[https://builds.apache.org/job/HBase-1.2-JDK8/27/])
HBASE-16662 Fix open POODLE vulnerabilities (apurtell: rev 
e382b2c9f48cd896d525025c3965fa252f344e08)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/JMXListener.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/SslRMIServerSocketFactorySecure.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/SslRMIClientSocketFactorySecure.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/jetty/SslSelectChannelConnectorSecure.java
* (edit) hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
* (edit) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java


> Fix open POODLE vulnerabilities
> ---
>
> Key: HBASE-16662
> URL: https://issues.apache.org/jira/browse/HBASE-16662
> Project: HBase
>  Issue Type: Bug
>  Components: REST, Thrift
>Reporter: Ben Lau
>Assignee: Ben Lau
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 0.98.23, 1.2.4
>
> Attachments: HBASE-16662-master.patch
>
>





[jira] [Commented] (HBASE-15315) Remove always set super user call as high priority

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15515036#comment-15515036
 ] 

Hudson commented on HBASE-15315:


FAILURE: Integrated in Jenkins build HBase-1.2-JDK8 #27 (See 
[https://builds.apache.org/job/HBase-1.2-JDK8/27/])
HBASE-15315 Remove always set super user call as high priority (Yong (apurtell: 
rev 32a7f2c4b80b4d5f05889f1669aa1538450b6d7a)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestPriorityRpc.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/AnnotationReadingPriorityFunction.java


> Remove always set super user call as high priority
> --
>
> Key: HBASE-15315
> URL: https://issues.apache.org/jira/browse/HBASE-15315
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.4
>
> Attachments: HBASE-15315.001.patch
>
>
> The current implementation sets superuser calls to ADMIN_QOS, but many of our 
> customers use the superuser to do normal table operations such as putting and 
> getting data. If a client writes a lot of data during region assignment, RPCs 
> from the HMaster may time out because no handlers are free, so it is better to 
> stop always setting superuser calls to high priority. 
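The behavioral change can be sketched as follows. This is a hedged, simplified stand-in for the logic in AnnotationReadingPriorityFunction, not the actual patch: the method names checked are hypothetical examples, and the point is only that priority is derived from the method being called rather than unconditionally elevated for superuser callers.

```java
public class PrioritySketch {
    static final int NORMAL_QOS = 0;
    static final int ADMIN_QOS = 100;

    // Before HBASE-15315, any call from a superuser was forced to ADMIN_QOS.
    // After, only genuinely administrative methods keep elevated priority;
    // the superuser flag is no longer consulted.
    public static int getPriority(String method, boolean isSuperUser) {
        boolean adminMethod =
            method.equals("assignRegion") || method.equals("closeRegion");
        return adminMethod ? ADMIN_QOS : NORMAL_QOS;
    }

    public static void main(String[] args) {
        // Superuser data operations now compete with normal traffic.
        System.out.println(getPriority("put", true));           // 0
        // Admin operations keep their elevated priority.
        System.out.println(getPriority("assignRegion", false)); // 100
    }
}
```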





[jira] [Updated] (HBASE-16630) Fragmentation in long running Bucket Cache

2016-09-22 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16630:
---
Attachment: 16630-v3-suggest.patch

Patch with DEBUG log at the end of freeEntireBuckets().

> Fragmentation in long running Bucket Cache
> --
>
> Key: HBASE-16630
> URL: https://issues.apache.org/jira/browse/HBASE-16630
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3
>Reporter: deepankar
>Assignee: deepankar
> Attachments: 16630-v2-suggest.patch, 16630-v3-suggest.patch, 
> HBASE-16630-v2.patch, HBASE-16630-v3.patch, HBASE-16630.patch
>
>
> As we run the bucket cache for a long time in our system, we observe cases 
> where some nodes, after a while, no longer fully utilize the bucket cache; in 
> the worst cases they get stuck at a value < 0.25% of the bucket cache 
> (DEFAULT_MEMORY_FACTOR, as all our tables are configured in-memory for 
> simplicity's sake).
> We took a heap dump, analyzed what is happening, and saw a classic case of 
> fragmentation. The current implementation of BucketCache (mainly 
> BucketAllocator) relies on completely free buckets being available for 
> switching/adjusting cache usage between different bucketSizes. But once a 
> compaction or bulkload happens and blocks are evicted from a bucket size, 
> they are usually evicted from random places within that size's buckets, which 
> pins the number of buckets associated with the bucketSize. In the worst 
> fragmentation we have seen, some bucketSizes had an occupancy ratio below 
> 10% yet had no completelyFreeBuckets to share with the other bucketSizes.
> The existing eviction logic only helps while cache usage exceeds 
> MEMORY_FACTOR or MULTI_FACTOR; once those evictions are done, the eviction 
> (freeSpace function) will not evict anything more, and cache utilization 
> stays stuck at that value with no allocations possible for the other 
> required sizes.
> The fix we came up with is simple: defragment (compact) each bucketSize, 
> thus raising its occupancy ratio and freeing buckets to become fully free. 
> This logic itself is not complicated, since the bucketAllocator takes care 
> of packing blocks into buckets; we need to evict and re-allocate the blocks 
> for all the bucketSizes that don't fit the criteria.
> I am attaching an initial patch just to give an idea of what we are thinking, 
> and I'll improve it based on comments from the community.
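The payoff of the defragmentation idea described above can be illustrated with a toy model. This is not the attached patch's code, just a sketch under simplified assumptions: buckets are modeled as live-block counts, and re-packing moves live blocks into as few buckets as possible so the remainder become completely free and can be handed to other bucket sizes.

```java
import java.util.Arrays;

public class DefragSketch {
    // buckets[i] = number of live blocks in bucket i; each bucket holds
    // `capacity` blocks. After compaction the live blocks occupy
    // ceil(live / capacity) buckets, so the rest become fully free.
    public static int freeBucketsAfterDefrag(int[] buckets, int capacity) {
        int live = Arrays.stream(buckets).sum();
        int needed = (live + capacity - 1) / capacity; // ceil division
        return buckets.length - needed;
    }

    public static void main(String[] args) {
        // Four buckets of capacity 8, each fragmented down to one live block:
        // occupancy is 12.5% but zero buckets are completely free.
        int[] fragmented = {1, 1, 1, 1};
        // Compaction packs the 4 live blocks into one bucket, freeing three.
        System.out.println(freeBucketsAfterDefrag(fragmented, 8)); // 3
    }
}
```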





[jira] [Updated] (HBASE-16677) Add table size (total store file size) to table page

2016-09-22 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-16677:
--
Status: Patch Available  (was: Open)

> Add table size (total store file size) to table page
> 
>
> Key: HBASE-16677
> URL: https://issues.apache.org/jira/browse/HBASE-16677
> Project: HBase
>  Issue Type: New Feature
>  Components: website
>Reporter: Guang Yang
>Priority: Minor
> Attachments: HBASE-16677_v0.patch, HBASE-16677_v1.patch, 
> HBASE-16677_v2.patch, HBASE-16677_v3.patch, mini_cluster_master.png, 
> prod_cluster_partial.png
>
>
> Currently there is no easy way to get the table size from the web UI; though 
> we have the region sizes on the page, it is still convenient to have the 
> size stat for the table as a whole.
> Another pain point is that when a table grows large, with tens of thousands 
> of regions, the page takes an extremely long time to load, yet sometimes we 
> don't want to check all the regions. An optimization could be to accept a 
> query parameter specifying the number of regions to render.
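The proposed query-parameter optimization could look something like the following sketch. The parameter name `numRegions` and the fallback behavior are assumptions for illustration, not the patch's actual interface:

```java
public class RegionRenderLimit {
    // Parse a hypothetical "numRegions" query parameter, falling back to
    // rendering all regions when the parameter is absent or malformed.
    public static int regionsToRender(String param, int totalRegions) {
        try {
            int n = Integer.parseInt(param);
            return (n > 0 && n < totalRegions) ? n : totalRegions;
        } catch (NumberFormatException e) {
            // Integer.parseInt(null) also throws NumberFormatException.
            return totalRegions;
        }
    }

    public static void main(String[] args) {
        System.out.println(regionsToRender("100", 30000)); // 100
        System.out.println(regionsToRender(null, 30000));  // 30000
    }
}
```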





[jira] [Commented] (HBASE-16677) Add table size (total store file size) to table page

2016-09-22 Thread Guang Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514989#comment-15514989
 ] 

Guang Yang commented on HBASE-16677:


[~enis], sure, updated the patch to use the utility. Thanks.

> Add table size (total store file size) to table page
> 
>
> Key: HBASE-16677
> URL: https://issues.apache.org/jira/browse/HBASE-16677
> Project: HBase
>  Issue Type: New Feature
>  Components: website
>Reporter: Guang Yang
>Priority: Minor
> Attachments: HBASE-16677_v0.patch, HBASE-16677_v1.patch, 
> HBASE-16677_v2.patch, HBASE-16677_v3.patch, mini_cluster_master.png, 
> prod_cluster_partial.png
>
>
> Currently there is no easy way to get the table size from the web UI; though 
> we have the region sizes on the page, it is still convenient to have the 
> size stat for the table as a whole.
> Another pain point is that when a table grows large, with tens of thousands 
> of regions, the page takes an extremely long time to load, yet sometimes we 
> don't want to check all the regions. An optimization could be to accept a 
> query parameter specifying the number of regions to render.





[jira] [Updated] (HBASE-16677) Add table size (total store file size) to table page

2016-09-22 Thread Guang Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guang Yang updated HBASE-16677:
---
Attachment: HBASE-16677_v3.patch

> Add table size (total store file size) to table page
> 
>
> Key: HBASE-16677
> URL: https://issues.apache.org/jira/browse/HBASE-16677
> Project: HBase
>  Issue Type: New Feature
>  Components: website
>Reporter: Guang Yang
>Priority: Minor
> Attachments: HBASE-16677_v0.patch, HBASE-16677_v1.patch, 
> HBASE-16677_v2.patch, HBASE-16677_v3.patch, mini_cluster_master.png, 
> prod_cluster_partial.png
>
>
> Currently there is no easy way to get the table size from the web UI; though 
> we have the region sizes on the page, it is still convenient to have the 
> size stat for the table as a whole.
> Another pain point is that when a table grows large, with tens of thousands 
> of regions, the page takes an extremely long time to load, yet sometimes we 
> don't want to check all the regions. An optimization could be to accept a 
> query parameter specifying the number of regions to render.





[jira] [Commented] (HBASE-16656) BackupID must include backup set name

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514967#comment-15514967
 ] 

Hadoop QA commented on HBASE-16656:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s {color} 
| {color:red} HBASE-16656 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829964/HBASE-16656-v2.patch |
| JIRA Issue | HBASE-16656 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3675/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> BackupID must include backup set name
> -
>
> Key: HBASE-16656
> URL: https://issues.apache.org/jira/browse/HBASE-16656
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-16656-v1.patch, HBASE-16656-v2.patch
>
>
> The default backup set name is "backup". If we back up a backup set 
> "SomeSetName", the backup id will by default be generated in the form 
>  *SomeSetName_timestamp*.
> The goal is to separate backup images between different sets. 
> The history command will be updated, and the new merge command will use 
> these backup name prefixes.
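The naming scheme described above is simple enough to sketch directly. This is a hedged illustration of the `setName_timestamp` format, not the actual backup code; the method name and null-handling are assumptions:

```java
public class BackupIdSketch {
    // Prefix the backup id with the backup set name so images from
    // different sets are separable; "backup" is the stated default set name.
    public static String backupId(String setName, long timestamp) {
        String prefix = (setName == null || setName.isEmpty()) ? "backup" : setName;
        return prefix + "_" + timestamp;
    }

    public static void main(String[] args) {
        System.out.println(backupId("SomeSetName", 1474521600000L)); // SomeSetName_1474521600000
        System.out.println(backupId(null, 1474521600000L));          // backup_1474521600000
    }
}
```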





[jira] [Updated] (HBASE-16656) BackupID must include backup set name

2016-09-22 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-16656:
--
Attachment: HBASE-16656-v2.patch

Patch v2. 

> BackupID must include backup set name
> -
>
> Key: HBASE-16656
> URL: https://issues.apache.org/jira/browse/HBASE-16656
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-16656-v1.patch, HBASE-16656-v2.patch
>
>
> The default backup set name is "backup". If we back up a backup set 
> "SomeSetName", the backup id will by default be generated in the form 
>  *SomeSetName_timestamp*.
> The goal is to separate backup images between different sets. 
> The history command will be updated, and the new merge command will use 
> these backup name prefixes.





[jira] [Commented] (HBASE-16688) Split TestMasterFailoverWithProcedures

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514954#comment-15514954
 ] 

Hadoop QA commented on HBASE-16688:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 31s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 43s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
13s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 123m 34s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestBlockEvictionFromClient |
| Timed out junit tests | org.apache.hadoop.hbase.master.TestMasterFailover |
|   | org.apache.hadoop.hbase.master.TestMasterFailoverBalancerPersistence |
|   | org.apache.hadoop.hbase.master.TestTableLockManager |
|   | org.apache.hadoop.hbase.master.snapshot.TestSnapshotFileCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829940/HBASE-16688-v1.patch |
| JIRA Issue | HBASE-16688 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 8a9a9d237c3d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 4b05f40 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3673/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/3673/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3673/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 

[jira] [Commented] (HBASE-12894) Upgrade Jetty to 9.2.6

2016-09-22 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514927#comment-15514927
 ] 

Sean Busbey commented on HBASE-12894:
-

the master branch (what will be HBase 2.0) is already jdk 8 only. So let's just 
remove jetty 6 entirely from that branch and move to jetty 9 throughout.

> Upgrade Jetty to 9.2.6
> --
>
> Key: HBASE-12894
> URL: https://issues.apache.org/jira/browse/HBASE-12894
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 0.98.0
>Reporter: Rick Hallihan
>Assignee: Ted Yu
>  Labels: MicrosoftSupport
> Fix For: 2.0.0
>
>
> The Jetty component that is used for the HBase Stargate REST endpoint is 
> version 6.1.26 and is fairly outdated. We recently had a customer inquire 
> about enabling cross-origin resource sharing (CORS) for the REST endpoint and 
> found that this older version does not include the necessary filter or 
> configuration options, highlighted at: 
> http://wiki.eclipse.org/Jetty/Feature/Cross_Origin_Filter
> The Jetty project has had significant updates through versions 7, 8 and 9, 
> including a transition to be an Eclipse subproject, so updating to the latest 
> version may be non-trivial. The last update to the Jetty component in 
> https://issues.apache.org/jira/browse/HBASE-3377 was a minor version update 
> and did not require significant work. This update will include a package 
> namespace update so there will likely be a larger number of required changes. 





[jira] [Commented] (HBASE-16146) Counters are expensive...

2016-09-22 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514911#comment-15514911
 ] 

Enis Soztutar commented on HBASE-16146:
---

Maybe we can make it so that Counter just delegates to LongAdder if it is 
available.  
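The delegation idea could be sketched roughly as below. This is an assumption-laden illustration, not HBase's Counter implementation: availability is probed via Class.forName, and a plain AtomicLong is the fallback. (A genuinely JDK-7-safe version would have to reach LongAdder purely through reflection rather than the direct reference used here for brevity.)

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

public class CounterSketch {
    interface Cell { void add(long x); long get(); }

    // On JDK 8+ delegate to LongAdder, whose striped cells are much cheaper
    // under contention than a single CAS-contended AtomicLong.
    static Cell newCell() {
        try {
            Class.forName("java.util.concurrent.atomic.LongAdder");
            final LongAdder adder = new LongAdder();
            return new Cell() {
                public void add(long x) { adder.add(x); }
                public long get() { return adder.sum(); }
            };
        } catch (ClassNotFoundException e) {
            final AtomicLong atomic = new AtomicLong();
            return new Cell() {
                public void add(long x) { atomic.addAndGet(x); }
                public long get() { return atomic.get(); }
            };
        }
    }

    public static void main(String[] args) {
        Cell c = newCell();
        c.add(3);
        c.add(4);
        System.out.println(c.get()); // 7
    }
}
```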

> Counters are expensive...
> -
>
> Key: HBASE-16146
> URL: https://issues.apache.org/jira/browse/HBASE-16146
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
> Attachments: HBASE-16146.branch-1.3.001.patch, counters.patch, 
> less_and_less_counters.png
>
>
> Doing workloadc, perf shows 10%+ of CPU being spent on counter#add. If I 
> disable some of the hot ones -- see patch -- I can get 10% more throughput 
> (390k to 440k). Figure something better.





[jira] [Commented] (HBASE-16688) Split TestMasterFailoverWithProcedures

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514905#comment-15514905
 ] 

Hadoop QA commented on HBASE-16688:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 4s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
37m 58s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 111m 55s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 171m 4s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.regionserver.throttle.TestFlushWithThroughputController |
| Timed out junit tests | org.apache.hadoop.hbase.client.TestMetaWithReplicas |
|   | org.apache.hadoop.hbase.client.TestClientOperationInterrupt |
|   | org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient |
|   | org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.0 Server=1.12.0 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829931/HBASE-16688-v0.patch |
| JIRA Issue | HBASE-16688 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 944314fd42af 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 83cf44c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3672/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/3672/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3672/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 

[jira] [Commented] (HBASE-16604) Scanner retries on IOException can cause the scans to miss data

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514893#comment-15514893
 ] 

Hudson commented on HBASE-16604:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #1655 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1655/])
HBASE-16604 Scanner retries on IOException can cause the scans to miss (enis: 
rev 83cf44cd3f19c841ac53889d09454ed5247ce591)
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/IncrementCoalescer.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRowResult.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TDelete.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionInfo.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduceBase.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TMutation.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultithreadedTableMapper.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TCellVisibility.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TServerName.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TCell.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/BatchMutation.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftUtilities.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TAuthorization.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftHttpServlet.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/THRegionLocation.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TTimeRange.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallable.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TAppend.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/DelegatingKeyValueScanner.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/TBoundedThreadPoolServer.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftHBaseServiceHandler.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/IncrementCoalescerMBean.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TCompareOp.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/UnknownScannerException.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/Mutation.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftMetrics.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnIncrement.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIOError.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIllegalArgument.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/TScan.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TDeleteType.java
* (add) 
hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/ScannerResetException.java
* (edit) 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSource.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TScan.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TGet.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TResult.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TPut.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServer.java
* (add) 

[jira] [Commented] (HBASE-16677) Add table size (total store file size) to table page

2016-09-22 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514888#comment-15514888
 ] 

Enis Soztutar commented on HBASE-16677:
---

bq. It does not seem fit since here the store file is already MB, and in order 
to leverage it, we will need to convert it back to B and then call that method?
Yes. It will be consistent with all human-readable sizes that we use in the UI. 

> Add table size (total store file size) to table page
> 
>
> Key: HBASE-16677
> URL: https://issues.apache.org/jira/browse/HBASE-16677
> Project: HBase
>  Issue Type: New Feature
>  Components: website
>Reporter: Guang Yang
>Priority: Minor
> Attachments: HBASE-16677_v0.patch, HBASE-16677_v1.patch, 
> HBASE-16677_v2.patch, mini_cluster_master.png, prod_cluster_partial.png
>
>
> Currently there is no easy way to get the table size from the web UI; though 
> we have the region sizes on the page, it is still convenient to have the 
> size stat for the table as a whole.
> Another pain point is that when a table grows large, with tens of thousands 
> of regions, the page takes an extremely long time to load, yet sometimes we 
> don't want to check all the regions. An optimization could be to accept a 
> query parameter specifying the number of regions to render.





[jira] [Commented] (HBASE-16662) Fix open POODLE vulnerabilities

2016-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514868#comment-15514868
 ] 

Hudson commented on HBASE-16662:


SUCCESS: Integrated in Jenkins build HBase-1.1-JDK7 #1785 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1785/])
HBASE-16662 Fix open POODLE vulnerabilities (apurtell: rev 
97ce640f5d71cc10828a7895298e2bbb482b1068)
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/SslRMIServerSocketFactorySecure.java
* (edit) hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RESTServer.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/SslRMIClientSocketFactorySecure.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/jetty/SslSelectChannelConnectorSecure.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/JMXListener.java
* (edit) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java


> Fix open POODLE vulnerabilities
> ---
>
> Key: HBASE-16662
> URL: https://issues.apache.org/jira/browse/HBASE-16662
> Project: HBase
>  Issue Type: Bug
>  Components: REST, Thrift
>Reporter: Ben Lau
>Assignee: Ben Lau
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 0.98.23, 1.2.4
>
> Attachments: HBASE-16662-master.patch
>
>
> We recently found a security issue in our HBase REST servers.  The issue is a 
> variant of the POODLE vulnerability (https://en.wikipedia.org/wiki/POODLE) 
> and is present in the HBase Thrift server as well.  It also appears to affect 
> the JMXListener coprocessor.  The vulnerabilities probably affect all 
> versions of HBase that have the affected services.  (If you don't use the 
> affected services with SSL then this ticket probably doesn't affect you).
> Included is a patch to fix the known POODLE vulnerabilities in master.  Let 
> us know if we missed any.  From our end we only personally encountered the 
> HBase REST vulnerability.  We do not use the Thrift server or JMXListener 
> coprocessor but discovered those problems after discussing the issue with 
> some of the HBase PMCs.
> Coincidentally, Hadoop recently committed a SslSelectChannelConnectorSecure 
> which is more or less the same as one of the fixes in this patch.  Hadoop 
> wasn't originally affected by the vulnerability in the 
> SslSelectChannelConnector, but about a month ago they committed HADOOP-12765 
> which does use that class, so they added a SslSelectChannelConnectorSecure 
> class similar to this patch.  Since this class is present in Hadoop 2.7.4+ 
> which hasn't been released yet, we will for now just include our own version 
> instead of depending on the Hadoop version.
> After the patch is approved for master we can backport as necessary to older 
> versions of HBase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16423) Add re-compare option to VerifyReplication to avoid occasional inconsistent rows

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514866#comment-15514866
 ] 

Hadoop QA commented on HBASE-16423:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
33s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
45m 53s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 107m 1s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
30s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 175m 31s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.regionserver.wal.TestSecureWALReplay |
|   | org.apache.hadoop.hbase.regionserver.wal.TestAsyncLogRolling |
|   | org.apache.hadoop.hbase.regionserver.wal.TestLogRolling |
|   | org.apache.hadoop.hbase.mapreduce.TestTableMapReduce |
|   | org.apache.hadoop.hbase.regionserver.TestCompoundBloomFilter |
|   | org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829858/HBASE-16423-v2.patch |
| JIRA Issue | HBASE-16423 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux df7e1575e1fb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 83cf44c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3670/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3670/artifact/patchprocess/patch-unit-hbase-server.txt
 |

[jira] [Commented] (HBASE-16604) Scanner retries on IOException can cause the scans to miss data

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514849#comment-15514849
 ] 

Hadoop QA commented on HBASE-16604:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 55s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
47s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
26s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
15s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 57s 
{color} | {color:red} hbase-server in branch-1 has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 10s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 39s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 1m 
3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 18s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hbase-hadoop-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 32s 
{color} | {color:green} hbase-hadoop2-compat in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 147m 2s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 
55s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |

[jira] [Commented] (HBASE-16146) Counters are expensive...

2016-09-22 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514767#comment-15514767
 ] 

Gary Helmling commented on HBASE-16146:
---

[~stack], I'm curious if this patch provides any improvement for your YCSB 
workload.

> Counters are expensive...
> -
>
> Key: HBASE-16146
> URL: https://issues.apache.org/jira/browse/HBASE-16146
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
> Attachments: HBASE-16146.branch-1.3.001.patch, counters.patch, 
> less_and_less_counters.png
>
>
> Doing workloadc, perf shows 10%+ of CPU being spent on counter#add. If I 
> disable some of the hot ones -- see patch -- I can get 10% more throughput 
> (390k to 440k). Figure something better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16146) Counters are expensive...

2016-09-22 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling updated HBASE-16146:
--
Attachment: HBASE-16146.branch-1.3.001.patch

The attached patch removes the use of an instance-level ThreadLocal in Counter 
to store the per-thread last used cell index.  Instead, we just recompute the 
hash for each access.  This comes at a bit of a cost for writes, but provides 
big memory savings when many counters are in use, as well as avoiding spinning 
in ThreadLocalMap.getEntryAfterMiss().
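The idea — recompute the striped-cell index from the current thread on every access instead of caching it in a per-instance ThreadLocal — can be sketched roughly as follows (a simplified illustration, not the actual Counter internals; the cell count and hash mixing here are arbitrary):

```java
import java.util.concurrent.atomic.AtomicLongArray;

public class StripedCounter {
    // Fixed array of cells; different threads tend to hit different cells,
    // reducing CAS contention versus a single AtomicLong.
    private final AtomicLongArray cells = new AtomicLongArray(64);

    // Recompute the cell index from the thread on every access; no
    // ThreadLocal state is retained per counter instance.
    private int index() {
        int h = (int) Thread.currentThread().getId() * 0x9E3779B9;
        return (h >>> 26) & (cells.length() - 1);
    }

    public void add(long delta) {
        cells.addAndGet(index(), delta);
    }

    public long get() {
        long sum = 0;
        for (int i = 0; i < cells.length(); i++) {
            sum += cells.get(i);
        }
        return sum;
    }
}
```

The trade-off described above shows up directly: `index()` costs a little arithmetic on every write, but a counter instance no longer pins a ThreadLocal entry in every thread's ThreadLocalMap.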

> Counters are expensive...
> -
>
> Key: HBASE-16146
> URL: https://issues.apache.org/jira/browse/HBASE-16146
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
> Attachments: HBASE-16146.branch-1.3.001.patch, counters.patch, 
> less_and_less_counters.png
>
>
> Doing workloadc, perf shows 10%+ of CPU being spent on counter#add. If I 
> disable some of the hot ones -- see patch -- I can get 10% more throughput 
> (390k to 440k). Figure something better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12894) Upgrade Jetty to 9.2.6

2016-09-22 Thread Guang Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514751#comment-15514751
 ] 

Guang Yang commented on HBASE-12894:


Hi [~ted_yu], are you working on this?

We recently moved to Jetty 9 for REST and have seen good performance (we will 
share more details later). If you haven't started on this, I can work on a 
patch against the master branch for review.

As for this effort, there are two things that need some discussion: 1) Jetty 9 
requires JDK 8, which means we would need to move to JDK 8 at the same time. 
2) The infoserver from HDFS is still on Jetty 6, so we would run both Jetty 9 
and Jetty 6 together; per our testing that does not seem to be a problem, as 
they live in different package namespaces.

Thoughts?

> Upgrade Jetty to 9.2.6
> --
>
> Key: HBASE-12894
> URL: https://issues.apache.org/jira/browse/HBASE-12894
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 0.98.0
>Reporter: Rick Hallihan
>Assignee: Ted Yu
>  Labels: MicrosoftSupport
> Fix For: 2.0.0
>
>
> The Jetty component that is used for the HBase Stargate REST endpoint is 
> version 6.1.26 and is fairly outdated. We recently had a customer inquire 
> about enabling cross-origin resource sharing (CORS) for the REST endpoint and 
> found that this older version does not include the necessary filter or 
> configuration options, highlighted at: 
> http://wiki.eclipse.org/Jetty/Feature/Cross_Origin_Filter
> The Jetty project has had significant updates through versions 7, 8 and 9, 
> including a transition to be an Eclipse subproject, so updating to the latest 
> version may be non-trivial. The last update to the Jetty component in 
> https://issues.apache.org/jira/browse/HBASE-3377 was a minor version update 
> and did not require significant work. This update will include a package 
> namespace update so there will likely be a larger number of required changes. 
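For reference, once on a Jetty version that ships CrossOriginFilter, enabling CORS is a matter of registering the filter, for example via a web.xml fragment like the following (the parameter values here are illustrative, not a recommended production policy):

```xml
<filter>
  <filter-name>cross-origin</filter-name>
  <filter-class>org.eclipse.jetty.servlets.CrossOriginFilter</filter-class>
  <init-param>
    <param-name>allowedOrigins</param-name>
    <param-value>*</param-value>
  </init-param>
  <init-param>
    <param-name>allowedMethods</param-name>
    <param-value>GET,POST,PUT,DELETE,HEAD</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>cross-origin</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```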



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16685) Revisit execution of SnapshotCopy in MapReduceBackupCopyService

2016-09-22 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514740#comment-15514740
 ] 

Vladimir Rodionov commented on HBASE-16685:
---

One handler blocked for hours is OK as long as we have 100 of them; I do not 
see any issue here. That said, I do not mind changing the code if the change 
is not a big one. I think the Master does not have a direct dependency on M/R 
- only on the backup package.

> Revisit execution of SnapshotCopy in MapReduceBackupCopyService
> ---
>
> Key: HBASE-16685
> URL: https://issues.apache.org/jira/browse/HBASE-16685
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>  Labels: backup
>
> During code review of backup / restore, Matteo raised comment on the 
> following code:
> {code}
> res = snapshotCp.run(options);
> {code}
> I think this will be a first time where HBase server has a direct dependency 
> on a MR job.
> also we need to revisit this code to avoid having one handler blocked for N 
> hours while we are doing a copy.
> This issue is to address the above comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16146) Counters are expensive...

2016-09-22 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514741#comment-15514741
 ] 

Gary Helmling commented on HBASE-16146:
---

We've seen Counter come up as a source of high CPU utilization in 1.3, 
especially since the switch of metrics to use FastLongHistogram (each instance 
of which uses 260 Counter instances internally) from HBASE-15222.  I think this 
is due to the use of the instance-level ThreadLocal in Counter to track the 
per-thread cell index, as perf output on hot nodes shows a huge amount of time 
in ThreadLocalMap.getEntryAfterMiss().  As the number of Counter instances (and 
ThreadLocal instances) we're retaining in memory grows, performance seems to 
degrade.

This is all moot for master, since we've already deprecated Counter and 
replaced its usage with LongAdder.  But we still need a solution for Counter in 
branch-1.  I'm testing a patch which removes the ThreadLocal usage, which I'll 
attach here.  Benchmarking shows a small reduction in Counter performance, but 
a big improvement in FastLongHistogram performance when many histograms are 
retained in memory.
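For context, java.util.concurrent.LongAdder (the replacement already used on master) stripes its cells internally and keeps no per-instance ThreadLocal state, so many retained instances stay cheap. Basic usage under concurrent increments:

```java
import java.util.concurrent.atomic.LongAdder;

public class AdderDemo {
    public static void main(String[] args) throws InterruptedException {
        LongAdder counter = new LongAdder();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            // Each thread increments 1000 times; LongAdder spreads the
            // contention across internal cells automatically.
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    counter.increment();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println(counter.sum()); // 4000
    }
}
```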

> Counters are expensive...
> -
>
> Key: HBASE-16146
> URL: https://issues.apache.org/jira/browse/HBASE-16146
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
> Attachments: counters.patch, less_and_less_counters.png
>
>
> Doing workloadc, perf shows 10%+ of CPU being spent on counter#add. If I 
> disable some of the hot ones -- see patch -- I can get 10% more throughput 
> (390k to 440k). Figure something better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16677) Add table size (total store file size) to table page

2016-09-22 Thread Guang Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514732#comment-15514732
 ] 

Guang Yang commented on HBASE-16677:


Thanks [~enis] for looking at the patch.

{quote}
Do we actually stop rendering here? I did not see it. Have you tested with more 
than 10K regions?
{quote}
The rendering part is in the *if* block, so once it reaches the threshold it 
stops rendering more regions. I didn't test with 10K regions, but I tested 
with a query parameter like "numRegions=10" and it truncated the number of 
rendered regions to 10 in that case.

{quote}
Please use TraditionalBinaryPrefix.long2String() instead of this:
{quote}
It does not seem to fit, since the store file size here is already in MB; to 
leverage that method we would need to convert back to bytes first and then 
call it.

{quote}
Can we please change the param name from regions to numRegions, and also have a 
link in the warning message to the same page with the parameter set to render 
all regions.
{quote}
Fixed, thanks.

> Add table size (total store file size) to table page
> 
>
> Key: HBASE-16677
> URL: https://issues.apache.org/jira/browse/HBASE-16677
> Project: HBase
>  Issue Type: New Feature
>  Components: website
>Reporter: Guang Yang
>Priority: Minor
> Attachments: HBASE-16677_v0.patch, HBASE-16677_v1.patch, 
> HBASE-16677_v2.patch, mini_cluster_master.png, prod_cluster_partial.png
>
>
> Currently there is no easy way to get the table size from the web UI; though 
> we have the region size on the page, it is still convenient to have the 
> table size stat as well.
> Another pain point is that when a table grows large, with tens of thousands 
> of regions, it takes an extremely long time to load the page; however, 
> sometimes we don't want to check all the regions. An optimization could be 
> to accept a query parameter specifying the number of regions to render.
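That truncation can be sketched as a tiny helper — hypothetical names, not the patch's code — that caps the rendered region list by an optional numRegions query parameter:

```java
import java.util.List;

public class RegionListLimiter {
    // Parse an optional "numRegions" query parameter and truncate the region
    // list accordingly; render everything when the parameter is absent or
    // malformed.
    static <T> List<T> limit(List<T> regions, String numRegionsParam) {
        int limit = regions.size();
        if (numRegionsParam != null) {
            try {
                limit = Math.min(limit,
                        Math.max(0, Integer.parseInt(numRegionsParam)));
            } catch (NumberFormatException ignored) {
                // Fall through: render all regions on a malformed parameter.
            }
        }
        return regions.subList(0, limit);
    }
}
```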



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16677) Add table size (total store file size) to table page

2016-09-22 Thread Guang Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guang Yang updated HBASE-16677:
---
Attachment: HBASE-16677_v2.patch

> Add table size (total store file size) to table page
> 
>
> Key: HBASE-16677
> URL: https://issues.apache.org/jira/browse/HBASE-16677
> Project: HBase
>  Issue Type: New Feature
>  Components: website
>Reporter: Guang Yang
>Priority: Minor
> Attachments: HBASE-16677_v0.patch, HBASE-16677_v1.patch, 
> HBASE-16677_v2.patch, mini_cluster_master.png, prod_cluster_partial.png
>
>
> Currently there is no easy way to get the table size from the web UI; though 
> we have the region size on the page, it is still convenient to have the 
> table size stat as well.
> Another pain point is that when a table grows large, with tens of thousands 
> of regions, it takes an extremely long time to load the page; however, 
> sometimes we don't want to check all the regions. An optimization could be 
> to accept a query parameter specifying the number of regions to render.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16671) Split TestExportSnapshot

2016-09-22 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-16671:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Split TestExportSnapshot
> 
>
> Key: HBASE-16671
> URL: https://issues.apache.org/jira/browse/HBASE-16671
> Project: HBase
>  Issue Type: Test
>  Components: snapshots, test
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16671-v0.patch
>
>
> TestExportSnapshot contains 3 types of tests:
>  - MiniCluster creating a table, taking a snapshot and running export
>  - Mocked snapshot running export
>  - tool helper tests
> Since everything is currently packed into a single test class, types 2 and 3 
> ended up with a before/after that creates a table and takes a snapshot that 
> is never used. Move those tests out and cut some time from the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16657) Expose per-region last major compaction timestamp in RegionServer UI

2016-09-22 Thread Dustin Pho (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514698#comment-15514698
 ] 

Dustin Pho commented on HBASE-16657:


Regarding the manual steps taken to verify -- 
1. Use major_compact in bin/hbase shell.
2. Go to the corresponding :/rs-status#regionCompactStats page. The 
last major compaction timestamp should be updated.

I don't think my changes affect the failed junit tests.

> Expose per-region last major compaction timestamp in RegionServer UI
> 
>
> Key: HBASE-16657
> URL: https://issues.apache.org/jira/browse/HBASE-16657
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, UI
>Reporter: Gary Helmling
>Assignee: Dustin Pho
>Priority: Minor
> Attachments: HBASE-16657.001.patch, with-patch-changes.png, 
> without-patch-changes.png
>
>
> HBASE-12859 added some tracking for the last major compaction completed for 
> each region.  However, this is currently only exposed through the cluster 
> status reporting and the Admin API.  Since the regionserver is already 
> reporting this information, it would be nice to fold it in somewhere to the 
> region listing in the regionserver UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16657) Expose per-region last major compaction timestamp in RegionServer UI

2016-09-22 Thread Dustin Pho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dustin Pho updated HBASE-16657:
---
Attachment: without-patch-changes.png

> Expose per-region last major compaction timestamp in RegionServer UI
> 
>
> Key: HBASE-16657
> URL: https://issues.apache.org/jira/browse/HBASE-16657
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, UI
>Reporter: Gary Helmling
>Assignee: Dustin Pho
>Priority: Minor
> Attachments: HBASE-16657.001.patch, with-patch-changes.png, 
> without-patch-changes.png
>
>
> HBASE-12859 added some tracking for the last major compaction completed for 
> each region.  However, this is currently only exposed through the cluster 
> status reporting and the Admin API.  Since the regionserver is already 
> reporting this information, it would be nice to fold it in somewhere to the 
> region listing in the regionserver UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16688) Split TestMasterFailoverWithProcedures

2016-09-22 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514682#comment-15514682
 ] 

Matteo Bertozzi commented on HBASE-16688:
-

TestRegionServerMetrics was other stuff I was working on; it is removed in v1. 
Also renamed TestMasterProcedureWAL to TestMasterProcedureWalLease.

> Split TestMasterFailoverWithProcedures
> --
>
> Key: HBASE-16688
> URL: https://issues.apache.org/jira/browse/HBASE-16688
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, test
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-16688-v0.patch, HBASE-16688-v1.patch
>
>
> extract the WAL lease tests from the TestMasterFailoverWithProcedures. 
> leaving TestMasterFailoverWithProcedures with only the proc test on master 
> failover



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16657) Expose per-region last major compaction timestamp in RegionServer UI

2016-09-22 Thread Dustin Pho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dustin Pho updated HBASE-16657:
---
Attachment: with-patch-changes.png

> Expose per-region last major compaction timestamp in RegionServer UI
> 
>
> Key: HBASE-16657
> URL: https://issues.apache.org/jira/browse/HBASE-16657
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver, UI
>Reporter: Gary Helmling
>Assignee: Dustin Pho
>Priority: Minor
> Attachments: HBASE-16657.001.patch, with-patch-changes.png
>
>
> HBASE-12859 added some tracking for the last major compaction completed for 
> each region.  However, this is currently only exposed through the cluster 
> status reporting and the Admin API.  Since the regionserver is already 
> reporting this information, it would be nice to fold it in somewhere to the 
> region listing in the regionserver UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15315) Remove always set super user call as high priority

2016-09-22 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-15315:
---
Fix Version/s: 1.2.4

> Remove always set super user call as high priority
> --
>
> Key: HBASE-15315
> URL: https://issues.apache.org/jira/browse/HBASE-15315
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yong Zhang
>Assignee: Yong Zhang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.4
>
> Attachments: HBASE-15315.001.patch
>
>
> The current implementation sets superuser calls to ADMIN_QOS, but we have 
> many customers that use the superuser to do normal table operations such as 
> putting/getting data. If a client puts a lot of data during region 
> assignment, RPCs from the HMaster may time out because no handler is 
> available, so it is better not to always set superuser calls as high 
> priority.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16630) Fragmentation in long running Bucket Cache

2016-09-22 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514645#comment-15514645
 ] 

Vladimir Rodionov commented on HBASE-16630:
---

Yes, my bad. Somehow missed it! 



> Fragmentation in long running Bucket Cache
> --
>
> Key: HBASE-16630
> URL: https://issues.apache.org/jira/browse/HBASE-16630
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3
>Reporter: deepankar
>Assignee: deepankar
> Attachments: 16630-v2-suggest.patch, HBASE-16630-v2.patch, 
> HBASE-16630-v3.patch, HBASE-16630.patch
>
>
> As we have been running the bucket cache for a long time in our system, we 
> observe cases where some nodes, after some time, do not fully utilize the 
> bucket cache; in some cases it is even worse, in the sense that they get 
> stuck at a value < 0.25 % of the bucket cache (DEFAULT_MEMORY_FACTOR, as all 
> our tables are configured in-memory for simplicity's sake).
> We took a heap dump and analyzed what is happening, and saw a classic case 
> of fragmentation. The current implementation of BucketCache (mainly 
> BucketAllocator) relies on the logic that fully free buckets are available 
> for switching/adjusting cache usage between different bucket sizes. But once 
> a compaction / bulkload happens and blocks are evicted from a bucket size, 
> they are usually evicted from random places in the buckets of that size, 
> thus locking up the buckets associated with that bucket size. In the worst 
> cases of fragmentation we have seen bucket sizes with an occupancy ratio of 
> < 10 % that nevertheless have no completely free buckets to share with the 
> other bucket sizes.
> Currently the existing eviction logic helps in the cases where the cache 
> used is more than MEMORY_FACTOR or MULTI_FACTOR; once those evictions are 
> done, the eviction (freeSpace function) will not evict anything more, and 
> cache utilization will be stuck at that value without any allocations for 
> the other required sizes.
> The fix we came up with is simple: defragment (compact) the buckets of a 
> bucket size, thus increasing the occupancy ratio and also freeing buckets up 
> to be fully free. The logic itself is not complicated, since the 
> BucketAllocator takes care of packing the blocks into buckets; we just need 
> to evict and re-allocate the blocks for all the bucket sizes that don't fit 
> the criteria.
> I am attaching an initial patch just to give an idea of what we are thinking 
> and I'll improve it based on the comments from the community.
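The fragmentation and the proposed defragmentation can be sketched abstractly — this models buckets of one bucket size as boolean slot arrays, not BucketAllocator's actual data structures, and all names here are hypothetical:

```java
public class DefragSketch {
    // occupied[i][j] says whether slot j of bucket i holds a live block.
    static int completelyFreeBuckets(boolean[][] occupied) {
        int free = 0;
        for (boolean[] bucket : occupied) {
            boolean anyUsed = false;
            for (boolean slot : bucket) {
                anyUsed |= slot;
            }
            if (!anyUsed) {
                free++;
            }
        }
        return free;
    }

    // Defragmentation: repack all live blocks into as few buckets as
    // possible, so the remaining buckets become completely free and can be
    // reassigned to other bucket sizes.
    static boolean[][] defragment(boolean[][] occupied) {
        int slots = occupied[0].length;
        int live = 0;
        for (boolean[] bucket : occupied) {
            for (boolean slot : bucket) {
                if (slot) {
                    live++;
                }
            }
        }
        boolean[][] out = new boolean[occupied.length][slots];
        for (int k = 0; k < live; k++) {
            out[k / slots][k % slots] = true;
        }
        return out;
    }
}
```

With four 4-slot buckets each holding a single block, occupancy is 25% yet no bucket is completely free; after repacking, three of the four buckets become free to hand to other bucket sizes, which is the effect the report describes.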



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14123) HBase Backup/Restore Phase 2

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514637#comment-15514637
 ] 

Hadoop QA commented on HBASE-14123:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 2s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for 
instructions. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 5s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 47 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
47s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 10s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 14m 
30s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
59s {color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 33s 
{color} | {color:red} hbase-common in master has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 49s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 15m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
4s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s 
{color} | {color:red} The patch has 654 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 20s 
{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 24s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 1s 
{color} | {color:red} hbase-server generated 2 new + 0 unchanged - 0 fixed = 2 
total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s 
{color} | {color:red} hbase-client generated 4 new + 14 unchanged - 0 fixed = 
18 total (was 14) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 42s 
{color} | {color:red} root generated 4 new + 20 unchanged - 0 fixed = 24 total 
(was 20) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hbase-protocol in the patch passed. {color} |
| {color:green}+1{color} | 

[jira] [Updated] (HBASE-16688) Split TestMasterFailoverWithProcedures

2016-09-22 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-16688:

Attachment: HBASE-16688-v1.patch

> Split TestMasterFailoverWithProcedures
> --
>
> Key: HBASE-16688
> URL: https://issues.apache.org/jira/browse/HBASE-16688
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, test
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-16688-v0.patch, HBASE-16688-v1.patch
>
>
> Extract the WAL lease tests from TestMasterFailoverWithProcedures, leaving 
> TestMasterFailoverWithProcedures with only the proc tests on master 
> failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14417) Incremental backup and bulk loading

2016-09-22 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514628#comment-15514628
 ] 

Ted Yu commented on HBASE-14417:


Currently there is one BulkLoadHandler inside each region server which 
periodically writes BulkLoadDescriptors to the hbase:backup table.
Ideally, only BulkLoadDescriptors for tables which have gone through a full 
backup should be written.

Looking for a way to pass the Set of such tables to the region servers so that 
each server doesn't have to poll the hbase:backup table periodically.

> Incremental backup and bulk loading
> ---
>
> Key: HBASE-14417
> URL: https://issues.apache.org/jira/browse/HBASE-14417
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Vladimir Rodionov
>Assignee: Ted Yu
>Priority: Critical
>  Labels: backup
> Fix For: 2.0.0
>
> Attachments: 14417.v1.txt, 14417.v2.txt
>
>
> Currently, incremental backup is based on WAL files. Bulk data loading 
> bypasses WALs for obvious reasons, breaking incremental backups. The only way 
> to continue backups after bulk loading is to create new full backup of a 
> table. This may not be feasible for customers who do bulk loading regularly 
> (say, every day).
> Google doc for design:
> https://docs.google.com/document/d/1ACCLsecHDvzVSasORgqqRNrloGx4mNYIbvAU7lq5lJE



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16676) All RPC requests serviced by PriorityRpcServer in some deploys after HBASE-13375

2016-09-22 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-16676:
---
   Resolution: Duplicate
 Assignee: (was: Andrew Purtell)
Fix Version/s: (was: 1.2.4)
   Status: Resolved  (was: Patch Available)

> All RPC requests serviced by PriorityRpcServer in some deploys after 
> HBASE-13375
> 
>
> Key: HBASE-16676
> URL: https://issues.apache.org/jira/browse/HBASE-16676
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.2.1, 1.2.2, 1.2.3
>Reporter: Andrew Purtell
> Attachments: HBASE-16676-branch-1.2.patch
>
>
> I have been trying to track down why 1.2.x sometimes won't pass a 1 billion 
> row ITBLL run while 0.98.22 and 1.1.6 always will, and a defeat of RPC 
> prioritization could explain it. We get stuck during the loading phase and 
> the loader job eventually fails. 
> All testing is done in an insecure environment under the same UNIX user 
> (clusterdock) so effectively all ops are issued by the superuser.
> Doing unrelated work - or so I thought! - I was looking at object allocations 
> by YCSB workload by thread and when looking at the RegionServer RPC threads 
> noticed that for 0.98.22 and 1.1.6, as expected, the vast majority of 
> allocations are from threads named "B.defaultRpcServer.handler*". In 1.2.0 
> and up, instead the vast majority are from threads named 
> "PriorityRpcServer.handler*" with very little from threads named 
> "B.defaultRpcServer.handler*".  A git bisect to find the change that causes 
> this leads to HBASE-13375, and so of course this makes sense out of what I am 
> seeing, but is this really what we want? What about production environments 
> (insecure and degenerate secure) where all ops are effectively issued by the 
> superuser? We run one of these at Salesforce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16676) All RPC requests serviced by PriorityRpcServer in some deploys after HBASE-13375

2016-09-22 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514613#comment-15514613
 ] 

Andrew Purtell commented on HBASE-16676:




> All RPC requests serviced by PriorityRpcServer in some deploys after 
> HBASE-13375
> 
>
> Key: HBASE-16676
> URL: https://issues.apache.org/jira/browse/HBASE-16676
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.2.1, 1.2.2, 1.2.3
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 1.2.4
>
> Attachments: HBASE-16676-branch-1.2.patch
>
>
> I have been trying to track down why 1.2.x sometimes won't pass a 1 billion 
> row ITBLL run while 0.98.22 and 1.1.6 always will, and a defeat of RPC 
> prioritization could explain it. We get stuck during the loading phase and 
> the loader job eventually fails. 
> All testing is done in an insecure environment under the same UNIX user 
> (clusterdock) so effectively all ops are issued by the superuser.
> Doing unrelated work - or so I thought! - I was looking at object allocations 
> by YCSB workload by thread and when looking at the RegionServer RPC threads 
> noticed that for 0.98.22 and 1.1.6, as expected, the vast majority of 
> allocations are from threads named "B.defaultRpcServer.handler*". In 1.2.0 
> and up, instead the vast majority are from threads named 
> "PriorityRpcServer.handler*" with very little from threads named 
> "B.defaultRpcServer.handler*".  A git bisect to find the change that causes 
> this leads to HBASE-13375, and so of course this makes sense out of what I am 
> seeing, but is this really what we want? What about production environments 
> (insecure and degenerate secure) where all ops are effectively issued by the 
> superuser? We run one of these at Salesforce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16682) Fix Shell tests failure. NoClassDefFoundError for MiniKdc

2016-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514604#comment-15514604
 ] 

Hadoop QA commented on HBASE-16682:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 39s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 49s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
6s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 8s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12829929/HBASE-16682.master.002.patch
 |
| JIRA Issue | HBASE-16682 |
| Optional Tests |  asflicense  javac  javadoc  unit  xml  compile  |
| uname | Linux 03b544b7526a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 83cf44c |
| Default Java | 1.8.0_101 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3671/testReport/ |
| modules | C: hbase-common U: hbase-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/3671/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Fix Shell tests failure. NoClassDefFoundError for MiniKdc
> -
>
> Key: HBASE-16682
> URL: https://issues.apache.org/jira/browse/HBASE-16682
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-16682.master.001.patch, 
> HBASE-16682.master.002.patch
>
>
> Stacktrace
> {noformat}
> java.lang.NoClassDefFoundError: org/apache/hadoop/minikdc/MiniKdc
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
>   at 

[jira] [Commented] (HBASE-16676) All RPC requests serviced by PriorityRpcServer in some deploys after HBASE-13375

2016-09-22 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514596#comment-15514596
 ] 

Dima Spivak commented on HBASE-16676:
-

Wow, awesome find, [~apurtell]! As [~stack] mentioned, I saw reproducible 
failures in the generator phase of ITBLL while testing 1.2.3, but just assumed 
something was wrong with my machine. Time for me to start trusting tests more 
:).

> All RPC requests serviced by PriorityRpcServer in some deploys after 
> HBASE-13375
> 
>
> Key: HBASE-16676
> URL: https://issues.apache.org/jira/browse/HBASE-16676
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.2.1, 1.2.2, 1.2.3
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 1.2.4
>
> Attachments: HBASE-16676-branch-1.2.patch
>
>
> I have been trying to track down why 1.2.x sometimes won't pass a 1 billion 
> row ITBLL run while 0.98.22 and 1.1.6 always will, and a defeat of RPC 
> prioritization could explain it. We get stuck during the loading phase and 
> the loader job eventually fails. 
> All testing is done in an insecure environment under the same UNIX user 
> (clusterdock) so effectively all ops are issued by the superuser.
> Doing unrelated work - or so I thought! - I was looking at object allocations 
> by YCSB workload by thread and when looking at the RegionServer RPC threads 
> noticed that for 0.98.22 and 1.1.6, as expected, the vast majority of 
> allocations are from threads named "B.defaultRpcServer.handler*". In 1.2.0 
> and up, instead the vast majority are from threads named 
> "PriorityRpcServer.handler*" with very little from threads named 
> "B.defaultRpcServer.handler*".  A git bisect to find the change that causes 
> this leads to HBASE-13375, and so of course this makes sense out of what I am 
> seeing, but is this really what we want? What about production environments 
> (insecure and degenerate secure) where all ops are effectively issued by the 
> superuser? We run one of these at Salesforce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16662) Fix open POODLE vulnerabilities

2016-09-22 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-16662:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.2.4
   0.98.23
   1.1.7
   1.4.0
   1.3.0
   2.0.0
   Status: Resolved  (was: Patch Available)

> Fix open POODLE vulnerabilities
> ---
>
> Key: HBASE-16662
> URL: https://issues.apache.org/jira/browse/HBASE-16662
> Project: HBase
>  Issue Type: Bug
>  Components: REST, Thrift
>Reporter: Ben Lau
>Assignee: Ben Lau
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 0.98.23, 1.2.4
>
> Attachments: HBASE-16662-master.patch
>
>
> We recently found a security issue in our HBase REST servers.  The issue is a 
> variant of the POODLE vulnerability (https://en.wikipedia.org/wiki/POODLE) 
> and is present in the HBase Thrift server as well.  It also appears to affect 
> the JMXListener coprocessor.  The vulnerabilities probably affect all 
> versions of HBase that have the affected services.  (If you don't use the 
> affected services with SSL then this ticket probably doesn't affect you).
> Included is a patch to fix the known POODLE vulnerabilities in master.  Let 
> us know if we missed any.  From our end we only personally encountered the 
> HBase REST vulnerability.  We do not use the Thrift server or JMXListener 
> coprocessor but discovered those problems after discussing the issue with 
> some of the HBase PMCs.
> Coincidentally, Hadoop recently committed a SslSelectChannelConnectorSecure 
> which is more or less the same as one of the fixes in this patch.  Hadoop 
> wasn't originally affected by the vulnerability in the 
> SslSelectChannelConnector, but about a month ago they committed HADOOP-12765 
> which does use that class, so they added a SslSelectChannelConnectorSecure 
> class similar to this patch.  Since this class is present in Hadoop 2.7.4+ 
> which hasn't been released yet, we will for now just include our own version 
> instead of depending on the Hadoop version.
> After the patch is approved for master we can backport as necessary to older 
> versions of HBase.
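The essence of the fix described above — removing SSLv3 from the enabled protocol list so a POODLE downgrade has nothing to fall back to — can be sketched generically (this is an illustrative JSSE-style filter, not the actual SslSelectChannelConnectorSecure code from the patch):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: the real HBase/Hadoop fix subclasses Jetty's
// SslSelectChannelConnector, but the core idea is the same -- filter
// SSLv3 (and the SSLv2Hello pseudo-protocol) out of the protocols the
// server is willing to negotiate.
public class PoodleFilterSketch {
    public static String[] stripVulnerableProtocols(String[] enabled) {
        List<String> keep = new ArrayList<>();
        for (String p : enabled) {
            if (!"SSLv3".equals(p) && !"SSLv2Hello".equals(p)) {
                keep.add(p);
            }
        }
        return keep.toArray(new String[0]);
    }

    public static void main(String[] args) {
        String[] filtered = stripVulnerableProtocols(
            new String[] {"SSLv2Hello", "SSLv3", "TLSv1", "TLSv1.1", "TLSv1.2"});
        System.out.println(String.join(",", filtered)); // TLSv1,TLSv1.1,TLSv1.2
    }
}
```

The filtered array would then be applied to the server socket or SSLEngine (e.g. via `setEnabledProtocols`) before accepting connections.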



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16662) Fix open POODLE vulnerabilities

2016-09-22 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514560#comment-15514560
 ] 

Andrew Purtell edited comment on HBASE-16662 at 9/22/16 9:41 PM:
-

Going to commit to all branches. 

For 0.98, in order to avoid POODLE vulnerabilities in the embedded servlets 
(via InfoServer) it is also necessary to build HBase with a version of Hadoop 
that has HADOOP-11260, which I think is 2.5.2 or later. The default 0.98 build 
specifies 2.2.0. I'm sure nobody runs that in production at this point, but we 
still do that for the sake of build compatibility.


was (Author: apurtell):
Going to commit to all branches. 

For 0.98, in order to avoid POODLE vulnerabilities it is also necessary to 
build HBase with a version of Hadoop that has HADOOP-11260, which I think is 
2.5.2 or later. The default 0.98 build specifies 2.2.0. I'm sure nobody runs 
that in production at this point, but we still do that for sake of build 
compatibility.

> Fix open POODLE vulnerabilities
> ---
>
> Key: HBASE-16662
> URL: https://issues.apache.org/jira/browse/HBASE-16662
> Project: HBase
>  Issue Type: Bug
>  Components: REST, Thrift
>Reporter: Ben Lau
>Assignee: Ben Lau
> Attachments: HBASE-16662-master.patch
>
>
> We recently found a security issue in our HBase REST servers.  The issue is a 
> variant of the POODLE vulnerability (https://en.wikipedia.org/wiki/POODLE) 
> and is present in the HBase Thrift server as well.  It also appears to affect 
> the JMXListener coprocessor.  The vulnerabilities probably affect all 
> versions of HBase that have the affected services.  (If you don't use the 
> affected services with SSL then this ticket probably doesn't affect you).
> Included is a patch to fix the known POODLE vulnerabilities in master.  Let 
> us know if we missed any.  From our end we only personally encountered the 
> HBase REST vulnerability.  We do not use the Thrift server or JMXListener 
> coprocessor but discovered those problems after discussing the issue with 
> some of the HBase PMCs.
> Coincidentally, Hadoop recently committed a SslSelectChannelConnectorSecure 
> which is more or less the same as one of the fixes in this patch.  Hadoop 
> wasn't originally affected by the vulnerability in the 
> SslSelectChannelConnector, but about a month ago they committed HADOOP-12765 
> which does use that class, so they added a SslSelectChannelConnectorSecure 
> class similar to this patch.  Since this class is present in Hadoop 2.7.4+ 
> which hasn't been released yet, we will for now just include our own version 
> instead of depending on the Hadoop version.
> After the patch is approved for master we can backport as necessary to older 
> versions of HBase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16662) Fix open POODLE vulnerabilities

2016-09-22 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514560#comment-15514560
 ] 

Andrew Purtell commented on HBASE-16662:


Going to commit to all branches. 

For 0.98, in order to avoid POODLE vulnerabilities it is also necessary to 
build HBase with a version of Hadoop that has HADOOP-11260, which I think is 
2.5.2 or later. The default 0.98 build specifies 2.2.0. I'm sure nobody runs 
that in production at this point, but we still do that for the sake of build 
compatibility.

> Fix open POODLE vulnerabilities
> ---
>
> Key: HBASE-16662
> URL: https://issues.apache.org/jira/browse/HBASE-16662
> Project: HBase
>  Issue Type: Bug
>  Components: REST, Thrift
>Reporter: Ben Lau
>Assignee: Ben Lau
> Attachments: HBASE-16662-master.patch
>
>
> We recently found a security issue in our HBase REST servers.  The issue is a 
> variant of the POODLE vulnerability (https://en.wikipedia.org/wiki/POODLE) 
> and is present in the HBase Thrift server as well.  It also appears to affect 
> the JMXListener coprocessor.  The vulnerabilities probably affect all 
> versions of HBase that have the affected services.  (If you don't use the 
> affected services with SSL then this ticket probably doesn't affect you).
> Included is a patch to fix the known POODLE vulnerabilities in master.  Let 
> us know if we missed any.  From our end we only personally encountered the 
> HBase REST vulnerability.  We do not use the Thrift server or JMXListener 
> coprocessor but discovered those problems after discussing the issue with 
> some of the HBase PMCs.
> Coincidentally, Hadoop recently committed a SslSelectChannelConnectorSecure 
> which is more or less the same as one of the fixes in this patch.  Hadoop 
> wasn't originally affected by the vulnerability in the 
> SslSelectChannelConnector, but about a month ago they committed HADOOP-12765 
> which does use that class, so they added a SslSelectChannelConnectorSecure 
> class similar to this patch.  Since this class is present in Hadoop 2.7.4+ 
> which hasn't been released yet, we will for now just include our own version 
> instead of depending on the Hadoop version.
> After the patch is approved for master we can backport as necessary to older 
> versions of HBase.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16671) Split TestExportSnapshot

2016-09-22 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514524#comment-15514524
 ] 

Jonathan Hsieh commented on HBASE-16671:


+1 lgtm.

> Split TestExportSnapshot
> 
>
> Key: HBASE-16671
> URL: https://issues.apache.org/jira/browse/HBASE-16671
> Project: HBase
>  Issue Type: Test
>  Components: snapshots, test
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16671-v0.patch
>
>
> TestExportSnapshot contains 3 types of tests: 
>  - MiniCluster creating a table, taking a snapshot and running export
>  - Mocked snapshot running export
>  - tool helpers tests
> Since everything is currently packed into a single test, types 2 and 3 end 
> up with a before/after that creates a table and takes a snapshot which is 
> never used. Move those tests out and cut some time from the test. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16688) Split TestMasterFailoverWithProcedures

2016-09-22 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514516#comment-15514516
 ] 

Jonathan Hsieh commented on HBASE-16688:



MasterProcedureTestingUtility, TestMasterFailoverWithProcedures, and 
TestMasterProcedureWAL look fine content-wise.  Maybe rename 
TestMasterProcedureWAL to something that mentions lease recovery or fencing?

Are the changes to TestRegionServerMetrics intentionally included?


> Split TestMasterFailoverWithProcedures
> --
>
> Key: HBASE-16688
> URL: https://issues.apache.org/jira/browse/HBASE-16688
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, test
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-16688-v0.patch
>
>
> Extract the WAL lease tests from TestMasterFailoverWithProcedures, leaving 
> TestMasterFailoverWithProcedures with only the proc tests on master 
> failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16682) Fix Shell tests failure. NoClassDefFoundError for MiniKdc

2016-09-22 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514408#comment-15514408
 ] 

Appy edited comment on HBASE-16682 at 9/22/16 9:11 PM:
---

(edited)
[~zghaobac], I thought about it and realized later that the right fix would 
actually be adding the deps in --hbase-common--.
I have been playing with the pom for the last 30 min but can't seem to figure 
it out. The right fix is adding the minikdc deps in hbase-testing-util, but 
that's not fixing it.


was (Author: appy):
[~zghaobac], i thought about it and realized later that the right fix would 
actually be adding the deps in hbase-common.
Putting up the patch.
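For reference, the kind of test-scoped dependency addition being debated would look roughly like the fragment below in a module pom. This is only a sketch — which module should carry it (hbase-common vs. hbase-testing-util) and how the version is managed are exactly what the comment says is still unresolved; `hadoop-minikdc` is the Hadoop artifact that provides `org.apache.hadoop.minikdc.MiniKdc`.

```xml
<!-- Hypothetical sketch: expose MiniKdc to downstream test code by
     declaring it as a dependency of the chosen module. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-minikdc</artifactId>
  <scope>test</scope>
</dependency>
```

Note that a `test`-scoped dependency is not transitive, which is one reason adding it to hbase-testing-util alone may not fix the shell tests' NoClassDefFoundError.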

> Fix Shell tests failure. NoClassDefFoundError for MiniKdc
> -
>
> Key: HBASE-16682
> URL: https://issues.apache.org/jira/browse/HBASE-16682
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-16682.master.001.patch, 
> HBASE-16682.master.002.patch
>
>
> Stacktrace
> {noformat}
> java.lang.NoClassDefFoundError: org/apache/hadoop/minikdc/MiniKdc
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
>   at java.lang.Class.getDeclaredMethods(Class.java:1975)
>   at org.jruby.javasupport.JavaClass.getMethods(JavaClass.java:2110)
>   at org.jruby.javasupport.JavaClass.setupClassMethods(JavaClass.java:955)
>   at org.jruby.javasupport.JavaClass.access$700(JavaClass.java:99)
>   at 
> org.jruby.javasupport.JavaClass$ClassInitializer.initialize(JavaClass.java:650)
>   at org.jruby.javasupport.JavaClass.setupProxy(JavaClass.java:689)
>   at org.jruby.javasupport.Java.createProxyClass(Java.java:526)
>   at org.jruby.javasupport.Java.getProxyClass(Java.java:455)
>   at org.jruby.javasupport.Java.getInstance(Java.java:364)
>   at 
> org.jruby.javasupport.JavaUtil.convertJavaToUsableRubyObject(JavaUtil.java:166)
>   at 
> org.jruby.javasupport.JavaEmbedUtils.javaToRuby(JavaEmbedUtils.java:291)
>   at 
> org.jruby.embed.variable.AbstractVariable.updateByJavaObject(AbstractVariable.java:81)
>   at 
> org.jruby.embed.variable.GlobalVariable.(GlobalVariable.java:69)
>   at 
> org.jruby.embed.variable.GlobalVariable.getInstance(GlobalVariable.java:60)
>   at 
> org.jruby.embed.variable.VariableInterceptor.getVariableInstance(VariableInterceptor.java:97)
>   at org.jruby.embed.internal.BiVariableMap.put(BiVariableMap.java:321)
>   at org.jruby.embed.ScriptingContainer.put(ScriptingContainer.java:1123)
>   at 
> org.apache.hadoop.hbase.client.AbstractTestShell.setUpBeforeClass(AbstractTestShell.java:61)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:108)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:78)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:54)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:144)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> 

[jira] [Comment Edited] (HBASE-16683) Address review comments for mega patch of backup / restore feature

2016-09-22 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514158#comment-15514158
 ] 

Ted Yu edited comment on HBASE-16683 at 9/22/16 9:10 PM:
-

Checking in after running backup tests.

Will keep the JIRA open for further comments.


was (Author: yuzhih...@gmail.com):
Clarification: waiting for more review comments before checking in.

> Address review comments for mega patch of backup / restore feature
> --
>
> Key: HBASE-16683
> URL: https://issues.apache.org/jira/browse/HBASE-16683
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>  Labels: backup
> Attachments: 16683.v1.txt
>
>
> There are review comments for the mega patch posted on HBASE-14123.
> See https://reviews.apache.org/r/51823/
> This issue is for addressing review comments in HBASE-7912 branch.
> Mega patch for master branch would be posted / tested on HBASE-14123.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-16687) Remove MaxPermSize from surefire/failsave command line

2016-09-22 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-16687.

   Resolution: Duplicate
 Assignee: (was: Andrew Purtell)
Fix Version/s: (was: 2.0.0)

> Remove MaxPermSize from surefire/failsave command line
> --
>
> Key: HBASE-16687
> URL: https://issues.apache.org/jira/browse/HBASE-16687
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>Priority: Trivial
> Attachments: HBASE-16687.patch
>
>
> Master branch requires Java 8, so we can eliminate the reason for these noisy 
> messages:
> {noformat}
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
> support was removed in 8.0
> {noformat}





[jira] [Commented] (HBASE-16687) Remove MaxPermSize from surefire/failsave command line

2016-09-22 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514459#comment-15514459
 ] 

Andrew Purtell commented on HBASE-16687:


Funny how that works, with different people coming to the same conclusion around 
the same time. Fine to close this as a dup.

> Remove MaxPermSize from surefire/failsave command line
> --
>
> Key: HBASE-16687
> URL: https://issues.apache.org/jira/browse/HBASE-16687
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-16687.patch
>
>
> Master branch requires Java 8, so we can eliminate the reason for these noisy 
> messages:
> {noformat}
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
> support was removed in 8.0
> {noformat}





[jira] [Updated] (HBASE-16688) Split TestMasterFailoverWithProcedures

2016-09-22 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-16688:

Status: Patch Available  (was: Open)

> Split TestMasterFailoverWithProcedures
> --
>
> Key: HBASE-16688
> URL: https://issues.apache.org/jira/browse/HBASE-16688
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, test
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-16688-v0.patch
>
>
> Extract the WAL lease tests from TestMasterFailoverWithProcedures, leaving 
> TestMasterFailoverWithProcedures with only the proc test on master failover.





[jira] [Commented] (HBASE-16681) Fix flaky TestReplicationSourceManagerZkImpl

2016-09-22 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514426#comment-15514426
 ] 

Appy commented on HBASE-16681:
--

[~ashu210890] confirmed that it's connected to the JIRA mentioned above.
https://issues.apache.org/jira/browse/HBASE-16096?focusedCommentId=15514351&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15514351

> Fix flaky TestReplicationSourceManagerZkImpl
> 
>
> Key: HBASE-16681
> URL: https://issues.apache.org/jira/browse/HBASE-16681
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Ashu Pachauri
>
> Stack Trace
> {noformat}
> java.io.IOException: java.lang.reflect.InvocationTargetException
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:233)
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:118)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.init(HBaseInterClusterReplicationEndpoint.java:119)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.getReplicationSource(ReplicationSourceManager.java:502)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.addSource(ReplicationSourceManager.java:273)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.init(ReplicationSourceManager.java:246)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.TestReplicationSourceManager.testLogRoll(TestReplicationSourceManager.java:228)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.reflect.InvocationTargetException: null
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:231)
>   at 
> 

[jira] [Commented] (HBASE-16630) Fragmentation in long running Bucket Cache

2016-09-22 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514424#comment-15514424
 ] 

Ted Yu commented on HBASE-16630:


{code}
+  Set candidateBuckets = bucketAllocator.getLeastFilledBuckets(
+  inUseBuckets, completelyFreeBucketsNeeded);
...
+   * @param excludedBuckets the buckets that need to be excluded due to
+   *currently being in used
{code}
In use buckets are excluded from freeing.
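The quoted snippet passes the in-use buckets as the exclusion set when asking for the least-filled buckets. A small, self-contained sketch of that selection follows; the names and signatures are hypothetical stand-ins, since the real getLeastFilledBuckets lives inside BucketAllocator and works on its internal bucket structures:

```java
import java.util.*;

/** Sketch of picking the least-filled buckets while excluding in-use ones.
 *  Names/signatures are hypothetical, not the actual BucketAllocator API. */
public class LeastFilledSketch {
  /** occupancy maps bucket id to its occupied slot count; excludedBuckets
   *  are buckets currently in use and therefore not eligible for freeing. */
  static List<Integer> leastFilled(Map<Integer, Integer> occupancy,
                                   Set<Integer> excludedBuckets, int howMany) {
    List<Integer> ids = new ArrayList<>();
    for (Integer id : occupancy.keySet()) {
      // In-use buckets are excluded from freeing, per the javadoc above.
      if (!excludedBuckets.contains(id)) {
        ids.add(id);
      }
    }
    // Least occupied first: these are the cheapest buckets to empty out.
    ids.sort(Comparator.comparingInt(occupancy::get));
    return new ArrayList<>(ids.subList(0, Math.min(howMany, ids.size())));
  }
}
```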

> Fragmentation in long running Bucket Cache
> --
>
> Key: HBASE-16630
> URL: https://issues.apache.org/jira/browse/HBASE-16630
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3
>Reporter: deepankar
>Assignee: deepankar
> Attachments: 16630-v2-suggest.patch, HBASE-16630-v2.patch, 
> HBASE-16630-v3.patch, HBASE-16630.patch
>
>
> As we have been running the bucket cache for a long time in our system, we are 
> observing cases where some nodes, after some time, do not fully utilize the 
> bucket cache; in some cases it is even worse, in that they get stuck at a 
> value < 0.25% of the bucket cache (DEFAULT_MEMORY_FACTOR, as all our tables 
> are configured in-memory for simplicity's sake).
> We took a heap dump, analyzed what is happening, and saw that it is a classic 
> case of fragmentation. The current implementation of BucketCache (mainly 
> BucketAllocator) relies on the logic that fullyFreeBuckets are available for 
> switching/adjusting cache usage between different bucketSizes. But once a 
> compaction / bulkload happens and blocks are evicted from a bucket size, 
> they are usually evicted from random places in the buckets of that 
> bucketSize, thus locking up the number of buckets associated with it. In the 
> worst cases of fragmentation we have seen some bucketSizes with an occupancy 
> ratio of < 10%, yet without any completelyFreeBuckets to share with the 
> other bucketSizes.
> The existing eviction logic helps in the cases where cache usage is above 
> the MEMORY_FACTOR or MULTI_FACTOR; once those evictions are done, the 
> eviction (freeSpace function) will not evict anything more, and cache 
> utilization will be stuck at that value without any allocations for other 
> required sizes.
> The fix we came up with is simple: defragment (compact) the bucketSize, 
> increasing the occupancy ratio and also freeing up buckets to be fully free. 
> This logic itself is not complicated, since the bucketAllocator takes care 
> of packing the blocks into buckets; we need to evict and re-allocate the 
> blocks for all the bucketSizes that don't fit the criteria.
> I am attaching an initial patch just to give an idea of what we are 
> thinking, and I'll improve it based on comments from the community.
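The defragmentation idea described above amounts to re-packing the live blocks of one bucketSize into as few buckets as possible, so the remainder become completely free buckets that other bucket sizes can claim. A minimal sketch of just the arithmetic (hypothetical names; the real work happens inside BucketAllocator, which also has to relocate the block bytes):

```java
import java.util.*;

/** Sketch of the bucket-count arithmetic behind defragmentation.
 *  Names are hypothetical; this is not the actual BucketAllocator code. */
public class DefragSketch {
  /** occupied holds the number of used slots in each bucket of one
   *  bucketSize; capacity is the slot count per bucket. Returns how many
   *  buckets become completely free after compacting the live blocks. */
  static int defragment(List<Integer> occupied, int capacity) {
    int totalBlocks = 0;
    for (int o : occupied) {
      totalBlocks += o;
    }
    // After compaction only ceil(totalBlocks / capacity) buckets are needed.
    int needed = (totalBlocks + capacity - 1) / capacity;
    return occupied.size() - needed;
  }

  public static void main(String[] args) {
    // Fragmented: 4 buckets, each sparsely occupied, none completely free.
    List<Integer> buckets = Arrays.asList(3, 2, 1, 2);
    // 8 live blocks fit into one bucket of capacity 8, freeing 3 buckets.
    System.out.println(defragment(buckets, 8));
  }
}
```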





[jira] [Updated] (HBASE-16688) Split TestMasterFailoverWithProcedures

2016-09-22 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-16688:

Attachment: HBASE-16688-v0.patch

> Split TestMasterFailoverWithProcedures
> --
>
> Key: HBASE-16688
> URL: https://issues.apache.org/jira/browse/HBASE-16688
> Project: HBase
>  Issue Type: Bug
>  Components: proc-v2, test
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-16688-v0.patch
>
>
> Extract the WAL lease tests from TestMasterFailoverWithProcedures, leaving 
> TestMasterFailoverWithProcedures with only the proc test on master failover.





[jira] [Updated] (HBASE-16681) Fix flaky TestReplicationSourceManagerZkImpl

2016-09-22 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-16681:
-
Assignee: Ashu Pachauri

> Fix flaky TestReplicationSourceManagerZkImpl
> 
>
> Key: HBASE-16681
> URL: https://issues.apache.org/jira/browse/HBASE-16681
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Ashu Pachauri
>
> Stack Trace
> {noformat}
> java.io.IOException: java.lang.reflect.InvocationTargetException
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:233)
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:118)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.init(HBaseInterClusterReplicationEndpoint.java:119)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.getReplicationSource(ReplicationSourceManager.java:502)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.addSource(ReplicationSourceManager.java:273)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.init(ReplicationSourceManager.java:246)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.TestReplicationSourceManager.testLogRoll(TestReplicationSourceManager.java:228)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.reflect.InvocationTargetException: null
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:231)
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:118)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.init(HBaseInterClusterReplicationEndpoint.java:119)
>   at 
> 

[jira] [Created] (HBASE-16688) Split TestMasterFailoverWithProcedures

2016-09-22 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-16688:
---

 Summary: Split TestMasterFailoverWithProcedures
 Key: HBASE-16688
 URL: https://issues.apache.org/jira/browse/HBASE-16688
 Project: HBase
  Issue Type: Bug
  Components: proc-v2, test
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
 Fix For: 2.0.0


Extract the WAL lease tests from TestMasterFailoverWithProcedures, leaving 
TestMasterFailoverWithProcedures with only the proc test on master failover.





[jira] [Updated] (HBASE-16682) Fix Shell tests failure. NoClassDefFoundError for MiniKdc

2016-09-22 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-16682:
-
Attachment: HBASE-16682.master.002.patch

> Fix Shell tests failure. NoClassDefFoundError for MiniKdc
> -
>
> Key: HBASE-16682
> URL: https://issues.apache.org/jira/browse/HBASE-16682
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-16682.master.001.patch, 
> HBASE-16682.master.002.patch
>
>
> Stacktrace
> {noformat}
> java.lang.NoClassDefFoundError: org/apache/hadoop/minikdc/MiniKdc
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
>   at java.lang.Class.getDeclaredMethods(Class.java:1975)
>   at org.jruby.javasupport.JavaClass.getMethods(JavaClass.java:2110)
>   at org.jruby.javasupport.JavaClass.setupClassMethods(JavaClass.java:955)
>   at org.jruby.javasupport.JavaClass.access$700(JavaClass.java:99)
>   at 
> org.jruby.javasupport.JavaClass$ClassInitializer.initialize(JavaClass.java:650)
>   at org.jruby.javasupport.JavaClass.setupProxy(JavaClass.java:689)
>   at org.jruby.javasupport.Java.createProxyClass(Java.java:526)
>   at org.jruby.javasupport.Java.getProxyClass(Java.java:455)
>   at org.jruby.javasupport.Java.getInstance(Java.java:364)
>   at 
> org.jruby.javasupport.JavaUtil.convertJavaToUsableRubyObject(JavaUtil.java:166)
>   at 
> org.jruby.javasupport.JavaEmbedUtils.javaToRuby(JavaEmbedUtils.java:291)
>   at 
> org.jruby.embed.variable.AbstractVariable.updateByJavaObject(AbstractVariable.java:81)
>   at 
> org.jruby.embed.variable.GlobalVariable.<init>(GlobalVariable.java:69)
>   at 
> org.jruby.embed.variable.GlobalVariable.getInstance(GlobalVariable.java:60)
>   at 
> org.jruby.embed.variable.VariableInterceptor.getVariableInstance(VariableInterceptor.java:97)
>   at org.jruby.embed.internal.BiVariableMap.put(BiVariableMap.java:321)
>   at org.jruby.embed.ScriptingContainer.put(ScriptingContainer.java:1123)
>   at 
> org.apache.hadoop.hbase.client.AbstractTestShell.setUpBeforeClass(AbstractTestShell.java:61)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:108)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:78)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:54)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:144)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.minikdc.MiniKdc
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at 

[jira] [Commented] (HBASE-16630) Fragmentation in long running Bucket Cache

2016-09-22 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514407#comment-15514407
 ] 

Vladimir Rodionov commented on HBASE-16630:
---

[~dvdreddy]
OK, I was not right: I do not see any significant memory overhead. But I have 
another concern.

You collect all the buckets that have active blocks (refCount != 0), then clear 
them up. This is counterintuitive: you evict the Most Recently Used blocks?

> Fragmentation in long running Bucket Cache
> --
>
> Key: HBASE-16630
> URL: https://issues.apache.org/jira/browse/HBASE-16630
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3
>Reporter: deepankar
>Assignee: deepankar
> Attachments: 16630-v2-suggest.patch, HBASE-16630-v2.patch, 
> HBASE-16630-v3.patch, HBASE-16630.patch
>
>
> As we have been running the bucket cache for a long time in our system, we are 
> observing cases where some nodes, after some time, do not fully utilize the 
> bucket cache; in some cases it is even worse, in that they get stuck at a 
> value < 0.25% of the bucket cache (DEFAULT_MEMORY_FACTOR, as all our tables 
> are configured in-memory for simplicity's sake).
> We took a heap dump, analyzed what is happening, and saw that it is a classic 
> case of fragmentation. The current implementation of BucketCache (mainly 
> BucketAllocator) relies on the logic that fullyFreeBuckets are available for 
> switching/adjusting cache usage between different bucketSizes. But once a 
> compaction / bulkload happens and blocks are evicted from a bucket size, 
> they are usually evicted from random places in the buckets of that 
> bucketSize, thus locking up the number of buckets associated with it. In the 
> worst cases of fragmentation we have seen some bucketSizes with an occupancy 
> ratio of < 10%, yet without any completelyFreeBuckets to share with the 
> other bucketSizes.
> The existing eviction logic helps in the cases where cache usage is above 
> the MEMORY_FACTOR or MULTI_FACTOR; once those evictions are done, the 
> eviction (freeSpace function) will not evict anything more, and cache 
> utilization will be stuck at that value without any allocations for other 
> required sizes.
> The fix we came up with is simple: defragment (compact) the bucketSize, 
> increasing the occupancy ratio and also freeing up buckets to be fully free. 
> This logic itself is not complicated, since the bucketAllocator takes care 
> of packing the blocks into buckets; we need to evict and re-allocate the 
> blocks for all the bucketSizes that don't fit the criteria.
> I am attaching an initial patch just to give an idea of what we are 
> thinking, and I'll improve it based on comments from the community.





[jira] [Commented] (HBASE-16682) Fix Shell tests failure. NoClassDefFoundError for MiniKdc

2016-09-22 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514408#comment-15514408
 ] 

Appy commented on HBASE-16682:
--

[~zghaobac], I thought about it and realized later that the right fix would 
actually be adding the deps in hbase-common.
Putting up the patch.

> Fix Shell tests failure. NoClassDefFoundError for MiniKdc
> -
>
> Key: HBASE-16682
> URL: https://issues.apache.org/jira/browse/HBASE-16682
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-16682.master.001.patch, 
> HBASE-16682.master.002.patch
>
>
> Stacktrace
> {noformat}
> java.lang.NoClassDefFoundError: org/apache/hadoop/minikdc/MiniKdc
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
>   at java.lang.Class.getDeclaredMethods(Class.java:1975)
>   at org.jruby.javasupport.JavaClass.getMethods(JavaClass.java:2110)
>   at org.jruby.javasupport.JavaClass.setupClassMethods(JavaClass.java:955)
>   at org.jruby.javasupport.JavaClass.access$700(JavaClass.java:99)
>   at 
> org.jruby.javasupport.JavaClass$ClassInitializer.initialize(JavaClass.java:650)
>   at org.jruby.javasupport.JavaClass.setupProxy(JavaClass.java:689)
>   at org.jruby.javasupport.Java.createProxyClass(Java.java:526)
>   at org.jruby.javasupport.Java.getProxyClass(Java.java:455)
>   at org.jruby.javasupport.Java.getInstance(Java.java:364)
>   at 
> org.jruby.javasupport.JavaUtil.convertJavaToUsableRubyObject(JavaUtil.java:166)
>   at 
> org.jruby.javasupport.JavaEmbedUtils.javaToRuby(JavaEmbedUtils.java:291)
>   at 
> org.jruby.embed.variable.AbstractVariable.updateByJavaObject(AbstractVariable.java:81)
>   at 
> org.jruby.embed.variable.GlobalVariable.<init>(GlobalVariable.java:69)
>   at 
> org.jruby.embed.variable.GlobalVariable.getInstance(GlobalVariable.java:60)
>   at 
> org.jruby.embed.variable.VariableInterceptor.getVariableInstance(VariableInterceptor.java:97)
>   at org.jruby.embed.internal.BiVariableMap.put(BiVariableMap.java:321)
>   at org.jruby.embed.ScriptingContainer.put(ScriptingContainer.java:1123)
>   at 
> org.apache.hadoop.hbase.client.AbstractTestShell.setUpBeforeClass(AbstractTestShell.java:61)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:108)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:78)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:54)
>   at 
> org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:144)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.minikdc.MiniKdc
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at 

[jira] [Commented] (HBASE-16656) BackupID must include backup set name

2016-09-22 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15514386#comment-15514386
 ] 

Ted Yu commented on HBASE-16656:


{code}
+if(setName == null) {
+  setName = "backup";
{code}
Should "backup" be defined as a constant?
{code}
+  if(name1.equals(name2)) {
{code}
nit: insert space between if and '('
{code}
+String backupId = (setName == null? BackupRestoreConstants.BACKUPID_PREFIX: setName) +
+"_"+EnvironmentEdgeManager.currentTime();
{code}
If BACKUPID_PREFIX is used, there would be a double underscore.
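The double-underscore concern can be seen with a tiny sketch of the backup-id construction from the quoted diff. The names here are hypothetical stand-ins for BackupRestoreConstants.BACKUPID_PREFIX and EnvironmentEdgeManager.currentTime(); the point is only that a prefix already ending in '_' plus the "_" + timestamp concatenation yields "__":

```java
/** Sketch of the backup-id construction under review; names are
 *  hypothetical stand-ins, not the actual HBase backup API. */
public class BackupIdSketch {
  static String backupId(String setName, String defaultPrefix, long timestamp) {
    // Mirrors: (setName == null ? BACKUPID_PREFIX : setName) + "_" + currentTime()
    String prefix = (setName == null) ? defaultPrefix : setName;
    return prefix + "_" + timestamp;
  }

  public static void main(String[] args) {
    // If the default prefix already ends with '_', the id contains "__".
    System.out.println(backupId(null, "backup_", 1474588800000L));
    System.out.println(backupId("SomeSetName", "backup_", 1474588800000L));
  }
}
```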

> BackupID must include backup set name
> -
>
> Key: HBASE-16656
> URL: https://issues.apache.org/jira/browse/HBASE-16656
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Attachments: HBASE-16656-v1.patch
>
>
> Default backup set name is "backup". If we do backup for a backup set 
> "SomeSetName", by default, backup id will be generated in a form:
>  *SomeSetName_timestamp*.
> The goal is to separate backup images between different sets. 
> The history command will be updated and the new command - merge will use this 
> backup names (prefixes)





[jira] [Commented] (HBASE-16687) Remove MaxPermSize from surefire/failsave command line

2016-09-22 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514378#comment-15514378
 ] 

Jerry He commented on HBASE-16687:
--

HBASE-16667?

> Remove MaxPermSize from surefire/failsave command line
> --
>
> Key: HBASE-16687
> URL: https://issues.apache.org/jira/browse/HBASE-16687
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-16687.patch
>
>
> Master branch requires Java 8, so we can eliminate the reason for these noisy 
> messages:
> {noformat}
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
> support was removed in 8.0
> {noformat}





[jira] [Commented] (HBASE-16096) Replication keeps accumulating znodes

2016-09-22 Thread Ashu Pachauri (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514351#comment-15514351
 ] 

Ashu Pachauri commented on HBASE-16096:
---

[~appy] Yeah, you got it right. The test asks the ReplicationSourceManager to 
remove the peer but never cleans it up from ZooKeeper. Looking at the patch 
now, I also notice that a couple of steps in the add_peer workflow are 
redundant. I can clean the test up; it should be a tiny change.

> Replication keeps accumulating znodes
> -
>
> Key: HBASE-16096
> URL: https://issues.apache.org/jira/browse/HBASE-16096
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16096-branch-1.patch, HBASE-16096.patch
>
>
> If there is an error while creating the replication source when adding a 
> peer, the source is not added to the in-memory list of sources but the 
> replication peer is. 
> However, in that scenario, when you remove the peer, it is deleted from 
> zookeeper successfully, but before removing it from the in-memory list of 
> peers we wait for the corresponding sources to get deleted (which, as said 
> above, don't exist because of the error creating the source). 
> The problem here is the ordering of operations for adding/removing the 
> source and peer. 
> Modifying the code to always remove queues from the underlying storage, even 
> if no sources exist, also requires a small refactoring of 
> TableBasedReplicationQueuesImpl so it does not abort on removeQueues() of an 
> empty queue.
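The ordering fix described above can be sketched as follows. All types here are simplified, hypothetical stand-ins for the real ReplicationSourceManager and queue-storage classes; the point is only the order of operations:

```java
// Sketch of the suggested ordering: on peer removal, always delete the
// persisted queues first, then drop any in-memory source, so a missing
// source (left over from a failed add_peer) cannot block cleanup.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class PeerRemovalSketch {
    final Set<String> sources = new HashSet<>();            // in-memory sources
    final Map<String, Object> peerQueues = new HashMap<>(); // persisted queues

    void removePeer(String peerId) {
        // Remove persisted queues unconditionally, even when no source exists.
        peerQueues.remove(peerId);
        // Then drop the in-memory source; its absence is not an error.
        sources.remove(peerId);
    }
}
```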





[jira] [Commented] (HBASE-16423) Add re-compare option to VerifyReplication to avoid occasional inconsistent rows

2016-09-22 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514335#comment-15514335
 ] 

Ted Yu commented on HBASE-16423:


lgtm

> Add re-compare option to VerifyReplication to avoid occasional inconsistent 
> rows
> 
>
> Key: HBASE-16423
> URL: https://issues.apache.org/jira/browse/HBASE-16423
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Priority: Minor
> Attachments: HBASE-16423-v1.patch, HBASE-16423-v2.patch
>
>
> Because replication is eventually consistent, VerifyReplication may report 
> inconsistent rows if data is being written to the source or peer clusters 
> during scanning. These occasionally inconsistent rows will have the same 
> data if we do the comparison again after a short period. It is not easy to 
> find the truly inconsistent rows if VerifyReplication reports a large number 
> of such occasional inconsistencies. To avoid this, we can add an option to 
> make VerifyReplication read the inconsistent rows again after sleeping a few 
> seconds and re-compare them during scanning. This behavior follows the 
> eventual consistency of hbase's replication. Suggestions and discussions are 
> welcome.
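The re-compare idea described above can be sketched as follows. The `Supplier`-based reads stand in for the real source/peer cluster scans, and the retry count and sleep interval are hypothetical parameters, not the patch's actual option names:

```java
// Hedged sketch of the re-compare option: on a mismatch, sleep briefly and
// re-read both sides before flagging the row as truly inconsistent.
import java.util.Objects;
import java.util.function.Supplier;

public class RecompareExample {
    static boolean verifyRow(Supplier<String> source, Supplier<String> peer,
                             int retries, long sleepMs) {
        for (int i = 0; i <= retries; i++) {
            if (Objects.equals(source.get(), peer.get())) {
                return true;            // values converged; not a real inconsistency
            }
            if (i < retries) {
                try {
                    Thread.sleep(sleepMs);  // give replication time to catch up
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
        return false;                   // still different after re-compares
    }
}
```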





[jira] [Updated] (HBASE-16423) Add re-compare option to VerifyReplication to avoid occasional inconsistent rows

2016-09-22 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16423:
---
Status: Patch Available  (was: Open)

> Add re-compare option to VerifyReplication to avoid occasional inconsistent 
> rows
> 
>
> Key: HBASE-16423
> URL: https://issues.apache.org/jira/browse/HBASE-16423
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 2.0.0
>Reporter: Jianwei Cui
>Priority: Minor
> Attachments: HBASE-16423-v1.patch, HBASE-16423-v2.patch
>
>
> Because replication is eventually consistent, VerifyReplication may report 
> inconsistent rows if data is being written to the source or peer clusters 
> during scanning. These occasionally inconsistent rows will have the same 
> data if we do the comparison again after a short period. It is not easy to 
> find the truly inconsistent rows if VerifyReplication reports a large number 
> of such occasional inconsistencies. To avoid this, we can add an option to 
> make VerifyReplication read the inconsistent rows again after sleeping a few 
> seconds and re-compare them during scanning. This behavior follows the 
> eventual consistency of hbase's replication. Suggestions and discussions are 
> welcome.





[jira] [Commented] (HBASE-15560) TinyLFU-based BlockCache

2016-09-22 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514311#comment-15514311
 ] 

Ben Manes commented on HBASE-15560:
---

I think that I addressed all of the comments, except where noted as unclear. 
Please take another look when you have a chance.

> TinyLFU-based BlockCache
> 
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Affects Versions: 2.0.0
>Reporter: Ben Manes
>Assignee: Ben Manes
> Attachments: HBASE-15560.patch, tinylfu.patch
>
>
> LruBlockCache uses the Segmented LRU (SLRU) policy to capture frequency and 
> recency of the working set. It achieves concurrency by using an O(n) 
> background thread to prioritize the entries and evict. Accessing an entry is 
> O(1) via a hash table lookup, recording its logical access time, and setting a 
> frequency flag. A write is performed in O(1) time by updating the hash table 
> and triggering an async eviction thread. This provides ideal concurrency and 
> minimizes latencies by penalizing the thread instead of the caller. 
> However, the policy does not age the frequencies and may not be resilient to 
> various workload patterns.
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the 
> frequency in a counting sketch, ages periodically by halving the counters, 
> and orders entries by SLRU. An entry is discarded by comparing the frequency 
> of the new arrival (candidate) to the SLRU's victim, and keeping the one with 
> the higher frequency. This allows the operations to be performed in O(1) 
> time and, through the use of a compact sketch, a much larger history is 
> retained beyond the current working set. In a variety of real-world traces 
> the policy had [near optimal hit 
> rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
> Concurrency is achieved by buffering and replaying the operations, similar to 
> a write-ahead log. A read is recorded into a striped ring buffer and a write 
> into a queue. The operations are applied in batches under a try-lock by an 
> asynchronous thread, thereby tracking the usage pattern without incurring high 
> latencies 
> ([benchmarks|https://github.com/ben-manes/caffeine/wiki/Benchmarks#server-class]).
> In YCSB benchmarks the results were inconclusive. For a large cache (99% hit 
> rates) the two caches have near-identical throughput and latencies, with 
> LruBlockCache narrowly winning. At medium and small cache sizes, TinyLFU had a 
> 1-4% hit rate improvement and therefore lower latencies. The lackluster 
> result is because a synthetic Zipfian distribution is used, on which SLRU 
> performs optimally. In a more varied, real-world workload we'd expect to see 
> improvements from being able to make smarter predictions.
> The provided patch implements BlockCache using the 
> [Caffeine|https://github.com/ben-manes/caffeine] caching library (see 
> HighScalability 
> [article|http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html]).
> Edward Bortnikov and Eshcar Hillel have graciously provided guidance for 
> evaluating this patch ([github 
> branch|https://github.com/ben-manes/hbase/tree/tinylfu]).
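The W-TinyLFU admission rule described above can be reduced to a minimal sketch. Here the compact 4-bit counting sketch is replaced by a plain array of saturating counters keyed by hash; the real Caffeine sketch is far more space-efficient and also ages counters by periodic halving, neither of which this toy version does:

```java
// Minimal sketch of TinyLFU admission: on eviction, keep whichever of the
// SLRU victim and the new candidate has the higher estimated frequency.
public class TinyLfuSketchExample {
    private final int[] counters = new int[1024];

    // Record one access, saturating like a 4-bit counter.
    void record(Object key) {
        int i = Math.floorMod(key.hashCode(), counters.length);
        if (counters[i] < 15) counters[i]++;
    }

    int frequency(Object key) {
        return counters[Math.floorMod(key.hashCode(), counters.length)];
    }

    /** Admission decision: admit the candidate only if it is seen more often. */
    boolean admit(Object candidate, Object victim) {
        return frequency(candidate) > frequency(victim);
    }
}
```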





[jira] [Commented] (HBASE-16630) Fragmentation in long running Bucket Cache

2016-09-22 Thread deepankar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514299#comment-15514299
 ] 

deepankar commented on HBASE-16630:
---

Which map are you referring to, [~vrodionov]? I am constructing a couple of 
sets which are at most O(bucket count).

> Fragmentation in long running Bucket Cache
> --
>
> Key: HBASE-16630
> URL: https://issues.apache.org/jira/browse/HBASE-16630
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3
>Reporter: deepankar
>Assignee: deepankar
> Attachments: 16630-v2-suggest.patch, HBASE-16630-v2.patch, 
> HBASE-16630-v3.patch, HBASE-16630.patch
>
>
> As we have been running the bucket cache for a long time in our system, we are 
> observing cases where some nodes, after some time, do not fully utilize the 
> bucket cache; in some cases it is even worse, in that they get stuck at a 
> value < 0.25% of the bucket cache (DEFAULT_MEMORY_FACTOR, as all our tables 
> are configured in-memory for simplicity's sake).
> We took a heap dump, analyzed what is happening, and saw that it is a classic 
> case of fragmentation. The current implementation of BucketCache (mainly 
> BucketAllocator) relies on the logic that fullyFreeBuckets are available for 
> switching/adjusting cache usage between different bucketSizes. But once a 
> compaction/bulkload happens and the blocks are evicted from a bucket size, 
> they are usually evicted from random places in the buckets of that bucketSize, 
> thus locking the number of buckets associated with a bucketSize. In the worst 
> cases of fragmentation we have seen some bucketSizes with an occupancy ratio 
> of < 10%, yet without any completelyFreeBuckets to share with the other 
> bucketSizes. 
> Currently the existing eviction logic helps in the cases where the cache used 
> is more than the MEMORY_FACTOR or MULTI_FACTOR, and once those evictions are 
> done, the eviction (freeSpace function) will not evict anything and the cache 
> utilization will be stuck at that value without any allocations for other 
> required sizes.
> The fix we came up with is simple: we defragment (compact) the bucketSize, 
> thus increasing the occupancy ratio and also freeing up buckets to be fully 
> free. This logic itself is not complicated, since the bucketAllocator takes 
> care of packing the blocks into the buckets; we need to evict and re-allocate 
> the blocks for all the bucketSizes that don't fit the criteria.
> I am attaching an initial patch just to give an idea of what we are thinking, 
> and I'll improve it based on comments from the community.
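The trigger condition for the defragmentation described above can be sketched as follows. The occupancy threshold and the map-based interface are hypothetical stand-ins for the BucketAllocator internals, meant only to illustrate "pick the bucket sizes whose blocks should be evicted and re-allocated":

```java
// Illustrative sketch: select bucket sizes whose occupancy has fallen
// below a threshold, so their blocks can be evicted and re-inserted,
// letting the allocator repack them and free whole buckets for other sizes.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class DefragSketch {
    static final double MIN_OCCUPANCY = 0.10; // defrag sizes below 10% occupancy

    /** Returns the bucket sizes that should be compacted. */
    static List<Integer> sizesToDefrag(Map<Integer, Double> occupancyBySize) {
        List<Integer> result = new ArrayList<>();
        for (Map.Entry<Integer, Double> e : occupancyBySize.entrySet()) {
            if (e.getValue() < MIN_OCCUPANCY) {
                result.add(e.getKey()); // evict + re-allocate blocks of this size
            }
        }
        return result;
    }
}
```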





[jira] [Updated] (HBASE-16630) Fragmentation in long running Bucket Cache

2016-09-22 Thread deepankar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

deepankar updated HBASE-16630:
--
Attachment: HBASE-16630-v3.patch

Sorry for the delay in adding the suggestion, [~tedyu]. I attached a patch 
which, in addition to your suggestions, contains a couple of import fixes. I 
also tested the patch on a couple of machines and everything looked fine. We 
are doing a cluster-wide deploy today and will report on the results.

> Fragmentation in long running Bucket Cache
> --
>
> Key: HBASE-16630
> URL: https://issues.apache.org/jira/browse/HBASE-16630
> Project: HBase
>  Issue Type: Bug
>  Components: BucketCache
>Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3
>Reporter: deepankar
>Assignee: deepankar
> Attachments: 16630-v2-suggest.patch, HBASE-16630-v2.patch, 
> HBASE-16630-v3.patch, HBASE-16630.patch
>
>
> As we have been running the bucket cache for a long time in our system, we are 
> observing cases where some nodes, after some time, do not fully utilize the 
> bucket cache; in some cases it is even worse, in that they get stuck at a 
> value < 0.25% of the bucket cache (DEFAULT_MEMORY_FACTOR, as all our tables 
> are configured in-memory for simplicity's sake).
> We took a heap dump, analyzed what is happening, and saw that it is a classic 
> case of fragmentation. The current implementation of BucketCache (mainly 
> BucketAllocator) relies on the logic that fullyFreeBuckets are available for 
> switching/adjusting cache usage between different bucketSizes. But once a 
> compaction/bulkload happens and the blocks are evicted from a bucket size, 
> they are usually evicted from random places in the buckets of that bucketSize, 
> thus locking the number of buckets associated with a bucketSize. In the worst 
> cases of fragmentation we have seen some bucketSizes with an occupancy ratio 
> of < 10%, yet without any completelyFreeBuckets to share with the other 
> bucketSizes. 
> Currently the existing eviction logic helps in the cases where the cache used 
> is more than the MEMORY_FACTOR or MULTI_FACTOR, and once those evictions are 
> done, the eviction (freeSpace function) will not evict anything and the cache 
> utilization will be stuck at that value without any allocations for other 
> required sizes.
> The fix we came up with is simple: we defragment (compact) the bucketSize, 
> thus increasing the occupancy ratio and also freeing up buckets to be fully 
> free. This logic itself is not complicated, since the bucketAllocator takes 
> care of packing the blocks into the buckets; we need to evict and re-allocate 
> the blocks for all the bucketSizes that don't fit the criteria.
> I am attaching an initial patch just to give an idea of what we are thinking, 
> and I'll improve it based on comments from the community.





[jira] [Commented] (HBASE-16672) Add option for bulk load to copy hfile(s) instead of renaming

2016-09-22 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514292#comment-15514292
 ] 

Ted Yu commented on HBASE-16672:


The test failures were not related.
I ran a few tests with the patch, which passed:
{code}
Running org.apache.hadoop.hbase.client.TestHCM
Tests run: 24, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 324.538 sec - 
in org.apache.hadoop.hbase.client.TestHCM
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
support was removed in 8.0
Running org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 128.417 sec - 
in org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient
{code}

> Add option for bulk load to copy hfile(s) instead of renaming
> -
>
> Key: HBASE-16672
> URL: https://issues.apache.org/jira/browse/HBASE-16672
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 16672.v1.txt, 16672.v2.txt, 16672.v3.txt, 16672.v4.txt, 
> 16672.v5.txt, 16672.v6.txt
>
>
> This is related to HBASE-14417, to support incrementally restoring to 
> multiple destinations, this issue adds option which would always copy 
> hfile(s) during bulk load.





[jira] [Commented] (HBASE-12088) Remove un-used profiles in non-root poms

2016-09-22 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15514272#comment-15514272
 ] 

Nick Dimiduk commented on HBASE-12088:
--

Likewise for 1.1, assuming it lands in 1.2.

> Remove un-used profiles in non-root poms
> 
>
> Key: HBASE-12088
> URL: https://issues.apache.org/jira/browse/HBASE-12088
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Jonathan Hsieh
> Fix For: 2.0.0, 1.4.0
>
> Attachments: hbase-12088.v0.patch
>
>
> Some of the poms still have hadoop 1 and hadoop 1.1 profiles even though the 
> root pom has them removed.




