[jira] [Commented] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372286#comment-15372286
 ] 

Hudson commented on HBASE-16081:


FAILURE: Integrated in HBase-1.3 #778 (See 
[https://builds.apache.org/job/HBase-1.3/778/])
HBASE-16081 Wait for Replication Tasks to complete before killing the (antonov: 
rev 0fda2bc9e7cbd58d4e67d0e9dcc420bc7ea98eab)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/ReplicationEndpoint.java


> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081-v5.patch, HBASE-16081-v6.patch, 
> HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But under a certain race condition the 
> underlying executor gets shut down and CompletionService#take blocks 
> forever, which means the remove_peer call never finishes gracefully.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are unable 
> to roll WALs in such a situation and end up with gigantic WALs.
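
For illustration, here is a minimal, self-contained sketch of the hang 
described above -- not the actual HBaseInterClusterReplicationEndpoint code; 
the class name and the bounded-poll workaround are only there to show why a 
blocking CompletionService#take can never return once the backing executor 
has been shut down:
{code}
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class CompletionServiceHangDemo {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    ExecutorCompletionService<Integer> cs = new ExecutorCompletionService<>(pool);

    cs.submit(() -> 1);   // the only task that will ever complete
    cs.take().get();      // drains the completion queue

    pool.shutdownNow();   // the race from the description: the executor is gone

    // A second cs.take() here would block forever -- nothing can ever be
    // added to the completion queue again. A bounded poll keeps control:
    Future<Integer> f = cs.poll(5, TimeUnit.SECONDS);
    System.out.println(f == null ? "nothing completed, giving up" : "got " + f.get());
  }
}
{code}
The committed fix above takes the other route: wait for the in-flight 
replication tasks to complete before tearing the executor down.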





[jira] [Updated] (HBASE-16142) Trigger JFR session when under duress -- e.g. backed-up request queue count -- and dump the recording to log dir

2016-07-11 Thread Konstantin Ryakhovskiy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Ryakhovskiy updated HBASE-16142:
---
Attachment: HBASE-16142.master.002.patch

modified imports

> Trigger JFR session when under duress -- e.g. backed-up request queue count 
> -- and dump the recording to log dir
> 
>
> Key: HBASE-16142
> URL: https://issues.apache.org/jira/browse/HBASE-16142
> Project: HBase
>  Issue Type: Task
>  Components: Operability
>Reporter: stack
>Assignee: Konstantin Ryakhovskiy
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-16142.master.001.patch, 
> HBASE-16142.master.002.patch
>
>
> Chatting today w/ a mighty hbase operator about how to figure out what is 
> happening during a transitory latency spike or any other transitory 
> 'weirdness' in a server, the idea came up that a Java Flight Recording taken 
> during a spike would give a pretty good picture of what is going on during 
> the time of duress (more ideal would be a trace of the explicit slow queries 
> showing call stacks with timings dumped to a sink for later review; i.e. 
> trigger an htrace when a query is slow...).
> Taking a look, programmatically triggering a JFR recording seems doable, if 
> awkward (MBean invocations). There is even a means of specifying 'triggers' 
> based on any published mbean emission -- e.g. a query queue count threshold 
> -- which looks nice. See 
> https://community.oracle.com/thread/3676275?start=0&tstart=0 and 
> https://docs.oracle.com/javacomponents/jmc-5-4/jfr-runtime-guide/run.htm#JFRUH184
> This feature could start out as a blog post describing how to do it for one 
> server. Next could be a Canary plugin that looks at mbean values and, if one 
> is over a configured threshold, triggers a recording remotely. Finally, we 
> could integrate a couple of triggers that fire when issues arise, via the 
> trigger mechanism.
> Marking as beginner feature.
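
As a rough sketch of the "programmatic JFR trigger" idea (MBean invocations), 
something along these lines should work on an Oracle JDK 8 started with 
-XX:+UnlockCommercialFeatures -XX:+FlightRecorder. The recording name, 
duration, and output path are placeholders, and the exact argument strings 
should be checked against `jcmd <pid> help JFR.start` on the target JVM:
{code}
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Invoke the DiagnosticCommand MBean (the JMX face of jcmd) to start a
// recording and later dump it into the log directory.
public class JfrTrigger {
  public static void main(String[] args) throws Exception {
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    ObjectName dcmd = new ObjectName("com.sun.management:type=DiagnosticCommand");

    // Equivalent of: jcmd <pid> JFR.start name=duress duration=60s
    server.invoke(dcmd, "jfrStart",
        new Object[] { new String[] { "name=duress", "duration=60s" } },
        new String[] { String[].class.getName() });

    Thread.sleep(60_000); // let the recording cover the period of duress

    // Equivalent of: jcmd <pid> JFR.dump name=duress filename=...
    server.invoke(dcmd, "jfrDump",
        new Object[] { new String[] { "name=duress",
            "filename=/var/log/hbase/duress.jfr" } },
        new String[] { String[].class.getName() });
  }
}
{code}
A Canary plugin would do the same invocations remotely over a JMX connector 
once a watched mbean value crossed its threshold.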





[jira] [Commented] (HBASE-16212) Many Connections are created by wrong seeking pos on InputStream

2016-07-11 Thread Zhihua Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372280#comment-15372280
 ] 

Zhihua Deng commented on HBASE-16212:
-

The stack trace looks like this:

at org.apache.hadoop.hdfs.DFSInputStream.seek(DFSInputStream.java:1509)
at 
org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:62)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1357)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1591)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1470)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:437)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:708)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.isNextBlock(HFileReaderV2.java:833)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.positionForNextBlock(HFileReaderV2.java:828)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2._next(HFileReaderV2.java:845)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:865)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:139)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:108)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:596)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5486)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5637)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5424)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2395)

> Many Connections are created by wrong seeking pos on InputStream
> 
>
> Key: HBASE-16212
> URL: https://issues.apache.org/jira/browse/HBASE-16212
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Zhihua Deng
>
> As described in https://issues.apache.org/jira/browse/HDFS-8659, the datanode 
> is suffering from logging the same message repeatedly. After adding a log 
> statement to DFSInputStream, it outputs as follows:
> 2016-07-10 21:31:42,147 INFO  
> [B.defaultRpcServer.handler=22,queue=1,port=16020] hdfs.DFSClient: 
> DFSClient_NONMAPREDUCE_1984924661_1 seek 
> DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
>  for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
> 111506876, targetPos: 111506843
>  ...
> As the pos of this input stream is larger than targetPos (the pos we are 
> trying to seek to), a new connection to the datanode will be created and the 
> older one will be closed as a consequence. When such wrong seek ops are 
> frequent, the datanode's block scanner info messages spam the logs, and many 
> connections to the same datanode are created.
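
A simplified, self-contained model of the behaviour described above (this is 
not the actual org.apache.hadoop.hdfs.DFSInputStream code; the class and 
field names are invented): a seek to a position behind where the stream has 
already read cannot reuse the current datanode connection, so a new one is 
opened:
{code}
public class SeekBehindModel {
  private long pos = 111_506_876L;   // where the stream has already read to
  private int reconnects = 0;

  void seek(long targetPos) {
    if (targetPos < pos) {
      // backwards seek: drop the current datanode connection, open a new one
      reconnects++;
      System.out.println("seek back " + (pos - targetPos)
          + " bytes -> new datanode connection (#" + reconnects + ")");
    }
    pos = targetPos;
  }

  public static void main(String[] args) {
    SeekBehindModel in = new SeekBehindModel();
    in.seek(111_506_843L);  // 33 bytes behind, as in the log above
  }
}
{code}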





[jira] [Updated] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide

2016-07-11 Thread Xiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-16183:
-
Fix Version/s: 2.0.0
Affects Version/s: 1.2.0
   Status: Patch Available  (was: Open)

> Correct errors in example program of coprocessor in Ref Guide
> -
>
> Key: HBASE-16183
> URL: https://issues.apache.org/jira/browse/HBASE-16183
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.2.0
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16183-master-v1.patch, HBASE-16183.patch
>
>
> There are some errors in the example programs for coprocessor in Ref Guide. 
> Such as using deprecated APIs, generic...





[jira] [Comment Edited] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide

2016-07-11 Thread Xiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372276#comment-15372276
 ] 

Xiang Li edited comment on HBASE-16183 at 7/12/16 6:48 AM:
---

Hi [~jerryhe], [~carp84], would you please review the patch v1 at your earliest 
convenience?
The only divergence might be that I proposed removing the following code 
entirely when overriding preGetOp() in class RegionObserverExample in section 
90.1, due to HBASE-11871
{code}
List kvs = new ArrayList(results.size());
for (Cell c : results) {
  kvs.add(KeyValueUtil.ensureKeyValue(c));
}
preGet(e, get, kvs);
results.clear();
results.addAll(kvs);
{code}
so that it becomes
{code}
@Override
public void preGetOp(final ObserverContext e, 
final Get get, final List results)
throws IOException {

  if (Bytes.equals(get.getRow(),ADMIN)) {
Cell c = CellUtil.createCell(get.getRow(), COLUMN_FAMILY, COLUMN,
System.currentTimeMillis(), (byte)4, VALUE);
results.add(c);
e.bypass();
  }
}
{code}


was (Author: water):
Hi [~jerryhe], [~carp84], would you please review the patch v1 at your most 
convenience?
The only divergence could be that I proposed the following code could be all 
removed when overriding preGetOp() in class RegionObserverExample in section 
90.1
{code}
List kvs = new ArrayList(results.size());
for (Cell c : results) {
  kvs.add(KeyValueUtil.ensureKeyValue(c));
}
preGet(e, get, kvs);
results.clear();
results.addAll(kvs);
{code}
to make it as
{code}
@Override
public void preGetOp(final ObserverContext e, 
final Get get, final List results)
throws IOException {

  if (Bytes.equals(get.getRow(),ADMIN)) {
Cell c = CellUtil.createCell(get.getRow(),COLUMN _FAMILY, COLUMN,
System.currentTimeMillis(), (byte)4, VALUE);
results.add(c);
e.bypass();
  }
}
{code}

> Correct errors in example program of coprocessor in Ref Guide
> -
>
> Key: HBASE-16183
> URL: https://issues.apache.org/jira/browse/HBASE-16183
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-16183-master-v1.patch, HBASE-16183.patch
>
>
> There are some errors in the example programs for coprocessor in Ref Guide. 
> Such as using deprecated APIs, generic...





[jira] [Commented] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide

2016-07-11 Thread Xiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372276#comment-15372276
 ] 

Xiang Li commented on HBASE-16183:
--

Hi [~jerryhe], [~carp84], would you please review the patch v1 at your earliest 
convenience?
The only divergence might be that I proposed removing the following code 
entirely when overriding preGetOp() in class RegionObserverExample in section 
90.1
{code}
List kvs = new ArrayList(results.size());
for (Cell c : results) {
  kvs.add(KeyValueUtil.ensureKeyValue(c));
}
preGet(e, get, kvs);
results.clear();
results.addAll(kvs);
{code}
so that it becomes
{code}
@Override
public void preGetOp(final ObserverContext e, 
final Get get, final List results)
throws IOException {

  if (Bytes.equals(get.getRow(),ADMIN)) {
Cell c = CellUtil.createCell(get.getRow(), COLUMN_FAMILY, COLUMN,
System.currentTimeMillis(), (byte)4, VALUE);
results.add(c);
e.bypass();
  }
}
{code}

> Correct errors in example program of coprocessor in Ref Guide
> -
>
> Key: HBASE-16183
> URL: https://issues.apache.org/jira/browse/HBASE-16183
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-16183-master-v1.patch, HBASE-16183.patch
>
>
> There are some errors in the example programs for coprocessor in Ref Guide. 
> Such as using deprecated APIs, generic...





[jira] [Updated] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide

2016-07-11 Thread Xiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-16183:
-
Attachment: HBASE-16183-master-v1.patch

Patch v1 for master branch.

> Correct errors in example program of coprocessor in Ref Guide
> -
>
> Key: HBASE-16183
> URL: https://issues.apache.org/jira/browse/HBASE-16183
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-16183-master-v1.patch, HBASE-16183.patch
>
>
> There are some errors in the example programs for coprocessor in Ref Guide. 
> Such as using deprecated APIs, generic...





[jira] [Commented] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide

2016-07-11 Thread Xiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372253#comment-15372253
 ] 

Xiang Li commented on HBASE-16183:
--

Patch v1 for the master branch fixes the following errors:

1. In Section 89.3.3
Change
{code}
String path = "hdfs://:/user//coprocessor.jar";
{code}
into
{code}
Path path = new 
Path("hdfs://:/user//coprocessor.jar");
{code}
Reason:
The second parameter of HTableDescriptor.addCoprocessor() is 
org.apache.hadoop.fs.Path, not String.
See 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/HTableDescriptor.html

2. In Section 89.3.3
Change
{code}
HBaseAdmin admin = new HBaseAdmin(conf);
{code}
into
{code}
Connection connection = ConnectionFactory.createConnection(conf);
Admin admin = connection.getAdmin();
{code}
Reason:
HBASE-12083 deprecates "new HBaseAdmin()"; the Admin instance is supposed to 
be obtained from Connection.getAdmin().
Also see 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html

3. In section 90.1
Change
{code}
public void preGetOp(final ObserverContext e, final Get get, final List results)
{code}
into
{code}
public void preGetOp(final ObserverContext e, 
final Get get, final List results)
{code}
Change
{code}
public RegionScanner preScannerOpen(final ObserverContext e, final Scan scan,
{code}
into
{code}
public RegionScanner preScannerOpen(final 
ObserverContext e, final Scan scan,
{code}
Change
{code}
public boolean postScannerNext(final ObserverContext e, final InternalScanner 
s, final List results, final int limit, final boolean hasMore) throws 
IOException {
{code}
into
{code}
public boolean postScannerNext(final 
ObserverContext e, final InternalScanner s, final 
List results, final int limit, final boolean hasMore) throws 
IOException {
{code}
Change
{code}
Iterator iterator = results.iterator();
{code}
into
{code}
Iterator iterator = results.iterator();
{code}
Reason:
Generic

4. In section 90.1
Remove the following code
{code}
List kvs = new ArrayList(results.size());
for (Cell c : results) {
  kvs.add(KeyValueUtil.ensureKeyValue(c));
}
preGet(e, get, kvs);
results.clear();
results.addAll(kvs);
{code}
Reason:
They are of no use after HBASE-11871 is applied.

5. Section 90.1
change 
{code}
Cell c = CellUtil.createCell(get.getRow(),COLUMN _FAMILY, COLUMN,
System.currentTimeMillis(), (byte)4, VALUE);
{code}
to
{code}
Cell c = CellUtil.createCell(get.getRow(),COLUMN_FAMILY, COLUMN,
System.currentTimeMillis(), (byte)4, VALUE);
{code}
Reason:
It is a typo; there is a whitespace between "COLUMN" and "_FAMILY".
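
Pulling items 1 and 2 above together, a minimal end-to-end sketch of loading 
the coprocessor with the non-deprecated client API might look like the 
following; the table name, jar path, and coprocessor class name are 
placeholders, and this assumes the 1.2-era Admin/HTableDescriptor API rather 
than quoting the Ref Guide text verbatim:
{code}
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Coprocessor;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class LoadObserverExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName tableName = TableName.valueOf("users");             // placeholder table
    Path jarPath = new Path("hdfs:///user/hbase/coprocessor.jar"); // placeholder jar

    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      HTableDescriptor htd = admin.getTableDescriptor(tableName);
      // class name, Path to the jar, priority, per-coprocessor config
      htd.addCoprocessor("coprocessor.RegionObserverExample", jarPath,
          Coprocessor.PRIORITY_USER, Collections.<String, String>emptyMap());
      admin.disableTable(tableName);
      admin.modifyTable(tableName, htd);
      admin.enableTable(tableName);
    }
  }
}
{code}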

> Correct errors in example program of coprocessor in Ref Guide
> -
>
> Key: HBASE-16183
> URL: https://issues.apache.org/jira/browse/HBASE-16183
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-16183.patch
>
>
> There are some errors in the example programs for coprocessor in Ref Guide. 
> Such as using deprecated APIs, generic...





[jira] [Commented] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372248#comment-15372248
 ] 

Hudson commented on HBASE-16081:


SUCCESS: Integrated in HBase-1.4 #281 (See 
[https://builds.apache.org/job/HBase-1.4/281/])
HBASE-16081 Wait for Replication Tasks to complete before killing the (antonov: 
rev 7fa311a9408ab8d1028d1a788aa88f65da447628)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/ReplicationEndpoint.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java


> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081-v5.patch, HBASE-16081-v6.patch, 
> HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But under a certain race condition the 
> underlying executor gets shut down and CompletionService#take blocks 
> forever, which means the remove_peer call never finishes gracefully.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are unable 
> to roll WALs in such a situation and end up with gigantic WALs.





[jira] [Commented] (HBASE-16110) AsyncFS WAL doesn't work with Hadoop 2.8+

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372217#comment-15372217
 ] 

Hadoop QA commented on HBASE-16110:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
30s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 21s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 42s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 142m 37s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.replication.TestMasterReplication |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817327/HBASE-16110-v1.patch |
| JIRA Issue | HBASE-16110 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / ccf293d |
| Default Java | 1.7.0_80 |
| Multi-JDK vers

[jira] [Commented] (HBASE-16212) Many Connections are created by wrong seeking pos on InputStream

2016-07-11 Thread Zhihua Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372219#comment-15372219
 ] 

Zhihua Deng commented on HBASE-16212:
-

Yes, these are the logs I added to DFSInputStream#seek(long targetPos):
if (pos > targetPos) {
  DFSClient.LOG.info(dfsClient.getClientName() + " seek " + 
      getCurrentDatanode() + " for " + getCurrentBlock() +
      ". pos: " + pos + ", targetPos: " + targetPos);
}

From the log, it implies that the client (regionserver) reads 33 bytes behind 
the input stream's current position.

> Many Connections are created by wrong seeking pos on InputStream
> 
>
> Key: HBASE-16212
> URL: https://issues.apache.org/jira/browse/HBASE-16212
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Zhihua Deng
>
> As described in https://issues.apache.org/jira/browse/HDFS-8659, the datanode 
> is suffering from logging the same message repeatedly. After adding a log 
> statement to DFSInputStream, it outputs as follows:
> 2016-07-10 21:31:42,147 INFO  
> [B.defaultRpcServer.handler=22,queue=1,port=16020] hdfs.DFSClient: 
> DFSClient_NONMAPREDUCE_1984924661_1 seek 
> DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
>  for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
> 111506876, targetPos: 111506843
>  ...
> As the pos of this input stream is larger than targetPos (the pos we are 
> trying to seek to), a new connection to the datanode will be created and the 
> older one will be closed as a consequence. When such wrong seek ops are 
> frequent, the datanode's block scanner info messages spam the logs, and many 
> connections to the same datanode are created.





[jira] [Updated] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide

2016-07-11 Thread Xiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-16183:
-
Description: There are some errors in the example programs for coprocessor 
in Ref Guide. Such as using deprecated APIs, generic...  (was: There are some 
errors in the example programs for )

> Correct errors in example program of coprocessor in Ref Guide
> -
>
> Key: HBASE-16183
> URL: https://issues.apache.org/jira/browse/HBASE-16183
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-16183.patch
>
>
> There are some errors in the example programs for coprocessor in Ref Guide. 
> Such as using deprecated APIs, generic...





[jira] [Updated] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide

2016-07-11 Thread Xiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-16183:
-
Description: There are some errors in the example programs for 

> Correct errors in example program of coprocessor in Ref Guide
> -
>
> Key: HBASE-16183
> URL: https://issues.apache.org/jira/browse/HBASE-16183
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-16183.patch
>
>
> There are some errors in the example programs for 





[jira] [Commented] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide

2016-07-11 Thread Xiang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372214#comment-15372214
 ] 

Xiang Li commented on HBASE-16183:
--

[~jinghe], [~carp84], thanks for taking care of this! I switched to a new 
computer, so the wrong user was used. Sorry...

> Correct errors in example program of coprocessor in Ref Guide
> -
>
> Key: HBASE-16183
> URL: https://issues.apache.org/jira/browse/HBASE-16183
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-16183.patch
>
>
> 1. In Section 89.3.3
> change
> {code}
> String path = "hdfs://:/user//coprocessor.jar";
> {code}
> into
> {code}
> Path path = new 
> Path("hdfs://bdavm1506.svl.ibm.com:8020/user/hbase/coprocessor.jar");
> {code}
> Reason:
>   The second parameter of HTableDescriptor.addCoprocessor() is 
> org.apache.hadoop.fs.Path, not String.
>   See 
> http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/HTableDescriptor.html
> 2. In Section 89.3.3
> change
> {code}
> HBaseAdmin admin = new HBaseAdmin(conf);
> {code}
> into
> {code}
> Connection connection = ConnectionFactory.createConnection(conf);
> Admin admin = connection.getAdmin();
> {code}
> Reason:
>   HBASE-12083 makes new HBaseAdmin() deprecated and the instance of Admin is 
> supposed to get from Connection.getAdmin()
>   Also see 
> http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html
> 3. In section 90.1
> change
> {code}
> public void preGetOp(final ObserverContext e, final Get get, final List 
> results)
> {code}
> into
> {code}
> public void preGetOp(final ObserverContext e, 
> final Get get, final List results)
> {code}
> change
> {code}
> List kvs = new ArrayList(results.size());
> {code}
> into
> {code}
> List kvs = new ArrayList(results.size());
> {code}
> change
> {code}
> public RegionScanner preScannerOpen(final ObserverContext e, final Scan scan,
> {code}
> into
> {code}
> preScannerOpen(final ObserverContext e, final 
> Scan scan,
> {code}
> change
> {code}
> public boolean postScannerNext(final ObserverContext e, final InternalScanner 
> s, final List results, final int limit, final boolean hasMore) throws 
> IOException {
> {code}
> into
> {code}
> public boolean postScannerNext(final 
> ObserverContext e, final InternalScanner s, 
> final List results, final int limit, final boolean hasMore) throws 
> IOException {
> {code}
> change
> {code}
> Iterator iterator = results.iterator();
> {code}
> into
> {code}
> Iterator iterator = results.iterator();
> {code}
> Reason:
>   Generic
> 4. In section 90.1
> change
> {code}
> preGet(e, get, kvs);
> {code}
> into
> {code}
> super.preGetOp(e, get, kvs);
> {code}
> Reason:
>   There is not a function called preGet() provided by BaseRegionObserver or 
> its super class/interface. I believe we need to call preGetOp() of the super 
> class of RegionObserverExample here.
> 5. In section 90.1
> change
> {code}
> kvs.add(KeyValueUtil.ensureKeyValue(c));
> {code}
> into
> {code}
> kvs.add(c);
> {code}
> Reason:
>   KeyValueUtil.ensureKeyValue() is deprecated.
>   See 
> http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/KeyValueUtil.html
>   and https://issues.apache.org/jira/browse/HBASE-12079





[jira] [Updated] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide

2016-07-11 Thread Xiang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HBASE-16183:
-
Description: (was: 1. In Section 89.3.3
change
{code}
String path = "hdfs://:/user//coprocessor.jar";
{code}
into
{code}
Path path = new 
Path("hdfs://bdavm1506.svl.ibm.com:8020/user/hbase/coprocessor.jar");
{code}
Reason:
  The second parameter of HTableDescriptor.addCoprocessor() is 
org.apache.hadoop.fs.Path, not String.
  See 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/HTableDescriptor.html

2. In Section 89.3.3
change
{code}
HBaseAdmin admin = new HBaseAdmin(conf);
{code}
into
{code}
Connection connection = ConnectionFactory.createConnection(conf);
Admin admin = connection.getAdmin();
{code}
Reason:
  HBASE-12083 makes new HBaseAdmin() deprecated and the instance of Admin is 
supposed to get from Connection.getAdmin()
  Also see 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html

3. In section 90.1
change
{code}
public void preGetOp(final ObserverContext e, final Get get, final List results)
{code}
into
{code}
public void preGetOp(final ObserverContext e, 
final Get get, final List results)
{code}
change
{code}
List kvs = new ArrayList(results.size());
{code}
into
{code}
List kvs = new ArrayList(results.size());
{code}
change
{code}
public RegionScanner preScannerOpen(final ObserverContext e, final Scan scan,
{code}
into
{code}
preScannerOpen(final ObserverContext e, final 
Scan scan,
{code}
change
{code}
public boolean postScannerNext(final ObserverContext e, final InternalScanner 
s, final List results, final int limit, final boolean hasMore) throws 
IOException {
{code}
into
{code}
public boolean postScannerNext(final 
ObserverContext e, final InternalScanner s, final 
List results, final int limit, final boolean hasMore) throws 
IOException {
{code}
change
{code}
Iterator iterator = results.iterator();
{code}
into
{code}
Iterator iterator = results.iterator();
{code}
Reason:
  Generic

4. In section 90.1
change
{code}
preGet(e, get, kvs);
{code}
into
{code}
super.preGetOp(e, get, kvs);
{code}
Reason:
  There is not a function called preGet() provided by BaseRegionObserver or its 
super class/interface. I believe we need to call preGetOp() of the super class 
of RegionObserverExample here.

5. In section 90.1
change
{code}
kvs.add(KeyValueUtil.ensureKeyValue(c));
{code}
into
{code}
kvs.add(c);
{code}
Reason:
  KeyValueUtil.ensureKeyValue() is deprecated.
  See 
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/KeyValueUtil.html
  and https://issues.apache.org/jira/browse/HBASE-12079)

> Correct errors in example program of coprocessor in Ref Guide
> -
>
> Key: HBASE-16183
> URL: https://issues.apache.org/jira/browse/HBASE-16183
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-16183.patch
>
>






[jira] [Commented] (HBASE-16184) Shell test fails due to rLoadSink being nil

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372198#comment-15372198
 ] 

Hadoop QA commented on HBASE-16184:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 0s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 0s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
8s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
32m 49s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 47s 
{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
10s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 47s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817332/HBASE-16184-v1.patch |
| JIRA Issue | HBASE-16184 |
| Optional Tests |  asflicense  javac  javadoc  unit  rubocop  ruby_lint  |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / ccf293d |
| Default Java | 1.7.0_80 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/jenkins-slave/tools/hudson.model.JDK/JDK_1.7_latest_:1.7.0_80 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2602/testReport/ |
| modules | C: hbase-shell U: hbase-shell |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/2602/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> Shell test fails due to rLoadSink being nil
> ---
>
> Key: HBASE-16184
> URL: https://issues.apache.org/jira/browse/HBASE-16184
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Phil Yang
> Attachments: HBASE-16184-v1.patch
>
>
> Running TestShell ended up with the following on master branch:
> {code}
>   1) Error:
> test_Get_replication_sink_metrics_information(Hbase::AdminAlterTableTest):
> NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'
> 
> file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
> /home/hbase/trunk/h

[jira] [Commented] (HBASE-14743) Add metrics around HeapMemoryManager

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372197#comment-15372197
 ] 

Hadoop QA commented on HBASE-14743:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
37s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 0s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s 
{color} | {color:green} hbase-hadoop-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 20s 
{color} | {color:green} hbase-hadoop2-compat in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 39s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
42s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 141m 25s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.snapshot.TestFlushSnapshotFromClient |
|   | hadoop.hbase.replication.TestMasterReplication |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817325/HBASE-14743.0

[jira] [Comment Edited] (HBASE-16213) A new HFileBlock structure for fast random get

2016-07-11 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372193#comment-15372193
 ] 

binlijin edited comment on HBASE-16213 at 7/12/16 5:25 AM:
---

ROW_INDEX_V1 is the first version.
ROW_INDEX_V2 stores the column family only once.
And a third version, which stores every rowKey only once, has not been 
implemented yet.


was (Author: aoxiang):
ROW_INDEX_V1 is the first version.
ROW_INDEX_V2 store column family only once.

> A new HFileBlock structure for fast random get
> --
>
> Key: HBASE-16213
> URL: https://issues.apache.org/jira/browse/HBASE-16213
> Project: HBase
>  Issue Type: New Feature
>Reporter: binlijin
> Attachments: HBASE-16213.patch
>
>
> HFileBlock stores cells sequentially; currently, to get a row from the block, 
> it scans from the first cell until it reaches the row's cell.
> The new structure stores every row's start offset along with the data, so it 
> can find the exact row with a binary search.





[jira] [Commented] (HBASE-16213) A new HFileBlock structure for fast random get

2016-07-11 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372193#comment-15372193
 ] 

binlijin commented on HBASE-16213:
--

ROW_INDEX_V1 is the first version.
ROW_INDEX_V2 stores the column family only once.

> A new HFileBlock structure for fast random get
> --
>
> Key: HBASE-16213
> URL: https://issues.apache.org/jira/browse/HBASE-16213
> Project: HBase
>  Issue Type: New Feature
>Reporter: binlijin
> Attachments: HBASE-16213.patch
>
>
> HFileBlock stores cells sequentially; currently, to get a row from the block, 
> it scans from the first cell until it reaches the row's cell.
> The new structure stores every row's start offset along with the data, so it 
> can find the exact row with a binary search.





[jira] [Updated] (HBASE-16213) A new HFileBlock structure for fast random get

2016-07-11 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-16213:
-
Attachment: HBASE-16213.patch

> A new HFileBlock structure for fast random get
> --
>
> Key: HBASE-16213
> URL: https://issues.apache.org/jira/browse/HBASE-16213
> Project: HBase
>  Issue Type: New Feature
>Reporter: binlijin
> Attachments: HBASE-16213.patch
>
>
> HFileBlock stores cells sequentially; currently, to get a row from the block, 
> it scans from the first cell until it reaches the row's cell.
> The new structure stores every row's start offset along with the data, so it 
> can find the exact row with a binary search.





[jira] [Created] (HBASE-16213) A new HFileBlock structure for fast random get

2016-07-11 Thread binlijin (JIRA)
binlijin created HBASE-16213:


 Summary: A new HFileBlock structure for fast random get
 Key: HBASE-16213
 URL: https://issues.apache.org/jira/browse/HBASE-16213
 Project: HBase
  Issue Type: New Feature
Reporter: binlijin


HFileBlock stores cells sequentially; currently, to get a row from the block, it 
scans from the first cell until it reaches the row's cell.
The new structure stores every row's start offset along with the data, so it can 
find the exact row with a binary search.
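
As a toy illustration of the proposal (not the attached patch; all names here 
are invented), the row index amounts to a sorted array of row keys plus their 
start offsets, so locating a row becomes a binary search instead of a linear 
scan over cells:
{code}
import java.util.Arrays;

public class RowIndexedBlockModel {
  private final String[] rowKeys;   // first row key of each indexed entry, sorted
  private final int[] rowOffsets;   // byte offset where that row's cells start

  RowIndexedBlockModel(String[] rowKeys, int[] rowOffsets) {
    this.rowKeys = rowKeys;
    this.rowOffsets = rowOffsets;
  }

  /** Returns the data offset of the row, or -1 if the row is not in this block. */
  int seekToRow(String row) {
    int idx = Arrays.binarySearch(rowKeys, row);
    return idx >= 0 ? rowOffsets[idx] : -1;
  }

  public static void main(String[] args) {
    RowIndexedBlockModel block = new RowIndexedBlockModel(
        new String[] { "row-001", "row-005", "row-009" },
        new int[]    { 0,         128,       320 });
    System.out.println(block.seekToRow("row-005")); // 128, found without a scan
  }
}
{code}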






[jira] [Comment Edited] (HBASE-15643) Need metrics of cache hit ratio, etc for one table

2016-07-11 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372168#comment-15372168
 ] 

Anoop Sam John edited comment on HBASE-15643 at 7/12/16 4:43 AM:
-

LruBlockCache 
{code}
stats.incrementBlockCountForTable(cacheKey.getTableName());
stats.incrementBlockSizeForTable(cacheKey.getTableName(), cb.heapSize());
{code}
The L1/L2 cache lookup and read is in the hot path, and adding code like the 
above there, which involves a Map lookup, will be a perf hit. That is what the 
comment means.  Agree with it.  We should be careful about doing this.


was (Author: anoop.hbase):
LruBlockCache 
{code}
 stats.incrementBlockCountForTable(cacheKey.getTableName());
366 stats.incrementBlockSizeForTable(cacheKey.getTableName(), 
cb.heapSize());
{code}
L1/L2 cache look up and read is in hot path and there adding code like above 
which involve a Map lookup operation will be perf hit. That is what the comment 
means.

> Need metrics of cache hit ratio, etc for one table
> --
>
> Key: HBASE-15643
> URL: https://issues.apache.org/jira/browse/HBASE-15643
> Project: HBase
>  Issue Type: Improvement
>Reporter: Heng Chen
>Assignee: Alicia Ying Shu
> Attachments: HBASE-15643.patch
>
>
> There are many tables on our cluster, and only some of them need to be read 
> online.  
> We could improve read performance via the cache, but we need some metrics 
> for it at the table level. There are a few we can collect: BlockCacheCount, 
> BlockCacheSize, BlockCacheHitCount, BlockCacheMissCount, BlockCacheHitPercent





[jira] [Updated] (HBASE-16184) Shell test fails due to rLoadSink being nil

2016-07-11 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-16184:
--
Attachment: HBASE-16184-v1.patch

Add a skip logic in admin.rb

> Shell test fails due to rLoadSink being nil
> ---
>
> Key: HBASE-16184
> URL: https://issues.apache.org/jira/browse/HBASE-16184
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Phil Yang
> Attachments: HBASE-16184-v1.patch
>
>
> Running TestShell ended up with the following on master branch:
> {code}
>   1) Error:
> test_Get_replication_sink_metrics_information(Hbase::AdminAlterTableTest):
> NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'
> 
> file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
> /home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
> `replication_status'
> ./src/test/ruby/hbase/admin_test.rb:427:in 
> `test_Get_replication_sink_metrics_information'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
>   2) Error:
> test_Get_replication_source_metrics_information(Hbase::AdminAlterTableTest):
> NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'
> 
> file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
> /home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
> `replication_status'
> ./src/test/ruby/hbase/admin_test.rb:423:in 
> `test_Get_replication_source_metrics_information'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
>   3) Error:
> test_Get_replication_status(Hbase::AdminAlterTableTest):
> NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'
> 
> file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
> /home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
> `replication_status'
> ./src/test/ruby/hbase/admin_test.rb:419:in `test_Get_replication_status'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
> {code}





[jira] [Updated] (HBASE-16184) Shell test fails due to rLoadSink being nil

2016-07-11 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-16184:
--
Status: Patch Available  (was: Reopened)

> Shell test fails due to rLoadSink being nil
> ---
>
> Key: HBASE-16184
> URL: https://issues.apache.org/jira/browse/HBASE-16184
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Phil Yang
> Attachments: HBASE-16184-v1.patch
>
>
> Running TestShell ended up with the following on master branch:
> {code}
>   1) Error:
> test_Get_replication_sink_metrics_information(Hbase::AdminAlterTableTest):
> NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'
> 
> file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
> /home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
> `replication_status'
> ./src/test/ruby/hbase/admin_test.rb:427:in 
> `test_Get_replication_sink_metrics_information'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
>   2) Error:
> test_Get_replication_source_metrics_information(Hbase::AdminAlterTableTest):
> NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'
> 
> file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
> /home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
> `replication_status'
> ./src/test/ruby/hbase/admin_test.rb:423:in 
> `test_Get_replication_source_metrics_information'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
>   3) Error:
> test_Get_replication_status(Hbase::AdminAlterTableTest):
> NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'
> 
> file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
> /home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
> `replication_status'
> ./src/test/ruby/hbase/admin_test.rb:419:in `test_Get_replication_status'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
> {code}





[jira] [Assigned] (HBASE-16184) Shell test fails due to rLoadSink being nil

2016-07-11 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang reassigned HBASE-16184:
-

Assignee: Phil Yang

> Shell test fails due to rLoadSink being nil
> ---
>
> Key: HBASE-16184
> URL: https://issues.apache.org/jira/browse/HBASE-16184
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Phil Yang
> Attachments: HBASE-16184-v1.patch
>
>
> Running TestShell ended up with the following on master branch:
> {code}
>   1) Error:
> test_Get_replication_sink_metrics_information(Hbase::AdminAlterTableTest):
> NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'
> 
> file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
> /home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
> `replication_status'
> ./src/test/ruby/hbase/admin_test.rb:427:in 
> `test_Get_replication_sink_metrics_information'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
>   2) Error:
> test_Get_replication_source_metrics_information(Hbase::AdminAlterTableTest):
> NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'
> 
> file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
> /home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
> `replication_status'
> ./src/test/ruby/hbase/admin_test.rb:423:in 
> `test_Get_replication_source_metrics_information'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
>   3) Error:
> test_Get_replication_status(Hbase::AdminAlterTableTest):
> NoMethodError: undefined method `getAgeOfLastAppliedOp' for nil:NilClass
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:725:in `status'
> 
> file:/home/hbase/.m2/repository/org/jruby/jruby-complete/1.6.8/jruby-complete-1.6.8.jar!/builtin/java/java.util.rb:7:in
>  `each'
> /home/hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb:720:in `status'
> /home/hbase/trunk/hbase-shell/src/test/ruby/test_helper.rb:130:in 
> `replication_status'
> ./src/test/ruby/hbase/admin_test.rb:419:in `test_Get_replication_status'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14933) check_compatibility.sh does not work with jdk8

2016-07-11 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14933:

Fix Version/s: (was: 2.0.0)

> check_compatibility.sh does not work with jdk8
> --
>
> Key: HBASE-14933
> URL: https://issues.apache.org/jira/browse/HBASE-14933
> Project: HBase
>  Issue Type: Task
>  Components: scripts
>Reporter: Nick Dimiduk
>Assignee: Dima Spivak
>Priority: Minor
>
> Specifically, Oracle jdk1.8.0_65 on OSX and OpenJDk 1.8.0_45-internal-b14 on 
> ubuntu.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15643) Need metrics of cache hit ratio, etc for one table

2016-07-11 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372168#comment-15372168
 ] 

Anoop Sam John commented on HBASE-15643:


LruBlockCache 
{code}
 stats.incrementBlockCountForTable(cacheKey.getTableName());
 stats.incrementBlockSizeForTable(cacheKey.getTableName(), cb.heapSize());
{code}
L1/L2 cache lookup and read are on the hot path, so adding code like the above, 
which involves a Map lookup per operation, will be a performance hit. That is 
what the comment means.
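
To make the cost concrete, here is a minimal sketch (hypothetical class and 
method names, not the actual LruBlockCache code) of what a per-table counter 
adds to the hit path: every cached read pays a ConcurrentHashMap lookup on top 
of the plain counter increment it does today.
{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of per-table cache stats: each hit now costs an extra
// map lookup (and possibly an insert) in addition to the existing increment.
public class PerTableCacheStats {
  private final LongAdder totalHits = new LongAdder();
  private final ConcurrentHashMap<String, LongAdder> hitsByTable =
      new ConcurrentHashMap<>();

  public void hit(String tableName) {
    totalHits.increment();                        // current cost: one increment
    hitsByTable.computeIfAbsent(tableName, t -> new LongAdder())
               .increment();                      // added cost: map lookup per hit
  }

  public long hits(String tableName) {
    LongAdder adder = hitsByTable.get(tableName);
    return adder == null ? 0L : adder.sum();
  }
}
{code}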

> Need metrics of cache hit ratio, etc for one table
> --
>
> Key: HBASE-15643
> URL: https://issues.apache.org/jira/browse/HBASE-15643
> Project: HBase
>  Issue Type: Improvement
>Reporter: Heng Chen
>Assignee: Alicia Ying Shu
> Attachments: HBASE-15643.patch
>
>
> There are many tables on our cluster,  only some of them need to be read 
> online.  
> We could improve the performance of read by cache,  but we need some metrics 
> for it at table level. There are a few we can collect: BlockCacheCount, 
> BlockCacheSize, BlockCacheHitCount, BlockCacheMissCount, BlockCacheHitPercent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16194) Should count in MSLAB chunk allocation into heap size change when adding duplicate cells

2016-07-11 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-16194:
--
Attachment: HBASE-16194.branch-0.98.v2.patch

0.98 patch v2 resolves the javadoc issue reported by HadoopQA

> Should count in MSLAB chunk allocation into heap size change when adding 
> duplicate cells
> 
>
> Key: HBASE-16194
> URL: https://issues.apache.org/jira/browse/HBASE-16194
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 1.3.0, 1.1.6, 1.2.3
>
> Attachments: HBASE-16194.branch-0.98.patch, 
> HBASE-16194.branch-0.98.v2.patch, HBASE-16194.branch-1.patch, 
> HBASE-16194.branch-1.patch, HBASE-16194.branch-1.v2.patch, HBASE-16194.patch, 
> HBASE-16194_v2.patch
>
>
> See more details about problem description and analysis in HBASE-16193



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16207) can't restore snapshot without "Admin" permission

2016-07-11 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372159#comment-15372159
 ] 

Jerry He commented on HBASE-16207:
--

+1

> can't restore snapshot without "Admin" permission
> -
>
> Key: HBASE-16207
> URL: https://issues.apache.org/jira/browse/HBASE-16207
> Project: HBase
>  Issue Type: Bug
>  Components: master, snapshots
>Affects Versions: 2.0.0, 1.3.0, 1.2.1, 1.1.5
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0, 1.2.2, 1.1.6, 1.3.1
>
> Attachments: HBASE-16207-v0.patch, HBASE-16207-v0_branch-1.patch
>
>
> MasterRpcServices.restoreSnapshot() tries to verify if the NS exists before 
> starting the restore, but instead of calling ensureNamespaceExists() it calls 
> master.getNamespace() which requires ADMIN permission to get the NS 
> descriptor. 
> {code}
> public RestoreSnapshotResponse restoreSnapshot(RpcController controller,
> ...
>   // Ensure namespace exists. Will throw exception if non-known NS.
>   master.getNamespace(dstTable.getNamespaceAsString());
> {code}
> Unfortunately I'm not aware of any unit test that covers this kind of 
> situation. We cover single ACLs from TestAccessController, but we don't 
> exercise RPC calls and verify whether there is more than one ACL check, 
> as in this case.
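
As a rough illustration of the direction described above, here is a sketch of 
an ensureNamespaceExists-style check that only verifies existence and never 
returns the descriptor; the class name and the Set parameter are hypothetical 
stand-ins, not the actual MasterRpcServices code.
{code}
import java.io.IOException;
import java.util.Set;

// Hypothetical sketch only: verify that the destination namespace exists
// without fetching its descriptor through the ADMIN-guarded getNamespace() path.
final class NamespaceExistenceCheck {
  static void ensureNamespaceExists(String namespace, Set<String> knownNamespaces)
      throws IOException {
    if (!knownNamespaces.contains(namespace)) {
      throw new IOException("Namespace '" + namespace + "' does not exist");
    }
    // Existence is all that is needed here; no descriptor is returned,
    // so no ACL check on the namespace descriptor is triggered.
  }
}
{code}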



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-16081:

Comment: was deleted

(was: Please set fix version(s))

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081-v5.patch, HBASE-16081-v6.patch, 
> HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.
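
For illustration, a minimal sketch of the kind of change the fix implies, 
assuming a bounded poll is acceptable: replace the blocking 
CompletionService#take with a timed poll that can notice executor shutdown. 
This is a sketch, not the committed patch.
{code}
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: drain outstanding replication futures with a timed poll
// so a shut-down executor cannot leave the caller blocked forever in take().
final class BoundedDrain {
  static <T> void drain(CompletionService<T> cs, ExecutorService pool, int pending)
      throws InterruptedException, ExecutionException {
    while (pending > 0) {
      Future<T> f = cs.poll(1, TimeUnit.SECONDS);  // bounded wait instead of take()
      if (f != null) {
        f.get();                                   // surface task failures
        pending--;
      } else if (pool.isShutdown()) {
        break;                                     // executor is going away; stop waiting
      }
    }
  }
}
{code}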



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16212) Many Connections are created by wrong seeking pos on InputStream

2016-07-11 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372157#comment-15372157
 ] 

Vladimir Rodionov commented on HBASE-16212:
---

This is HDFS client logging. 

> Many Connections are created by wrong seeking pos on InputStream
> 
>
> Key: HBASE-16212
> URL: https://issues.apache.org/jira/browse/HBASE-16212
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Zhihua Deng
>
> As described in https://issues.apache.org/jira/browse/HDFS-8659, the datanode 
> is suffering from logging the same repeatedly. Adding log to DFSInputStream, 
> it outputs as follows:
> 2016-07-10 21:31:42,147 INFO  
> [B.defaultRpcServer.handler=22,queue=1,port=16020] hdfs.DFSClient: 
> DFSClient_NONMAPREDUCE_1984924661_1 seek 
> DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
>  for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
> 111506876, targetPos: 111506843
>  ...
> As the pos of this input stream is larger than targetPos(the pos trying to 
> seek), A new connection to the datanode will be created, the older one will 
> be closed as a consequence. When the wrong seeking ops are large, the 
> datanode's block scanner info message is spamming logs, as well as many 
> connections to the same datanode will be created.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372158#comment-15372158
 ] 

Sean Busbey commented on HBASE-16081:
-

Please set fix version(s)

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081-v5.patch, HBASE-16081-v6.patch, 
> HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-16081:

Fix Version/s: 1.4.0
   1.3.0

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081-v5.patch, HBASE-16081-v6.patch, 
> HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-16081:

Fix Version/s: 2.0.0

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081-v5.patch, HBASE-16081-v6.patch, 
> HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16212) Many Connections are created by wrong seeking pos on InputStream

2016-07-11 Thread Zhihua Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372133#comment-15372133
 ] 

Zhihua Deng edited comment on HBASE-16212 at 7/12/16 3:44 AM:
--

Some related logs from region server:

2016-07-10 21:31:42,147 INFO  
[B.defaultRpcServer.handler=22,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
111506876, targetPos: 111506843
2016-07-10 21:31:42,715 INFO  
[B.defaultRpcServer.handler=25,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
113544644, targetPos: 113544611
2016-07-10 21:31:43,341 INFO  
[B.defaultRpcServer.handler=27,queue=0,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
115547269, targetPos: 115547236
2016-07-10 21:31:43,950 INFO  
[B.defaultRpcServer.handler=16,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
117532235, targetPos: 117532202
2016-07-10 21:31:43,988 INFO  [B.defaultRpcServer.handler=7,queue=1,port=16020] 
hdfs.DFSClient: DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
119567867, targetPos: 119567834
2016-07-10 21:31:45,228 INFO  
[B.defaultRpcServer.handler=19,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
121787264, targetPos: 121787231
2016-07-10 21:31:45,254 INFO  [B.defaultRpcServer.handler=0,queue=0,port=16020] 
hdfs.DFSClient: DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
123910032, targetPos: 12390
2016-07-10 21:31:46,402 INFO  
[B.defaultRpcServer.handler=26,queue=2,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
125905163, targetPos: 125905130
2016-07-10 21:31:47,027 INFO  
[B.defaultRpcServer.handler=24,queue=0,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
128028947, targetPos: 128028914
2016-07-10 21:31:47,649 INFO  
[B.defaultRpcServer.handler=10,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
130057763, targetPos: 130057730



was (Author: dengzh):
Some related logs from region server:

2016-07-10 21:31:42,147 INFO  
[B.defaultRpcServer.handler=22,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
111506876, targetPos: 111506843
2016-07-10 21:31:42,715 INFO  
[B.defaultRpcServer.handler=25,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
113544644, targetPos: 113544611
2016-07-10 21:31:43,341 INFO  
[B.defaultRpcServer.handler=27,queue=0,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
115547269, targetPos: 115547236
2016-07-10 21:31:43,950 INFO  
[B.defaultRpcServer.handler=16,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 

[jira] [Comment Edited] (HBASE-16212) Many Connections are created by wrong seeking pos on InputStream

2016-07-11 Thread Zhihua Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372133#comment-15372133
 ] 

Zhihua Deng edited comment on HBASE-16212 at 7/12/16 3:44 AM:
--

Some related logs from region server:
2016-07-10 21:31:42,147 INFO  
[B.defaultRpcServer.handler=22,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
111506876, targetPos: 111506843
2016-07-10 21:31:42,715 INFO  
[B.defaultRpcServer.handler=25,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
113544644, targetPos: 113544611
2016-07-10 21:31:43,341 INFO  
[B.defaultRpcServer.handler=27,queue=0,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
115547269, targetPos: 115547236
2016-07-10 21:31:43,950 INFO  
[B.defaultRpcServer.handler=16,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
117532235, targetPos: 117532202
2016-07-10 21:31:43,988 INFO  [B.defaultRpcServer.handler=7,queue=1,port=16020] 
hdfs.DFSClient: DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
119567867, targetPos: 119567834
2016-07-10 21:31:45,228 INFO  
[B.defaultRpcServer.handler=19,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
121787264, targetPos: 121787231
2016-07-10 21:31:45,254 INFO  [B.defaultRpcServer.handler=0,queue=0,port=16020] 
hdfs.DFSClient: DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
123910032, targetPos: 12390
2016-07-10 21:31:46,402 INFO  
[B.defaultRpcServer.handler=26,queue=2,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
125905163, targetPos: 125905130
2016-07-10 21:31:47,027 INFO  
[B.defaultRpcServer.handler=24,queue=0,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
128028947, targetPos: 128028914
2016-07-10 21:31:47,649 INFO  
[B.defaultRpcServer.handler=10,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
130057763, targetPos: 130057730



was (Author: dengzh):
Some related logs from region server:

2016-07-10 21:31:42,147 INFO  
[B.defaultRpcServer.handler=22,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
111506876, targetPos: 111506843
2016-07-10 21:31:42,715 INFO  
[B.defaultRpcServer.handler=25,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
113544644, targetPos: 113544611
2016-07-10 21:31:43,341 INFO  
[B.defaultRpcServer.handler=27,queue=0,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
115547269, targetPos: 115547236
2016-07-10 21:31:43,950 INFO  
[B.defaultRpcServer.handler=16,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 f

[jira] [Commented] (HBASE-16212) Many Connections are created by wrong seeking pos on InputStream

2016-07-11 Thread Zhihua Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372133#comment-15372133
 ] 

Zhihua Deng commented on HBASE-16212:
-

Some related logs from region server:

2016-07-10 21:31:42,147 INFO  
[B.defaultRpcServer.handler=22,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
111506876, targetPos: 111506843
2016-07-10 21:31:42,715 INFO  
[B.defaultRpcServer.handler=25,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
113544644, targetPos: 113544611
2016-07-10 21:31:43,341 INFO  
[B.defaultRpcServer.handler=27,queue=0,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
115547269, targetPos: 115547236
2016-07-10 21:31:43,950 INFO  
[B.defaultRpcServer.handler=16,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
117532235, targetPos: 117532202
2016-07-10 21:31:43,988 INFO  [B.defaultRpcServer.handler=7,queue=1,port=16020] 
hdfs.DFSClient: DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
119567867, targetPos: 119567834
2016-07-10 21:31:45,228 INFO  
[B.defaultRpcServer.handler=19,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
121787264, targetPos: 121787231
2016-07-10 21:31:45,254 INFO  [B.defaultRpcServer.handler=0,queue=0,port=16020] 
hdfs.DFSClient: DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
123910032, targetPos: 12390
2016-07-10 21:31:46,402 INFO  
[B.defaultRpcServer.handler=26,queue=2,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
125905163, targetPos: 125905130
2016-07-10 21:31:47,027 INFO  
[B.defaultRpcServer.handler=24,queue=0,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
128028947, targetPos: 128028914
2016-07-10 21:31:47,649 INFO  
[B.defaultRpcServer.handler=10,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
130057763, targetPos: 130057730


> Many Connections are created by wrong seeking pos on InputStream
> 
>
> Key: HBASE-16212
> URL: https://issues.apache.org/jira/browse/HBASE-16212
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Zhihua Deng
>
> As described in https://issues.apache.org/jira/browse/HDFS-8659, the datanode 
> is suffering from logging the same repeatedly. Adding log to DFSInputStream, 
> it outputs as follows:
> 2016-07-10 21:31:42,147 INFO  
> [B.defaultRpcServer.handler=22,queue=1,port=16020] hdfs.DFSClient: 
> DFSClient_NONMAPREDUCE_1984924661_1 seek 
> DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
>  for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
> 111506876, targetPos: 111506843
>  ...
> As the pos of this input stream is larger than targetPos(the pos trying to 
> seek), A new connection to the datanode will be created, the older one will 
> be closed as a consequence. When the wrong seeking ops are large, the 
> datanode's block scanner info message is spamming logs, as well as many 
> connections to the same datanode will be created.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16212) Many Connections are created by wrong seeking pos on InputStream

2016-07-11 Thread Zhihua Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372130#comment-15372130
 ] 

Zhihua Deng commented on HBASE-16212:
-

When the datanode writes to the closed connection, an exception will occur:
2016-06-30 11:21:34,320 TRACE org.apache.hadoop.hdfs.server.datanode.DataNode: 
DatanodeRegistration(10.130.1.29:50010, 
datanodeUuid=f3d795cc-2b3b-43b9-90c3-e4157c031d2c, infoPort=50075, 
infoSecurePort=0, ipcPort=50020, 
storageInfo=lv=-56;cid=CID-a99b693d-6f26-48fe-ad37-9f8162f70b22;nsid=920937379;c=0):Ignoring
 exception while serving 
BP-360285305-10.130.1.11-1444619256876:blk_1105510536_31776579 to 
/10.130.1.21:39933
java.net.SocketException: Original Exception : java.io.IOException: Connection 
reset by peer
at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
at 
sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:427)
at 
sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:492)
at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:607)
at 
org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:223)
at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:579)
at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:759)
at 
org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:706)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:551)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Connection reset by peer
... 13 more
Analyzing the region server's log shows pos - targetPos = 33, which equals the 
length of the block header, as seen in HFileBlock.java.
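
A simplified model of the behavior described above (not the real DFSInputStream 
code): a forward seek can be satisfied on the open connection, while a backward 
seek, such as the 33-byte ones in this trace, forces a reconnect.
{code}
// Simplified model only, not the real DFSInputStream: a forward seek skips
// ahead on the existing connection, but a backward seek forces the reader to
// close the current block reader and open a new datanode connection.
final class SeekModel {
  private long pos;                 // position of the currently open block reader

  void seek(long targetPos) {
    if (targetPos >= pos) {
      pos = targetPos;              // skip forward, connection stays open
    } else {
      reconnectAt(targetPos);       // backward seek: close and reopen the connection
      pos = targetPos;
    }
  }

  private void reconnectAt(long targetPos) {
    // placeholder: close the old block reader and open a new one at targetPos
  }
}
{code}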

> Many Connections are created by wrong seeking pos on InputStream
> 
>
> Key: HBASE-16212
> URL: https://issues.apache.org/jira/browse/HBASE-16212
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Zhihua Deng
>
> As described in https://issues.apache.org/jira/browse/HDFS-8659, the datanode 
> is suffering from logging the same repeatedly. Adding log to DFSInputStream, 
> it outputs as follows:
> 2016-07-10 21:31:42,147 INFO  
> [B.defaultRpcServer.handler=22,queue=1,port=16020] hdfs.DFSClient: 
> DFSClient_NONMAPREDUCE_1984924661_1 seek 
> DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
>  for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
> 111506876, targetPos: 111506843
>  ...
> As the pos of this input stream is larger than targetPos(the pos trying to 
> seek), A new connection to the datanode will be created, the older one will 
> be closed as a consequence. When the wrong seeking ops are large, the 
> datanode's block scanner info message is spamming logs, as well as many 
> connections to the same datanode will be created.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16110) AsyncFS WAL doesn't work with Hadoop 2.8+

2016-07-11 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-16110:
--
Attachment: HBASE-16110-v1.patch

> AsyncFS WAL doesn't work with Hadoop 2.8+
> -
>
> Key: HBASE-16110
> URL: https://issues.apache.org/jira/browse/HBASE-16110
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-16110-v1.patch, HBASE-16110.patch
>
>
> The async wal implementation doesn't work with Hadoop 2.8+. Fails compilation 
> and will fail running.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16212) Many Connections are created by wrong seeking pos on InputStream

2016-07-11 Thread Zhihua Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihua Deng updated HBASE-16212:

Description: 
As described in https://issues.apache.org/jira/browse/HDFS-8659, the datanode 
suffers from logging the same message repeatedly. After adding logging to 
DFSInputStream, it outputs the following:

2016-07-10 21:31:42,147 INFO  
[B.defaultRpcServer.handler=22,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
111506876, targetPos: 111506843
 ...
Since the pos of this input stream is larger than targetPos (the position being 
sought), a new connection to the datanode is created and the older one is closed 
as a consequence. When there are many such wrong seeks, the datanode's block 
scanner info messages spam the logs, and many connections to the same datanode 
are created.



  was:
As described in https://issues.apache.org/jira/browse/HDFS-8659, the datanode 
is suffering from logging the same repeatedly. Adding log to DFSInputStream, it 
outputs as follows:

 2016-07-10 21:31:42,147 INFO  
[B.defaultRpcServer.handler=22,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
111506876, targetPos: 111506843
2016-07-10 21:31:42,715 INFO  
[B.defaultRpcServer.handler=25,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
113544644, targetPos: 113544611
2016-07-10 21:31:43,341 INFO  
[B.defaultRpcServer.handler=27,queue=0,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
115547269, targetPos: 115547236
2016-07-10 21:31:43,950 INFO  
[B.defaultRpcServer.handler=16,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
117532235, targetPos: 117532202
2016-07-10 21:31:43,988 INFO  [B.defaultRpcServer.handler=7,queue=1,port=16020] 
hdfs.DFSClient: DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
119567867, targetPos: 119567834
2016-07-10 21:31:45,228 INFO  
[B.defaultRpcServer.handler=19,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
121787264, targetPos: 121787231
2016-07-10 21:31:45,254 INFO  [B.defaultRpcServer.handler=0,queue=0,port=16020] 
hdfs.DFSClient: DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
123910032, targetPos: 12390
2016-07-10 21:31:46,402 INFO  
[B.defaultRpcServer.handler=26,queue=2,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
125905163, targetPos: 125905130
2016-07-10 21:31:47,027 INFO  
[B.defaultRpcServer.handler=24,queue=0,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
128028947, targetPos: 128028914
2016-07-10 21:31:47,649 INFO  
[B.defaultRpcServer.handler=10,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
130057763, targetPos: 130057730



> Many Connections are created by wrong seeking pos on InputStream
> 
>
> Key: HBASE-16212
> URL: https://issues.apache.org/jira/browse/HBASE-16212
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Zhihua Deng
>

[jira] [Commented] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372121#comment-15372121
 ] 

Hudson commented on HBASE-16081:


FAILURE: Integrated in HBase-Trunk_matrix #1212 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1212/])
HBASE-16081 Wait for Replication Tasks to complete before killing the (antonov: 
rev ccf293d7fbb2e73f454feb4d72d860ed46cc5115)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/ReplicationEndpoint.java


> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081-v5.patch, HBASE-16081-v6.patch, 
> HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16212) Many Connections are created by wrong seeking pos on InputStream

2016-07-11 Thread Zhihua Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihua Deng updated HBASE-16212:

Description: 
As described in https://issues.apache.org/jira/browse/HDFS-8659, the datanode 
is suffering from logging the same repeatedly. Adding log to DFSInputStream, it 
outputs as follows:

 2016-07-10 21:31:42,147 INFO  
[B.defaultRpcServer.handler=22,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
111506876, targetPos: 111506843
2016-07-10 21:31:42,715 INFO  
[B.defaultRpcServer.handler=25,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
113544644, targetPos: 113544611
2016-07-10 21:31:43,341 INFO  
[B.defaultRpcServer.handler=27,queue=0,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
115547269, targetPos: 115547236
2016-07-10 21:31:43,950 INFO  
[B.defaultRpcServer.handler=16,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
117532235, targetPos: 117532202
2016-07-10 21:31:43,988 INFO  [B.defaultRpcServer.handler=7,queue=1,port=16020] 
hdfs.DFSClient: DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
119567867, targetPos: 119567834
2016-07-10 21:31:45,228 INFO  
[B.defaultRpcServer.handler=19,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
121787264, targetPos: 121787231
2016-07-10 21:31:45,254 INFO  [B.defaultRpcServer.handler=0,queue=0,port=16020] 
hdfs.DFSClient: DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
123910032, targetPos: 12390
2016-07-10 21:31:46,402 INFO  
[B.defaultRpcServer.handler=26,queue=2,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
125905163, targetPos: 125905130
2016-07-10 21:31:47,027 INFO  
[B.defaultRpcServer.handler=24,queue=0,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
128028947, targetPos: 128028914
2016-07-10 21:31:47,649 INFO  
[B.defaultRpcServer.handler=10,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
130057763, targetPos: 130057730


  was:
As described in https://issues.apache.org/jira/browse/HDFS-8659, the datanode 
is suffering from logging the same repeatedly. Adding log to DFSInputStream, it 
outputs as follows:
 2016-07-10 21:31:42,147 INFO  
[B.defaultRpcServer.handler=22,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
111506876, targetPos: 111506843
2016-07-10 21:31:42,715 INFO  
[B.defaultRpcServer.handler=25,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
113544644, targetPos: 113544611
2016-07-10 21:31:43,341 INFO  
[B.defaultRpcServer.handler=27,queue=0,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
115547269, targetPos: 115547236
2016-07-10 21:31:43,950 INFO  
[B.defaultRpcServer

[jira] [Created] (HBASE-16212) Many Connections are created by wrong seeking pos on InputStream

2016-07-11 Thread Zhihua Deng (JIRA)
Zhihua Deng created HBASE-16212:
---

 Summary: Many Connections are created by wrong seeking pos on 
InputStream
 Key: HBASE-16212
 URL: https://issues.apache.org/jira/browse/HBASE-16212
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.2
Reporter: Zhihua Deng


As described in https://issues.apache.org/jira/browse/HDFS-8659, the datanode 
suffers from logging the same message repeatedly. After adding logging to 
DFSInputStream, it outputs the following:
 2016-07-10 21:31:42,147 INFO  
[B.defaultRpcServer.handler=22,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
111506876, targetPos: 111506843
2016-07-10 21:31:42,715 INFO  
[B.defaultRpcServer.handler=25,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
113544644, targetPos: 113544611
2016-07-10 21:31:43,341 INFO  
[B.defaultRpcServer.handler=27,queue=0,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
115547269, targetPos: 115547236
2016-07-10 21:31:43,950 INFO  
[B.defaultRpcServer.handler=16,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
117532235, targetPos: 117532202
2016-07-10 21:31:43,988 INFO  [B.defaultRpcServer.handler=7,queue=1,port=16020] 
hdfs.DFSClient: DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
119567867, targetPos: 119567834
2016-07-10 21:31:45,228 INFO  
[B.defaultRpcServer.handler=19,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
121787264, targetPos: 121787231
2016-07-10 21:31:45,254 INFO  [B.defaultRpcServer.handler=0,queue=0,port=16020] 
hdfs.DFSClient: DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
123910032, targetPos: 12390
2016-07-10 21:31:46,402 INFO  
[B.defaultRpcServer.handler=26,queue=2,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
125905163, targetPos: 125905130
2016-07-10 21:31:47,027 INFO  
[B.defaultRpcServer.handler=24,queue=0,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
128028947, targetPos: 128028914
2016-07-10 21:31:47,649 INFO  
[B.defaultRpcServer.handler=10,queue=1,port=16020] hdfs.DFSClient: 
DFSClient_NONMAPREDUCE_1984924661_1 seek 
DatanodeInfoWithStorage[10.130.1.29:50010,DS-086bc494-d862-470c-86e8-9cb7929985c6,DISK]
 for BP-360285305-10.130.1.11-1444619256876:blk_1109360829_35627143. pos: 
130057763, targetPos: 130057730



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14743) Add metrics around HeapMemoryManager

2016-07-11 Thread Reid Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-14743:
--
Attachment: HBASE-14743.011.patch

Refactored the else block so that the "do nothing" counter can be incremented at 
any time.
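
For context, a minimal sketch (hypothetical names, not the patch itself) of 
counting every tuner decision, with the "do nothing" case counted on every pass 
where neither area grows:
{code}
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of HeapMemoryManager-style decision counters: the
// "do nothing" counter is bumped on every pass where no resize happens,
// not only in one particular else branch.
class TunerDecisionCounters {
  final LongAdder expandMemstore = new LongAdder();
  final LongAdder expandBlockCache = new LongAdder();
  final LongAdder doNothing = new LongAdder();

  void record(boolean growMemstore, boolean growBlockCache) {
    if (growMemstore) {
      expandMemstore.increment();
    } else if (growBlockCache) {
      expandBlockCache.increment();
    } else {
      doNothing.increment();        // counted on every no-op pass
    }
  }
}
{code}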

> Add metrics around HeapMemoryManager
> 
>
> Key: HBASE-14743
> URL: https://issues.apache.org/jira/browse/HBASE-14743
> Project: HBase
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-14743.009.patch, HBASE-14743.009.rw3.patch, 
> HBASE-14743.009.v2.patch, HBASE-14743.010.patch, HBASE-14743.010.v2.patch, 
> HBASE-14743.011.patch, Metrics snapshot 2016-6-30.png, Screen Shot 2016-06-16 
> at 5.39.13 PM.png, test2_1.png, test2_2.png, test2_3.png, test2_4.png
>
>
> it would be good to know how many invocations there have been.
> How many decided to expand memstore.
> How many decided to expand block cache.
> How many decided to do nothing.
> etc.
> When that's done use those metrics to clean up the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16194) Should count in MSLAB chunk allocation into heap size change when adding duplicate cells

2016-07-11 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372110#comment-15372110
 ] 

Yu Li commented on HBASE-16194:
---

Almost, sir; the only work left is to push the commit to 0.98. Let me get this 
done today.

> Should count in MSLAB chunk allocation into heap size change when adding 
> duplicate cells
> 
>
> Key: HBASE-16194
> URL: https://issues.apache.org/jira/browse/HBASE-16194
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 1.3.0, 1.1.6, 1.2.3
>
> Attachments: HBASE-16194.branch-0.98.patch, 
> HBASE-16194.branch-1.patch, HBASE-16194.branch-1.patch, 
> HBASE-16194.branch-1.v2.patch, HBASE-16194.patch, HBASE-16194_v2.patch
>
>
> See more details about problem description and analysis in HBASE-16193



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372109#comment-15372109
 ] 

Hudson commented on HBASE-16081:


SUCCESS: Integrated in HBase-1.3-IT #750 (See 
[https://builds.apache.org/job/HBase-1.3-IT/750/])
HBASE-16081 Wait for Replication Tasks to complete before killing the (antonov: 
rev 0fda2bc9e7cbd58d4e67d0e9dcc420bc7ea98eab)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/HBaseInterClusterReplicationEndpoint.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/ReplicationEndpoint.java


> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081-v5.patch, HBASE-16081-v6.patch, 
> HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16210) Add Timestamp class to the hbase-common and Timestamp type to HTable.

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372102#comment-15372102
 ] 

Hadoop QA commented on HBASE-16210:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 38s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
7s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
32m 8s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 12s 
{color} | {color:red} hbase-common generated 4 new + 0 unchanged - 0 fixed = 4 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 9s {color} | 
{color:red} hbase-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 6s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 54s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
44s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 156m 6s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-common |
|  |  Public static org.apache.hadoop.hbase.Timestamp.values() may expose 
internal representation by returning Timestamp.values  At 
Timestamp.java:internal representation by returning Timestamp.value

[jira] [Updated] (HBASE-16055) PutSortReducer loses any Visibility/acl attribute set on the Puts

2016-07-11 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-16055:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 0.98.21
   Status: Resolved  (was: Patch Available)

> PutSortReducer loses any Visibility/acl attribute set on the Puts 
> --
>
> Key: HBASE-16055
> URL: https://issues.apache.org/jira/browse/HBASE-16055
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 2.0.0, 1.0.4, 1.4.0, 0.98.21
>
> Attachments: HBASE-16055_0.98_2.patch, HBASE-16055_1.patch, 
> HBASE-16055_2.patch
>
>
> Based on a user discussion, and as the user rightly pointed out, when a 
> PutSortReducer is used any visibility attribute or other attribute set on the 
> Put will be lost, because we create KVs out of the cells in the Puts whereas 
> the ACL and visibility are set as attributes. 
> In TextSortReducer we read that information from the parsed line, but here in 
> PutSortReducer we don't. I think this problem exists in all the existing 
> versions where we support Tags. Correct me if I am wrong here. 
> [~anoop.hbase], [~andrew.purt...@gmail.com]?
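
To illustrate the mechanism described above, a minimal, hypothetical sketch (not 
the committed fix): the visibility/ACL data lives in the Put's attribute map, so 
building KeyValues from the Cells alone drops it. The attribute key and class 
name below are placeholders.

{code}
import java.util.List;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.KeyValueUtil;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PutAttributeLossSketch {
  public static void main(String[] args) {
    Put put = new Put(Bytes.toBytes("row1"));
    put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
    // Visibility/ACL information is stored as an attribute of the Put, not of its Cells.
    put.setAttribute("VISIBILITY", Bytes.toBytes("secret"));  // attribute key is illustrative

    for (List<Cell> cells : put.getFamilyCellMap().values()) {
      for (Cell cell : cells) {
        // The KeyValue built from the Cell carries no trace of the Put attribute, so a
        // reducer that emits only these KeyValues loses the visibility/ACL information.
        KeyValue kv = KeyValueUtil.ensureKeyValue(cell);
        // A fix has to read put.getAttributesMap() here and re-attach the data as cell tags.
        System.out.println(kv + " tagsLength=" + kv.getTagsLength());
      }
    }
  }
}
{code}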



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16055) PutSortReducer loses any Visibility/acl attribute set on the Puts

2016-07-11 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372100#comment-15372100
 ] 

ramkrishna.s.vasudevan commented on HBASE-16055:


The test failures seem unrelated. The newly added test passes in the build.

> PutSortReducer loses any Visibility/acl attribute set on the Puts 
> --
>
> Key: HBASE-16055
> URL: https://issues.apache.org/jira/browse/HBASE-16055
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 2.0.0, 1.0.4, 1.4.0, 0.98.21
>
> Attachments: HBASE-16055_0.98_2.patch, HBASE-16055_1.patch, 
> HBASE-16055_2.patch
>
>
> Based on a user discussion, and as the user rightly pointed out, when a 
> PutSortReducer is used any visibility attribute or other attribute set on the 
> Put will be lost, because we create KVs out of the cells in the Puts whereas 
> the ACL and visibility are set as attributes. 
> In TextSortReducer we read that information from the parsed line, but here in 
> PutSortReducer we don't. I think this problem exists in all the existing 
> versions where we support Tags. Correct me if I am wrong here. 
> [~anoop.hbase], [~andrew.purt...@gmail.com]?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372082#comment-15372082
 ] 

Hadoop QA commented on HBASE-16081:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 31s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 112m 29s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 156m 46s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures |
| Timed out junit tests | 
org.apache.hadoop.hbase.mapreduce.TestTableInputFormat |
|   | org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles |
|   | org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles |
|   | org.apache.hadoop.hbase.mapreduce.TestTableSnapshotInputFormat |
|   | org.apache.hadoop.hbase.filter.TestFuzzyRowFilterEndToEnd |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817295/HBASE-16081-v6.patch |
| JIRA Issue | HBASE-16081 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  co

[jira] [Commented] (HBASE-16095) Add priority to TableDescriptor and priority region open thread pool

2016-07-11 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372079#comment-15372079
 ] 

Andrew Purtell commented on HBASE-16095:


Well, it's not really similar: there are three groups of fixed priority, as you 
mention (smile), as opposed to, potentially, more. 

Let me clarify my 'This makes sense IMHO.' above as a +1. The approach here 
follows current practice with an incremental change. I would like to see us 
move to a different approach but that is not the problem you want to solve and 
I won't insist on Y for X.


> Add priority to TableDescriptor and priority region open thread pool
> 
>
> Key: HBASE-16095
> URL: https://issues.apache.org/jira/browse/HBASE-16095
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.4.0
>
> Attachments: hbase-16095_v0.patch, hbase-16095_v1.patch, 
> hbase-16095_v2.patch
>
>
> This is in a similar area to HBASE-15816, and is also required by the current 
> secondary indexing for Phoenix. 
> The problem with Phoenix secondary indexes is that data table regions depend 
> on index regions to be able to make progress. Possible distributed deadlocks 
> can be prevented via custom RpcScheduler + RpcController configuration via 
> HBASE-11048 and PHOENIX-938. However, region opening has the same deadlock 
> situation, because a data region open has to replay the WAL edits to the index 
> regions. There is only 1 thread pool to open regions, with 3 workers by 
> default. So if the cluster is recovering / restarting from scratch, the 
> deadlock happens because some index regions cannot be opened due to them being 
> in the same queue, waiting for data regions to open (which wait on RPCs to 
> index regions that are not open). This is reproduced in almost all Phoenix 
> secondary index clusters (mutable tables w/o transactions) that we see. 
> The proposal is to have a "high priority" region opening thread pool, and have 
> the HTD carry the relative priority of a table. This may be useful for other 
> "framework" level tables from Phoenix, Tephra, Trafodian, etc. if they want 
> some specific tables to come online faster. 
> As a follow-up patch, we can also take a look at how this priority information 
> can be used by the rpc scheduler on the server side or the rpc controller on 
> the client side, so that we do not have to set priorities manually 
> per-operation. 
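
A rough sketch of the proposal above, not the actual patch: keep a separate 
high-priority region-open executor and pick a pool from a per-table priority 
carried by the table descriptor. The class name, pool sizes, and threshold are 
hypothetical.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PriorityRegionOpenSketch {
  // Hypothetical threshold: tables at or above this priority use the high-priority pool.
  private static final int HIGH_PRIORITY = 200;

  private final ExecutorService normalOpenPool = Executors.newFixedThreadPool(3);
  private final ExecutorService priorityOpenPool = Executors.newFixedThreadPool(3);

  /** Pick the pool from the table-level priority (as proposed, carried by the HTD). */
  ExecutorService poolFor(int tablePriority) {
    return tablePriority >= HIGH_PRIORITY ? priorityOpenPool : normalOpenPool;
  }

  void submitRegionOpen(int tablePriority, Runnable openRegionTask) {
    // Index/system-like tables tagged with a high priority no longer queue behind
    // data regions, which breaks the region-open deadlock described above.
    poolFor(tablePriority).submit(openRegionTask);
  }
}
{code}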



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15643) Need metrics of cache hit ratio, etc for one table

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372050#comment-15372050
 ] 

Hadoop QA commented on HBASE-15643:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 37 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 37s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 13s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
47s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
2s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 1m 7s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 34s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} scalac {color} | {color:green} 2m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} scalac {color} | {color:green} 2m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
1s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 3 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 11s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 29s 
{color} | {color:red} hbase-server generated 5 new + 0 unchanged - 0 fixed = 5 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s 
{color} | {color:green} hbase-hadoop-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 19s 
{color} | {color:green} hbase-hadoop2-compat in the patch passed. {color}

[jira] [Commented] (HBASE-16095) Add priority to TableDescriptor and priority region open thread pool

2016-07-11 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372046#comment-15372046
 ] 

Enis Soztutar commented on HBASE-16095:
---

bq. In general we are approaching this in an ad hoc manner with static pools, 
predefined and limited QoS levels, and magic constants.
Agreed, this is not the cleanest approach. I've thought about doing an 
unlimited thread pool cache, but there is no out-of-the-box thread pool in the 
executor service framework that does what we want. We would like something like 
a thread pool cache with max size N, with an unbounded linked queue to queue up 
events. 
bq. Use the same approach for dispatch to META: META handlers would become just 
a pool allocated at the highest priority level. The downside is more work at 
dispatch than simple test-and-branch with precompiled constants. Just a thought.
This is similar, except that instead of one pool per priority there are three 
groups of fixed priority. 
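
For reference, a minimal sketch of the executor shape described above, plain 
java.util.concurrent rather than HBase code: at most N threads, an unbounded 
linked queue so extra work waits instead of being rejected, and idle threads 
allowed to time out so the pool shrinks like a cache.

{code}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolUnboundedQueueSketch {
  static ThreadPoolExecutor newPool(int maxThreads) {
    // core == max so the pool never grows past maxThreads; extra work waits in the
    // unbounded linked queue instead of being rejected.
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        maxThreads, maxThreads,
        60L, TimeUnit.SECONDS,               // idle threads are released after a minute
        new LinkedBlockingQueue<Runnable>());
    pool.allowCoreThreadTimeOut(true);       // behaves like a cache: shrinks to zero when idle
    return pool;
  }
}
{code}

With an unbounded queue the maximum pool size is never exercised, which is why 
core and max are set to the same value here.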

> Add priority to TableDescriptor and priority region open thread pool
> 
>
> Key: HBASE-16095
> URL: https://issues.apache.org/jira/browse/HBASE-16095
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.4.0
>
> Attachments: hbase-16095_v0.patch, hbase-16095_v1.patch, 
> hbase-16095_v2.patch
>
>
> This is in a similar area to HBASE-15816, and is also required by the current 
> secondary indexing for Phoenix. 
> The problem with Phoenix secondary indexes is that data table regions depend 
> on index regions to be able to make progress. Possible distributed deadlocks 
> can be prevented via custom RpcScheduler + RpcController configuration via 
> HBASE-11048 and PHOENIX-938. However, region opening has the same deadlock 
> situation, because a data region open has to replay the WAL edits to the index 
> regions. There is only 1 thread pool to open regions, with 3 workers by 
> default. So if the cluster is recovering / restarting from scratch, the 
> deadlock happens because some index regions cannot be opened due to them being 
> in the same queue, waiting for data regions to open (which wait on RPCs to 
> index regions that are not open). This is reproduced in almost all Phoenix 
> secondary index clusters (mutable tables w/o transactions) that we see. 
> The proposal is to have a "high priority" region opening thread pool, and have 
> the HTD carry the relative priority of a table. This may be useful for other 
> "framework" level tables from Phoenix, Tephra, Trafodian, etc. if they want 
> some specific tables to come online faster. 
> As a follow-up patch, we can also take a look at how this priority information 
> can be used by the rpc scheduler on the server side or the rpc controller on 
> the client side, so that we do not have to set priorities manually 
> per-operation. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-16081:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081-v5.patch, HBASE-16081-v6.patch, 
> HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.
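
A simplified, self-contained sketch of the hang, not 
HBaseInterClusterReplicationEndpoint itself: once the underlying executor is 
shut down with results still outstanding, a bare CompletionService#take() blocks 
forever, while a timed poll() lets the caller notice the situation and give up.

{code}
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class CompletionServiceHangSketch {
  public static void main(String[] args) throws InterruptedException {
    ExecutorService exec = Executors.newSingleThreadExecutor();
    CompletionService<Integer> cs = new ExecutorCompletionService<Integer>(exec);

    // Simulate the race: the executor is shut down while the caller still expects results.
    exec.shutdownNow();

    // cs.take() here would block forever because no task will ever complete.
    // A timed poll lets the caller detect the condition and bail out instead.
    Future<Integer> f = cs.poll(2, TimeUnit.SECONDS);
    if (f == null) {
      System.out.println("no result within timeout; executor shut down, abandoning the drain");
    }
  }
}
{code}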



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372023#comment-15372023
 ] 

Mikhail Antonov commented on HBASE-16081:
-

Pushed to master, branch-1 and branch-1.3.  Thanks for the patch!  Test 
failures appear to be unrelated, but I will watch the builds on the stable 
branches.

cc [~busbey]

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081-v5.patch, HBASE-16081-v6.patch, 
> HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Joseph (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371998#comment-15371998
 ] 

Joseph commented on HBASE-16081:


I can reproduce neither the TestMasterReplication nor the 
TestMasterFailoverWithProcedures failures on my personal laptop. Checking the 
unit test logs, these tests seem to have failed only because of timeouts. What 
do you think [~eclark], [~ashu210890]?

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081-v5.patch, HBASE-16081-v6.patch, 
> HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16176) Bug fixes/improvements on HBASE-15650 Remove TimeRangeTracker as point of contention when many threads reading a StoreFile

2016-07-11 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371996#comment-15371996
 ] 

Andrew Purtell commented on HBASE-16176:


Thanks [~mantonov]. I reverted HBASE-15650 in 0.98.21-SNAPSHOT and plan to put 
it back along with HBASE-16175.

It turns out we hadn't put both of the changes needed to trigger the bug into 
0.98.

> Bug fixes/improvements on HBASE-15650 Remove TimeRangeTracker as point of 
> contention when many threads reading a StoreFile
> --
>
> Key: HBASE-16176
> URL: https://issues.apache.org/jira/browse/HBASE-16176
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance
>Affects Versions: 1.3.0, 0.98.20
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16176.branch-1.3.001.patch, 
> HBASE-16176.branch-1.3.002.patch, HBASE-16176.branch-1.3.002.patch, 
> HBASE-16176.branch-1.3.003.patch, HBASE-16176.master.001.patch
>
>
> Debugging the parent issue, came up with some improvements on old HBASE-15650 
> "Remove TimeRangeTracker as point of contention when many threads reading a 
> StoreFile". Lets get them in. Here are the changes:
> {code}
>   6  Change HFile Writer constructor so we pass in the TimeRangeTracker, 
> if one,
>   7  on construction rather than set later (the flag and reference were 
> not
>   8  volatile so could have made for issues in concurrent case) 2. Make 
> sure the
>   9  construction of a TimeRange from a TimeRangeTracer on open of an 
> HFile Reader
>  10  never makes a bad minimum value, one that would preclude us reading 
> any
>  11  values from a file (add a log and set min to 0)
>  12 M hbase-common/src/main/java/org/apache/hadoop/hbase/io/TimeRange.java
>  13  Call through to next constructor (if minStamp was 0, we'd skip 
> setting allTime=true)
>  14 M 
> hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
>  15  Add constructor override that takes a TimeRangeTracker (set when 
> flushing but
>  16  not when compacting)
>  17 M 
> hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
>  18  Add override creating an HFile in tmp that takes a TimeRangeTracker
>  19 M 
> hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
>  20  Add override for HFile Writer that takes a TimeRangeTracker
>  21  Take it on construction instead of having it passed by a setter 
> later (flags
>  22  and reference set by the setter were not volatile... could have been 
> prob
>  23  in concurrent case)
>  24 M 
> hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/TimeRangeTracker.java
>  25  Log WARN if bad initial TimeRange value (and then 'fix' it)
>  26 M 
> hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTimeRangeTracker.java
>  27  A few tests to prove serialization works as expected and that we'll 
> get a bad min if
>  28  not constructed properly.
> {code}
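
The "take it on construction instead of a setter" items above are about safe 
publication: a non-volatile flag and reference assigned through a setter after 
the writer has been handed to another thread may never become visible to that 
thread, while final fields assigned in the constructor are published safely. A 
generic sketch of the two shapes (not the actual StoreFile writer code):

{code}
import org.apache.hadoop.hbase.regionserver.TimeRangeTracker;

// Setter style: without volatile, a thread that already holds the writer may never
// observe tracker/trackerSet being updated by another thread.
class SetterStyleWriter {
  private boolean trackerSet;          // not volatile: visibility is not guaranteed
  private TimeRangeTracker tracker;    // not volatile either

  void setTimeRangeTracker(TimeRangeTracker t) {
    this.tracker = t;
    this.trackerSet = true;
  }
}

// Constructor style: final fields are safely published when the object is shared.
class ConstructorStyleWriter {
  private final TimeRangeTracker tracker;   // may be null when none is supplied

  ConstructorStyleWriter(TimeRangeTracker tracker) {
    this.tracker = tracker;
  }
}
{code}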



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16092) Procedure v2 - complete child procedure support

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371986#comment-15371986
 ] 

Hadoop QA commented on HBASE-16092:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
9s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
35s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 13s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 20s 
{color} | {color:green} hbase-protocol in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 51s 
{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:bl

[jira] [Commented] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371982#comment-15371982
 ] 

Hadoop QA commented on HBASE-16081:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 38s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 107m 22s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 167m 13s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.security.token.TestGenerateDelegationToken |
|   | hadoop.hbase.replication.TestMasterReplication |
|   | hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures |
|   | hadoop.hbase.regionserver.TestRegionMergeTransactionOnCluster |
| Timed out junit tests | 
org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer2 |
|   | org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer |
|   | org.apache.hadoop.hbase.mapred.TestMultiTableSnapshotInputFormat |
|   | org.apache.hadoop.hbase.mapred.TestTableSnapshotInputFormat |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817257/HBASE-16081-v3.pa

[jira] [Updated] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Ashu Pachauri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashu Pachauri updated HBASE-16081:
--
Release Note: When a replication endpoint is sent a shutdown request by the 
replication source in situations like removing a peer, we now try to gracefully 
shut it down by draining the items already sent for replication to the peer 
cluster. If the drain does not complete in the specified time 
(hbase.rpc.timeout * replication.source.maxterminationmultiplier), the 
regionserver is aborted to avoid blocking the WAL roll.
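
A simplified sketch of the shutdown sequence the release note describes, not the 
committed code: stop accepting new shipments, wait up to the configured budget 
for in-flight ones to drain, and tell the caller to abort the regionserver if 
the drain does not finish, rather than let WAL rolling block forever.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public class GracefulEndpointShutdownSketch {
  /** Returns true if the drain finished in time; false means the caller should abort. */
  static boolean drainOrGiveUp(ExecutorService shipperPool, long rpcTimeoutMs, int multiplier)
      throws InterruptedException {
    shipperPool.shutdown();                      // stop new shipments, let in-flight ones finish
    long budgetMs = rpcTimeoutMs * multiplier;   // hbase.rpc.timeout * maxterminationmultiplier
    return shipperPool.awaitTermination(budgetMs, TimeUnit.MILLISECONDS);
  }
}
{code}

A false return maps onto the abort path mentioned in the release note.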

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081-v5.patch, HBASE-16081-v6.patch, 
> HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15643) Need metrics of cache hit ratio, etc for one table

2016-07-11 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371953#comment-15371953
 ] 

Alicia Ying Shu commented on HBASE-15643:
-

[~eclark] Thanks for reviewing. We passed the TableName down along the calls so 
that we can use it as a key when collecting block cache metrics into a map in 
CacheStats.java. In what other places would you like a map? Thanks. 
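
A hedged sketch of the kind of per-table bookkeeping described above: a 
concurrent map keyed by TableName that accumulates hit and miss counters. Class, 
field, and method names here are illustrative, not the patch's.

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.hadoop.hbase.TableName;

public class PerTableCacheStatsSketch {
  private final ConcurrentMap<TableName, AtomicLong> hits =
      new ConcurrentHashMap<TableName, AtomicLong>();
  private final ConcurrentMap<TableName, AtomicLong> misses =
      new ConcurrentHashMap<TableName, AtomicLong>();

  private static void bump(ConcurrentMap<TableName, AtomicLong> map, TableName table) {
    AtomicLong counter = map.get(table);
    if (counter == null) {
      AtomicLong fresh = new AtomicLong();
      AtomicLong raced = map.putIfAbsent(table, fresh);  // another thread may win the race
      counter = (raced != null) ? raced : fresh;
    }
    counter.incrementAndGet();
  }

  public void hit(TableName table)  { bump(hits, table); }
  public void miss(TableName table) { bump(misses, table); }

  public double hitRatio(TableName table) {
    AtomicLong h = hits.get(table);
    AtomicLong m = misses.get(table);
    long hc = (h == null) ? 0 : h.get();
    long mc = (m == null) ? 0 : m.get();
    return (hc + mc) == 0 ? 0.0 : (double) hc / (hc + mc);
  }
}
{code}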

> Need metrics of cache hit ratio, etc for one table
> --
>
> Key: HBASE-15643
> URL: https://issues.apache.org/jira/browse/HBASE-15643
> Project: HBase
>  Issue Type: Improvement
>Reporter: Heng Chen
>Assignee: Alicia Ying Shu
> Attachments: HBASE-15643.patch
>
>
> There are many tables on our cluster, and only some of them need to be read 
> online. 
> We could improve read performance with the cache, but we need some metrics for 
> this at the table level. There are a few we can collect: BlockCacheCount, 
> BlockCacheSize, BlockCacheHitCount, BlockCacheMissCount, BlockCacheHitPercent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16210) Add Timestamp class to the hbase-common and Timestamp type to HTable.

2016-07-11 Thread Sai Teja Ranuva (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sai Teja Ranuva updated HBASE-16210:

Status: Patch Available  (was: In Progress)

> Add Timestamp class to the hbase-common and Timestamp type to HTable.
> -
>
> Key: HBASE-16210
> URL: https://issues.apache.org/jira/browse/HBASE-16210
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sai Teja Ranuva
>Assignee: Sai Teja Ranuva
>Priority: Minor
>  Labels: patch, testing
> Attachments: HBASE-16210.master.1.patch
>
>
> This is a sub-issue of 
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is 
> a small step towards completely adding Hybrid Logical Clocks(HLC) to HBase. 
> The main idea of HLC is described in 
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with 
> the motivation of adding it to HBase. 
> What is this patch/issue about ?
> This issue attempts to add a timestamp class to hbase-common and timestamp 
> type to HTable. 
> This is a part of the attempt to get HLC into HBase. This patch does not 
> interfere with the current working of HBase.
> Why a Timestamp class?
> The Timestamp class serves as an abstraction to represent time in HBase in 64 
> bits. 
> It is only used for manipulating the 64 bits of the timestamp and is not 
> concerned with the actual time.
> There are three types of timestamps: System time, Custom, and HLC. Each of 
> them has methods to manipulate the 64 bits of the timestamp. 
> HTable changes: Added a timestamp type property to HTable. This will help 
> HBase coexist with the old type of timestamp as well as the HLC that will be 
> introduced. The default is set to the custom timestamp (the current way 
> timestamps are used); the default for an unset timestamp is also the custom 
> timestamp, as it should be. The default timestamp will be changed to HLC when 
> the HLC feature is introduced completely in HBase.
> Suggestions are welcome. 
> [~enis] - The timestamp class is the one written by you, I have not made any 
> changes. I just changed the default timestamp of the table to Custom. 
> [~j...@cloudera.com], [~stack]
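
The exact 64-bit layout is not spelled out here, so the sketch below only 
illustrates the general idea of such an abstraction, assuming a common HLC-style 
encoding with physical milliseconds in the high bits and a logical counter in 
the low bits; the actual patch may use a different layout.

{code}
// Illustrative only: assumes 44 high bits of physical time and 20 low bits of
// logical counter, which is not necessarily the layout used by the patch.
public final class HlcTimestampSketch {
  private static final int LOGICAL_BITS = 20;
  private static final long LOGICAL_MASK = (1L << LOGICAL_BITS) - 1;

  /** Pack physical milliseconds and a logical counter into one 64-bit value. */
  static long toLong(long physicalMs, long logical) {
    return (physicalMs << LOGICAL_BITS) | (logical & LOGICAL_MASK);
  }

  static long physical(long ts) { return ts >>> LOGICAL_BITS; }

  static long logical(long ts)  { return ts & LOGICAL_MASK; }

  public static void main(String[] args) {
    long ts = toLong(System.currentTimeMillis(), 7);
    System.out.println("physical=" + physical(ts) + " logical=" + logical(ts));
  }
}
{code}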



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16210) Add Timestamp class to the hbase-common and Timestamp type to HTable.

2016-07-11 Thread Sai Teja Ranuva (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sai Teja Ranuva updated HBASE-16210:

Description: 
This is a sub-issue of 
[HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is a 
small step towards completely adding Hybrid Logical Clocks(HLC) to HBase. The 
main idea of HLC is described in 
[HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with the 
motivation of adding it to HBase. 

What is this patch/issue about ?
This issue attempts to add a timestamp class to hbase-common and timestamp type 
to HTable. 
This is a part of the attempt to get HLC into HBase. This patch does not 
interfere with the current working of HBase.

Why a Timestamp class?
The Timestamp class serves as an abstraction to represent time in HBase in 64 bits. 
It is only used for manipulating the 64 bits of the timestamp and is not 
concerned with the actual time.
There are three types of timestamps: System time, Custom, and HLC. Each of them 
has methods to manipulate the 64 bits of the timestamp. 

HTable changes: Added a timestamp type property to HTable. This will help HBase 
coexist with the old type of timestamp as well as the HLC that will be 
introduced. The default is set to the custom timestamp (the current way 
timestamps are used); the default for an unset timestamp is also the custom 
timestamp, as it should be. The default timestamp will be changed to HLC when 
the HLC feature is introduced completely in HBase.

Suggestions are welcome. 

[~enis] - The timestamp class is the one written by you, I have not made any 
changes. I just changed the default timestamp of the table to Custom. 
[~j...@cloudera.com], [~stack]

  was:
This is a sub-issue of 
[HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is a 
small step towards completely adding Hybrid Logical Clocks(HLC) to HBase. The 
main idea of HLC is described in 
[HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with the 
motivation of adding it to HBase. 

What is this patch/issue about ?
This issue attempts to add a timestamp class to hbase-common and timestamp type 
to HTable. 
This is a part of the attempt to get HLC into HBase. This patch does not 
interfere with the current working of HBase.

Why Timestamp Class ?
Timestamp class can be as an abstraction to represent time in Hbase in 64 bits. 
It is just used for manipulating with the 64 bits of the timestamp and is not 
concerned about the actual time.
There are three types of timestamps. System time, Custom and HLC. Each one of 
it has methods to manipulate the 64 bits of timestamp. 

HTable changes: Added a timestamp type property to HTable. This will help HBase 
exist in conjunction with old type of timestamp and also the HLC which will be 
introduced. The default is set to custom timestamp(current way of usage of 
timestamp). default unset timestamp is also custom timestamp as it should be 
so. The default timestamp will be changed to HLC when HLC feature is introduced 
completely in HBase.

Suggestions are welcome. 

@stack
[~enis]



> Add Timestamp class to the hbase-common and Timestamp type to HTable.
> -
>
> Key: HBASE-16210
> URL: https://issues.apache.org/jira/browse/HBASE-16210
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sai Teja Ranuva
>Assignee: Sai Teja Ranuva
>Priority: Minor
>  Labels: patch, testing
> Attachments: HBASE-16210.master.1.patch
>
>
> This is a sub-issue of 
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is 
> a small step towards completely adding Hybrid Logical Clocks(HLC) to HBase. 
> The main idea of HLC is described in 
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with 
> the motivation of adding it to HBase. 
> What is this patch/issue about ?
> This issue attempts to add a timestamp class to hbase-common and timestamp 
> type to HTable. 
> This is a part of the attempt to get HLC into HBase. This patch does not 
> interfere with the current working of HBase.
> Why Timestamp Class ?
> Timestamp class can be as an abstraction to represent time in Hbase in 64 
> bits. 
> It is just used for manipulating with the 64 bits of the timestamp and is not 
> concerned about the actual time.
> There are three types of timestamps. System time, Custom and HLC. Each one of 
> it has methods to manipulate the 64 bits of timestamp. 
> HTable changes: Added a timestamp type property to HTable. This will help 
> HBase exist in conjunction with old type of timestamp and also the HLC which 
> will be introduced. The default is set to custom timestamp(current way of 
> usage of timestamp). default unset timestamp is also custom timestamp as it 
> should be so. The default 

[jira] [Updated] (HBASE-16210) Add Timestamp class to the hbase-common and Timestamp type to HTable.

2016-07-11 Thread Sai Teja Ranuva (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sai Teja Ranuva updated HBASE-16210:

Description: 
This is a sub-issue of 
[HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is a 
small step towards completely adding Hybrid Logical Clocks(HLC) to HBase. The 
main idea of HLC is described in 
[HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with the 
motivation of adding it to HBase. 

What is this patch/issue about ?
This issue attempts to add a timestamp class to hbase-common and timestamp type 
to HTable. 
This is a part of the attempt to get HLC into HBase. This patch does not 
interfere with the current working of HBase.

Why a Timestamp class?
The Timestamp class serves as an abstraction to represent time in HBase in 64 bits. 
It is only used for manipulating the 64 bits of the timestamp and is not 
concerned with the actual time.
There are three types of timestamps: System time, Custom, and HLC. Each of them 
has methods to manipulate the 64 bits of the timestamp. 

HTable changes: Added a timestamp type property to HTable. This will help HBase 
coexist with the old type of timestamp as well as the HLC that will be 
introduced. The default is set to the custom timestamp (the current way 
timestamps are used); the default for an unset timestamp is also the custom 
timestamp, as it should be. The default timestamp will be changed to HLC when 
the HLC feature is introduced completely in HBase.

Suggestions are welcome. 

@stack
[~enis]


  was:
This is a sub-issue of 
[HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is a 
small step towards completely adding Hybrid Logical Clocks(HLC) to HBase. The 
main idea of HLC is described in 
[HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with the 
motivation of adding it to HBase. 

What is this patch/issue about ?
This issue attempts to add a timestamp class to hbase-common and timestamp type 
to HTable. 
This is a part of the attempt to get HLC into HBase. This patch does not 
interfere with the current working of HBase.

Why Timestamp Class ?
Timestamp class can be as an abstraction to represent time in Hbase in 64 bits. 
It is just used for manipulating with the 64 bits of the timestamp and is not 
concerned about the actual time.
There are three types of timestamps. System time, Custom and HLC. Each one of 
it has methods to manipulate the 64 bits of timestamp. 

HTable changes: Added a timestamp type property to HTable. This will help HBase 
exist in conjunction with old type of timestamp and also the HLC which will be 
introduced. The default is set to custom timestamp(current way of usage of 
timestamp). default unset timestamp is also custom timestamp as it should be 
so. The default timestamp will be changed to HLC when HLC feature is introduced 
completely in HBase.

Suggestions are welcome. 



> Add Timestamp class to the hbase-common and Timestamp type to HTable.
> -
>
> Key: HBASE-16210
> URL: https://issues.apache.org/jira/browse/HBASE-16210
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sai Teja Ranuva
>Assignee: Sai Teja Ranuva
>Priority: Minor
>  Labels: patch, testing
> Attachments: HBASE-16210.master.1.patch
>
>
> This is a sub-issue of 
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is 
> a small step towards completely adding Hybrid Logical Clocks(HLC) to HBase. 
> The main idea of HLC is described in 
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with 
> the motivation of adding it to HBase. 
> What is this patch/issue about ?
> This issue attempts to add a timestamp class to hbase-common and timestamp 
> type to HTable. 
> This is a part of the attempt to get HLC into HBase. This patch does not 
> interfere with the current working of HBase.
> Why Timestamp Class ?
> Timestamp class can be as an abstraction to represent time in Hbase in 64 
> bits. 
> It is just used for manipulating with the 64 bits of the timestamp and is not 
> concerned about the actual time.
> There are three types of timestamps. System time, Custom and HLC. Each one of 
> it has methods to manipulate the 64 bits of timestamp. 
> HTable changes: Added a timestamp type property to HTable. This will help 
> HBase exist in conjunction with old type of timestamp and also the HLC which 
> will be introduced. The default is set to custom timestamp(current way of 
> usage of timestamp). default unset timestamp is also custom timestamp as it 
> should be so. The default timestamp will be changed to HLC when HLC feature 
> is introduced completely in HBase.
> Suggestions are welcome. 
> @stack
> [~enis]



--
This message was sent by Atlassian JIRA

[jira] [Commented] (HBASE-16183) Correct errors in example program of coprocessor in Ref Guide

2016-07-11 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371937#comment-15371937
 ] 

Jerry He commented on HBASE-16183:
--

Hi, [~waterlx]
You were added to the contributor list previously.  Did you change your jira id 
recently?

> Correct errors in example program of coprocessor in Ref Guide
> -
>
> Key: HBASE-16183
> URL: https://issues.apache.org/jira/browse/HBASE-16183
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Xiang Li
>Assignee: Xiang Li
>Priority: Minor
> Attachments: HBASE-16183.patch
>
>
> 1. In Section 89.3.3
> change
> {code}
> String path = "hdfs://:/user//coprocessor.jar";
> {code}
> into
> {code}
> Path path = new 
> Path("hdfs://bdavm1506.svl.ibm.com:8020/user/hbase/coprocessor.jar");
> {code}
> Reason:
>   The second parameter of HTableDescriptor.addCoprocessor() is 
> org.apache.hadoop.fs.Path, not String.
>   See 
> http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/HTableDescriptor.html
> 2. In Section 89.3.3
> change
> {code}
> HBaseAdmin admin = new HBaseAdmin(conf);
> {code}
> into
> {code}
> Connection connection = ConnectionFactory.createConnection(conf);
> Admin admin = connection.getAdmin();
> {code}
> Reason:
>   HBASE-12083 makes new HBaseAdmin() deprecated and the instance of Admin is 
> supposed to get from Connection.getAdmin()
>   Also see 
> http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html
> 3. In section 90.1
> change
> {code}
> public void preGetOp(final ObserverContext e, final Get get, final List 
> results)
> {code}
> into
> {code}
> public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, 
> final Get get, final List<Cell> results)
> {code}
> change
> {code}
> List kvs = new ArrayList(results.size());
> {code}
> into
> {code}
> List<Cell> kvs = new ArrayList<Cell>(results.size());
> {code}
> change
> {code}
> public RegionScanner preScannerOpen(final ObserverContext e, final Scan scan,
> {code}
> into
> {code}
> public RegionScanner preScannerOpen(final 
> ObserverContext<RegionCoprocessorEnvironment> e, final Scan scan,
> {code}
> change
> {code}
> public boolean postScannerNext(final ObserverContext e, final InternalScanner 
> s, final List results, final int limit, final boolean hasMore) throws 
> IOException {
> {code}
> into
> {code}
> public boolean postScannerNext(final 
> ObserverContext<RegionCoprocessorEnvironment> e, final InternalScanner s, 
> final List<Result> results, final int limit, final boolean hasMore) throws 
> IOException {
> {code}
> change
> {code}
> Iterator iterator = results.iterator();
> {code}
> into
> {code}
> Iterator<Result> iterator = results.iterator();
> {code}
> Reason:
>   Generic
> 4. In section 90.1
> change
> {code}
> preGet(e, get, kvs);
> {code}
> into
> {code}
> super.preGetOp(e, get, kvs);
> {code}
> Reason:
>   There is not a function called preGet() provided by BaseRegionObserver or 
> its super class/interface. I believe we need to call preGetOp() of the super 
> class of RegionObserverExample here.
> 5. In section 90.1
> change
> {code}
> kvs.add(KeyValueUtil.ensureKeyValue(c));
> {code}
> into
> {code}
> kvs.add(c);
> {code}
> Reason:
>   KeyValueUtil.ensureKeyValue() is deprecated.
>   See 
> http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/KeyValueUtil.html
>   and https://issues.apache.org/jira/browse/HBASE-12079
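
As a side note for readers applying these corrections by hand, here is a minimal, 
self-contained sketch (not taken from the attached patch) that combines the 
Connection/Admin pattern from item 2 with the Path-based addCoprocessor() call from 
item 1; the table name, observer class name and jar location are illustrative 
placeholders only:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Coprocessor;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class LoadCoprocessorSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Admin comes from a Connection; new HBaseAdmin(conf) is deprecated (HBASE-12083).
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      TableName tableName = TableName.valueOf("users");   // illustrative table name
      admin.disableTable(tableName);
      HTableDescriptor desc = admin.getTableDescriptor(tableName);
      // addCoprocessor() takes an org.apache.hadoop.fs.Path for the jar, not a String.
      Path path = new Path("hdfs://namenode.example.com:8020/user/hbase/coprocessor.jar");
      desc.addCoprocessor("org.example.RegionObserverExample", path,
          Coprocessor.PRIORITY_USER, null);
      admin.modifyTable(tableName, desc);
      admin.enableTable(tableName);
    }
  }
}
{code}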



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Ashu Pachauri (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371936#comment-15371936
 ] 

Ashu Pachauri commented on HBASE-16081:
---

+1 on v6.

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081-v5.patch, HBASE-16081-v6.patch, 
> HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.
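
For anyone reading along, a tiny standalone illustration (not the patch itself; the 
class and strings are made up) of why a blocking CompletionService#take can hang once 
the backing executor has been shut down, and how a poll with a timeout keeps the 
caller from waiting forever:
{code}
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class CompletionServiceShutdownSketch {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    CompletionService<String> completion = new ExecutorCompletionService<>(pool);

    completion.submit(() -> "replicated batch 1");
    // Simulate the race: the executor is torn down (e.g. during peer removal)
    // while the consumer still expects a second result that is never submitted.
    pool.shutdownNow();

    for (int expected = 0; expected < 2; expected++) {
      // completion.take() here could block forever; poll with a timeout lets the
      // consumer notice that nothing more is coming and bail out gracefully.
      Future<String> done = completion.poll(3, TimeUnit.SECONDS);
      if (done == null) {
        System.out.println("no more results; executor is gone, stop waiting");
        break;
      }
      System.out.println("got: " + done.get());
    }
    System.out.println("consumer finished instead of hanging");
  }
}
{code}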



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11625) Reading datablock throws "Invalid HFile block magic" and can not switch to hdfs checksum

2016-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371932#comment-15371932
 ] 

Hudson commented on HBASE-11625:


FAILURE: Integrated in HBase-1.1-JDK7 #1745 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1745/])
HBASE-11625 - Verifies data before building HFileBlock. - Adds (appy: rev 
79b77e3542a93d362c5565c98ce8d8e1f0044337)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestChecksum.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java


> Reading datablock throws "Invalid HFile block magic" and can not switch to 
> hdfs checksum 
> -
>
> Key: HBASE-11625
> URL: https://issues.apache.org/jira/browse/HBASE-11625
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.94.21, 0.98.4, 0.98.5, 1.0.1.1, 1.0.3
>Reporter: qian wang
>Assignee: Appy
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 1.1.6
>
> Attachments: 2711de1fdf73419d9f8afc6a8b86ce64.gz, 
> HBASE-11625-branch-1-v1.patch, HBASE-11625-branch-1.2-v1.patch, 
> HBASE-11625-branch-1.2-v2.patch, HBASE-11625-branch-1.2-v3.patch, 
> HBASE-11625-branch-1.2-v4.patch, HBASE-11625-master-v2.patch, 
> HBASE-11625-master-v3.patch, HBASE-11625-master.patch, 
> HBASE-11625.branch-1.1.001.patch, HBASE-11625.patch, correct-hfile, 
> corrupted-header-hfile
>
>
> When using hbase checksum, readBlockDataInternal() in HFileBlock.java can hit 
> file corruption, but it can only switch to the hdfs checksum input stream at 
> validateBlockChecksum(). If the data block's header is corrupted when b = new 
> HFileBlock(), it throws the exception "Invalid HFile block magic" and the rpc 
> call fails.
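
The committed change is summarized above as "Verifies data before building 
HFileBlock"; the general shape of that fix is a verify-before-parse ordering. A 
schematic, HBase-independent sketch (the layout, names and CRC choice are assumptions 
for illustration only):
{code}
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public class VerifyThenParseSketch {
  // Assumed toy layout for the sketch: [payload bytes][8-byte CRC32 of payload].
  static boolean checksumOk(byte[] onDisk) {
    ByteBuffer buf = ByteBuffer.wrap(onDisk);
    byte[] payload = new byte[onDisk.length - 8];
    buf.get(payload);
    long stored = buf.getLong();
    CRC32 crc = new CRC32();
    crc.update(payload, 0, payload.length);
    return crc.getValue() == stored;
  }

  static void readBlock(byte[] onDisk) {
    // Verify the raw bytes first; only parse the header (magic, lengths, ...)
    // once they check out, so a corrupt header surfaces as a checksum failure
    // (which can fall back to another checksum source) rather than a magic error.
    if (!checksumOk(onDisk)) {
      System.out.println("checksum mismatch: fall back / re-read instead of parsing");
      return;
    }
    System.out.println("checksum ok: safe to parse the block header");
  }

  public static void main(String[] args) {
    readBlock(new byte[]{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12});  // deliberately corrupt
  }
}
{code}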



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-16081:
---
Status: Patch Available  (was: Open)

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081-v5.patch, HBASE-16081-v6.patch, 
> HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-16081:
---
Attachment: HBASE-16081-v6.patch

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081-v5.patch, HBASE-16081-v6.patch, 
> HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16092) Procedure v2 - complete child procedure support

2016-07-11 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-16092:

Attachment: HBASE-16092-v2.patch

v2 adds the missing method ProcedureTestingUtil.waitProcedure() that QA was 
complaining about for v1

> Procedure v2 - complete child procedure support
> ---
>
> Key: HBASE-16092
> URL: https://issues.apache.org/jira/browse/HBASE-16092
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16092-v0.patch, HBASE-16092-v1.patch, 
> HBASE-16092-v2.patch
>
>
> There was a missing part on the child procedure tracking.
> child procedure were never deleted from the wal on parent completion,
> leading to failures on startup. (no code uses child procedures yet)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-16081:
---
Attachment: (was: HBASE-16081-v6.patch)

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081-v5.patch, HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-16081:
---
Attachment: HBASE-16081-v6.patch

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081-v5.patch, HBASE-16081-v6.patch, 
> HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16160) Get the UnsupportedOperationException when using delegation token with encryption

2016-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371903#comment-15371903
 ] 

Hudson commented on HBASE-16160:


SUCCESS: Integrated in HBase-1.3 #777 (See 
[https://builds.apache.org/job/HBase-1.3/777/])
HBASE-16160 Support RPC encryption with direct ByteBuffers (Colin Ma via 
(garyh: rev 00c91b01ea0d89810541e0bef3152605519f9f70)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/TestDelegationTokenWithEncryption.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/BufferChain.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/TestGenerateDelegationToken.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/SecureTestCluster.java


> Get the UnsupportedOperationException when using delegation token with 
> encryption
> -
>
> Key: HBASE-16160
> URL: https://issues.apache.org/jira/browse/HBASE-16160
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Colin Ma
>Assignee: Colin Ma
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16160.001.patch, HBASE-16160.002.patch, 
> HBASE-16160.003.patch
>
>
> When using a delegation token with encryption, a Put operation gets the 
> following exception:
> [RpcServer.FifoWFPBQ.priority.handler=5,queue=1,port=48345] 
> ipc.CallRunner(161): 
> RpcServer.FifoWFPBQ.priority.handler=5,queue=1,port=48345: caught: 
> java.lang.UnsupportedOperationException
> at java.nio.ByteBuffer.array(ByteBuffer.java:959)
> at 
> org.apache.hadoop.hbase.ipc.BufferChain.getBytes(BufferChain.java:66)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Call.wrapWithSasl(RpcServer.java:547)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Call.setResponse(RpcServer.java:467)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:189)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:169)
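
The root cause above is a general JDK rule rather than anything HBase-specific: 
ByteBuffer.array() only works for heap buffers with an accessible backing array, and 
direct buffers throw UnsupportedOperationException. A small standalone illustration of 
the failure and of a hasArray()-guarded copy (not the code from the patch):
{code}
import java.nio.ByteBuffer;

public class DirectBufferArrayDemo {
  // Returns the buffer's remaining bytes without assuming a backing array.
  static byte[] toBytes(ByteBuffer buf) {
    byte[] out = new byte[buf.remaining()];
    if (buf.hasArray()) {
      // Heap buffer: the array is accessible, copy out the used region.
      System.arraycopy(buf.array(), buf.arrayOffset() + buf.position(), out, 0, out.length);
    } else {
      // Direct buffer: array() would throw UnsupportedOperationException, so copy via get().
      buf.duplicate().get(out);
    }
    return out;
  }

  public static void main(String[] args) {
    ByteBuffer direct = ByteBuffer.allocateDirect(16);
    direct.put(new byte[]{1, 2, 3, 4}).flip();
    System.out.println("copied " + toBytes(direct).length + " bytes from a direct buffer");
    try {
      direct.array(); // this is the call that blows up in the stack trace above
    } catch (UnsupportedOperationException e) {
      System.out.println("direct.array() -> " + e);
    }
  }
}
{code}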



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16092) Procedure v2 - complete child procedure support

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371900#comment-15371900
 ] 

Hadoop QA commented on HBASE-16092:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
35s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
11s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 12s 
{color} | {color:red} hbase-procedure in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 17s 
{color} | {color:red} hbase-procedure in the patch failed with JDK v1.8.0. 
{color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 17s {color} | 
{color:red} hbase-procedure in the patch failed with JDK v1.8.0. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 17s {color} 
| {color:red} hbase-procedure in the patch failed with JDK v1.8.0. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 13s 
{color} | {color:red} hbase-procedure in the patch failed with JDK v1.7.0_80. 
{color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 13s {color} | 
{color:red} hbase-procedure in the patch failed with JDK v1.7.0_80. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 13s {color} 
| {color:red} hbase-procedure in the patch failed with JDK v1.7.0_80. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 0m 50s 
{color} | {color:red} Patch causes 32 errors with Hadoop v2.4.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 24s 
{color} | {color:red} Patch causes 32 errors with Hadoop v2.4.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 59s 
{color} | {color:red} Patch causes 32 errors with Hadoop v2.5.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 34s 
{color} | {color:red} Patch causes 32 errors with Hadoop v2.5.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 9s 
{color} | {color:red} Patch causes 32 errors with Hadoop v2.5.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 44s 
{color} | {color:red} Patch causes 32 errors with Hadoop v2.6.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 22s 
{color} | {color:red} Patch causes 32 errors with Hadoop v2.6.2. {color} |
| {color:red}-1{color} | {colo

[jira] [Commented] (HBASE-16194) Should count in MSLAB chunk allocation into heap size change when adding duplicate cells

2016-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371890#comment-15371890
 ] 

Hudson commented on HBASE-16194:


FAILURE: Integrated in HBase-1.1-JDK8 #1831 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1831/])
HBASE-16194 Should count in MSLAB chunk allocation into heap size change (liyu: 
rev 25c7dee21e7060f3be586b05539ae5954c0a)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultMemStore.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HeapMemStoreLAB.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultMemStore.java


> Should count in MSLAB chunk allocation into heap size change when adding 
> duplicate cells
> 
>
> Key: HBASE-16194
> URL: https://issues.apache.org/jira/browse/HBASE-16194
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 1.2.1, 1.1.5, 0.98.20
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 1.3.0, 1.1.6, 1.2.3
>
> Attachments: HBASE-16194.branch-0.98.patch, 
> HBASE-16194.branch-1.patch, HBASE-16194.branch-1.patch, 
> HBASE-16194.branch-1.v2.patch, HBASE-16194.patch, HBASE-16194_v2.patch
>
>
> See more details about problem description and analysis in HBASE-16193



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11625) Reading datablock throws "Invalid HFile block magic" and can not switch to hdfs checksum

2016-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371889#comment-15371889
 ] 

Hudson commented on HBASE-11625:


FAILURE: Integrated in HBase-1.1-JDK8 #1831 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1831/])
HBASE-11625 - Verifies data before building HFileBlock. - Adds (appy: rev 
79b77e3542a93d362c5565c98ce8d8e1f0044337)
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestChecksum.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java


> Reading datablock throws "Invalid HFile block magic" and can not switch to 
> hdfs checksum 
> -
>
> Key: HBASE-11625
> URL: https://issues.apache.org/jira/browse/HBASE-11625
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.94.21, 0.98.4, 0.98.5, 1.0.1.1, 1.0.3
>Reporter: qian wang
>Assignee: Appy
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.2, 1.1.6
>
> Attachments: 2711de1fdf73419d9f8afc6a8b86ce64.gz, 
> HBASE-11625-branch-1-v1.patch, HBASE-11625-branch-1.2-v1.patch, 
> HBASE-11625-branch-1.2-v2.patch, HBASE-11625-branch-1.2-v3.patch, 
> HBASE-11625-branch-1.2-v4.patch, HBASE-11625-master-v2.patch, 
> HBASE-11625-master-v3.patch, HBASE-11625-master.patch, 
> HBASE-11625.branch-1.1.001.patch, HBASE-11625.patch, correct-hfile, 
> corrupted-header-hfile
>
>
> When using hbase checksum, readBlockDataInternal() in HFileBlock.java can hit 
> file corruption, but it can only switch to the hdfs checksum input stream at 
> validateBlockChecksum(). If the data block's header is corrupted when b = new 
> HFileBlock(), it throws the exception "Invalid HFile block magic" and the rpc 
> call fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-14933) check_compatibility.sh does not work with jdk8

2016-07-11 Thread Dima Spivak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dima Spivak resolved HBASE-14933.
-
Resolution: Cannot Reproduce

The problem was probably not related to JDK8 but to the underlying API change 
in how Java API Compliance Checker handled our list of interface annotations.

> check_compatibility.sh does not work with jdk8
> --
>
> Key: HBASE-14933
> URL: https://issues.apache.org/jira/browse/HBASE-14933
> Project: HBase
>  Issue Type: Task
>  Components: scripts
>Reporter: Nick Dimiduk
>Assignee: Dima Spivak
>Priority: Minor
> Fix For: 2.0.0
>
>
> Specifically, Oracle jdk1.8.0_65 on OSX and OpenJDk 1.8.0_45-internal-b14 on 
> ubuntu.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HBASE-14933) check_compatibility.sh does not work with jdk8

2016-07-11 Thread Dima Spivak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dima Spivak reopened HBASE-14933:
-

> check_compatibility.sh does not work with jdk8
> --
>
> Key: HBASE-14933
> URL: https://issues.apache.org/jira/browse/HBASE-14933
> Project: HBase
>  Issue Type: Task
>  Components: scripts
>Reporter: Nick Dimiduk
>Assignee: Dima Spivak
>Priority: Minor
> Fix For: 2.0.0
>
>
> Specifically, Oracle jdk1.8.0_65 on OSX and OpenJDk 1.8.0_45-internal-b14 on 
> ubuntu.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16160) Get the UnsupportedOperationException when using delegation token with encryption

2016-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371875#comment-15371875
 ] 

Hudson commented on HBASE-16160:


FAILURE: Integrated in HBase-Trunk_matrix #1211 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1211/])
HBASE-16160 Support RPC encryption with direct ByteBuffers (Colin Ma via 
(garyh: rev 3b5fbf8d73b0ee50a0b04b044f93c1af39e5d79d)
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/BufferChain.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/TestDelegationTokenWithEncryption.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/SecureTestCluster.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/TestGenerateDelegationToken.java


> Get the UnsupportedOperationException when using delegation token with 
> encryption
> -
>
> Key: HBASE-16160
> URL: https://issues.apache.org/jira/browse/HBASE-16160
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Colin Ma
>Assignee: Colin Ma
>Priority: Blocker
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-16160.001.patch, HBASE-16160.002.patch, 
> HBASE-16160.003.patch
>
>
> When using a delegation token with encryption, a Put operation gets the 
> following exception:
> [RpcServer.FifoWFPBQ.priority.handler=5,queue=1,port=48345] 
> ipc.CallRunner(161): 
> RpcServer.FifoWFPBQ.priority.handler=5,queue=1,port=48345: caught: 
> java.lang.UnsupportedOperationException
> at java.nio.ByteBuffer.array(ByteBuffer.java:959)
> at 
> org.apache.hadoop.hbase.ipc.BufferChain.getBytes(BufferChain.java:66)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Call.wrapWithSasl(RpcServer.java:547)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Call.setResponse(RpcServer.java:467)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:140)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:189)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:169)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16044) Fix 'hbase shell' output parsing in graceful_stop.sh

2016-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371874#comment-15371874
 ] 

Hudson commented on HBASE-16044:


FAILURE: Integrated in HBase-Trunk_matrix #1211 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1211/])
HBASE-16044 Fix 'hbase shell' output parsing in graceful_stop.sh (appy: rev 
a396ae773a0f7dbcd6f54fc8133b7a9dc30d8431)
* bin/graceful_stop.sh
* hbase-shell/src/main/ruby/shell/commands/balancer_enabled.rb
* bin/rolling-restart.sh
* hbase-shell/src/main/ruby/shell/commands/balance_switch.rb


> Fix 'hbase shell' output parsing in graceful_stop.sh
> 
>
> Key: HBASE-16044
> URL: https://issues.apache.org/jira/browse/HBASE-16044
> Project: HBase
>  Issue Type: Bug
>  Components: scripts, shell
>Affects Versions: 2.0.0
>Reporter: Samir Ahmic
>Assignee: Appy
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-16044.master.001.patch, 
> HBASE-16044.master.002.patch
>
>
> In some of our bash scripts we pipe commands into hbase shell and then parse 
> the response to define variables. Since the 'hbase shell' output format has 
> changed, we are picking up wrong values from the output. Here is an example 
> from graceful_stop.sh:
> {code}
> HBASE_BALANCER_STATE=$(echo 'balance_switch false' | "$bin"/hbase --config 
> "${HBASE_CONF_DIR}" shell | tail -3 | head -1)
> {code}
> this will return "balance_switch true" instead of previous balancer  state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-16081:
---
Attachment: HBASE-16081-v5.patch

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081-v5.patch, HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16210) Add Timestamp class to the hbase-common and Timestamp type to HTable.

2016-07-11 Thread Sai Teja Ranuva (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sai Teja Ranuva updated HBASE-16210:

Description: 
This is a sub-issue of 
[HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is a 
small step towards completely adding Hybrid Logical Clocks(HLC) to HBase. The 
main idea of HLC is described in 
[HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with the 
motivation of adding it to HBase. 

What is this patch/issue about ?
This issue attempts to add a timestamp class to hbase-common and timestamp type 
to HTable. 
This is a part of the attempt to get HLC into HBase. This patch does not 
interfere with the current working of HBase.

Why Timestamp Class ?
Timestamp class can be as an abstraction to represent time in Hbase in 64 bits. 
It is just used for manipulating with the 64 bits of the timestamp and is not 
concerned about the actual time.
There are three types of timestamps. System time, Custom and HLC. Each one of 
it has methods to manipulate the 64 bits of timestamp. 

HTable changes: Added a timestamp type property to HTable. This will help HBase 
exist in conjunction with old type of timestamp and also the HLC which will be 
introduced. The default is set to custom timestamp(current way of usage of 
timestamp). default unset timestamp is also custom timestamp as it should be 
so. The default timestamp will be changed to HLC when HLC feature is introduced 
completely in HBase.

Suggestions are welcome. 


  was:
This is a sub-issue of 
[HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is a 
small step towards completely adding Hybrid Logical Clocks(HLC) to HBase. The 
main idea of HLC is described in 
[HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with the 
motivation of adding it to HBase. 

What is this path/issue about ?
This issue attempts to add a timestamp class to hbase-common and timestamp type 
to HTable. 
This is a part of the attempt to get HLC into HBase. This patch does not 
interfere with the current working of HBase.

Why Timestamp Class ?
Timestamp class can be as an abstraction to represent time in Hbase in 64 bits. 
It is just used for manipulating with the 64 bits of the timestamp and is not 
concerned about the actual time.
There are three types of timestamps. System time, Custom and HLC. Each one of 
it has methods to manipulate the 64 bits of timestamp. 

HTable changes: Added a timestamp type property to HTable. This will help HBase 
exist in conjunction with old type of timestamp and also the HLC which will be 
introduced. The default is set to custom timestamp(current way of usage of 
timestamp). default unset timestamp is also custom timestamp as it should be 
so. The default timestamp will be changed to HLC when HLC feature is introduced 
completely in HBase.

Suggestions are welcome. 



> Add Timestamp class to the hbase-common and Timestamp type to HTable.
> -
>
> Key: HBASE-16210
> URL: https://issues.apache.org/jira/browse/HBASE-16210
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sai Teja Ranuva
>Assignee: Sai Teja Ranuva
>Priority: Minor
>  Labels: patch, testing
> Attachments: HBASE-16210.master.1.patch
>
>
> This is a sub-issue of 
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is 
> a small step towards completely adding Hybrid Logical Clocks(HLC) to HBase. 
> The main idea of HLC is described in 
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with 
> the motivation of adding it to HBase. 
> What is this patch/issue about ?
> This issue attempts to add a timestamp class to hbase-common and timestamp 
> type to HTable. 
> This is a part of the attempt to get HLC into HBase. This patch does not 
> interfere with the current working of HBase.
> Why Timestamp Class ?
> Timestamp class can be as an abstraction to represent time in Hbase in 64 
> bits. 
> It is just used for manipulating with the 64 bits of the timestamp and is not 
> concerned about the actual time.
> There are three types of timestamps. System time, Custom and HLC. Each one of 
> it has methods to manipulate the 64 bits of timestamp. 
> HTable changes: Added a timestamp type property to HTable. This will help 
> HBase exist in conjunction with old type of timestamp and also the HLC which 
> will be introduced. The default is set to custom timestamp(current way of 
> usage of timestamp). default unset timestamp is also custom timestamp as it 
> should be so. The default timestamp will be changed to HLC when HLC feature 
> is introduced completely in HBase.
> Suggestions are welcome. 
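
Purely to make the 64-bit framing above concrete, here is an illustrative sketch (it 
is not the attached patch, and the 44-bit physical / 20-bit logical split for HLC is 
an assumption chosen just for the example) of a timestamp abstraction with SYSTEM, 
CUSTOM and HLC flavors:
{code}
public final class TimestampSketch {
  enum TimestampType { SYSTEM, CUSTOM, HLC }

  // Assumed HLC packing for illustration: high bits physical ms, low 20 bits logical counter.
  private static final int LOGICAL_BITS = 20;
  private static final long LOGICAL_MASK = (1L << LOGICAL_BITS) - 1;

  static long packHlc(long physicalMillis, long logicalCounter) {
    return (physicalMillis << LOGICAL_BITS) | (logicalCounter & LOGICAL_MASK);
  }

  static long physical(long hlcTimestamp) {
    return hlcTimestamp >>> LOGICAL_BITS;
  }

  static long logical(long hlcTimestamp) {
    return hlcTimestamp & LOGICAL_MASK;
  }

  public static void main(String[] args) {
    long ts = packHlc(System.currentTimeMillis(), 7);
    System.out.println(TimestampType.HLC + " physical=" + physical(ts) + " logical=" + logical(ts));
    // A SYSTEM or CUSTOM timestamp would simply carry the raw 64-bit value unchanged.
  }
}
{code}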



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-16081:
---
Attachment: HBASE-16081-v4.patch

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph updated HBASE-16081:
---
Status: Open  (was: Patch Available)

> Replication remove_peer gets stuck and blocks WAL rolling
> -
>
> Key: HBASE-16081
> URL: https://issues.apache.org/jira/browse/HBASE-16081
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication
>Affects Versions: 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Joseph
>Priority: Blocker
> Attachments: HBASE-16081-v2.patch, HBASE-16081-v3.patch, 
> HBASE-16081-v4.patch, HBASE-16081.patch
>
>
> We use a blocking take from CompletionService in 
> HBaseInterClusterReplicationEndpoint. When we remove a peer, we try to shut 
> down all threads gracefully. But, under certain race condition, the 
> underlying executor gets shutdown and the CompletionService#take will block 
> forever, which means the remove_peer call will never gracefully finish.
> Since ReplicationSourceManager#removePeer and 
> ReplicationSourceManager#recordLog lock on the same object, we are not able 
> to roll WALs in such a situation and will end up with gigantic WALs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15643) Need metrics of cache hit ratio, etc for one table

2016-07-11 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371858#comment-15371858
 ] 

Elliott Clark commented on HBASE-15643:
---

-1 No map lookups in the hot code paths.

> Need metrics of cache hit ratio, etc for one table
> --
>
> Key: HBASE-15643
> URL: https://issues.apache.org/jira/browse/HBASE-15643
> Project: HBase
>  Issue Type: Improvement
>Reporter: Heng Chen
>Assignee: Alicia Ying Shu
> Attachments: HBASE-15643.patch
>
>
> There are many tables on our cluster,  only some of them need to be read 
> online.  
> We could improve the performance of read by cache,  but we need some metrics 
> for it at table level. There are a few we can collect: BlockCacheCount, 
> BlockCacheSize, BlockCacheHitCount, BlockCacheMissCount, BlockCacheHitPercent
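
Regarding the -1 above about map lookups: a small illustrative sketch (not HBase code; 
the names are invented) of the difference between doing a per-table map lookup on 
every block access and resolving the counter once so that the hot path is just an 
increment:
{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class HotPathCounterSketch {
  // Discouraged on a hot path: a map lookup (hashing plus volatile reads) per block access.
  static final ConcurrentHashMap<String, LongAdder> HITS_BY_TABLE = new ConcurrentHashMap<>();

  static void recordHitViaMap(String table) {
    HITS_BY_TABLE.computeIfAbsent(table, t -> new LongAdder()).increment();
  }

  // Preferred shape: resolve the counter once (e.g. when a store or region is opened)
  // and keep a plain reference, so the per-access cost is a single increment.
  static final class PerTableMetrics {
    final LongAdder blockCacheHits = new LongAdder();
  }

  public static void main(String[] args) {
    PerTableMetrics metrics = new PerTableMetrics();   // resolved once, off the hot path
    for (int i = 0; i < 1_000_000; i++) {
      metrics.blockCacheHits.increment();              // hot path: no map lookup
    }
    System.out.println("hits=" + metrics.blockCacheHits.sum());
  }
}
{code}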



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16081) Replication remove_peer gets stuck and blocks WAL rolling

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371857#comment-15371857
 ] 

Hadoop QA commented on HBASE-16081:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
27s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} master passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 20s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 107m 9s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 151m 3s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.snapshot.TestFlushSnapshotFromClient |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817236/HBASE-16081-v2.patch |
| JIRA Issue | HBASE-16081 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / a396ae7 |
| Default Java | 1.7.0_80 |
| Multi-JDK ve

[jira] [Updated] (HBASE-16210) Add Timestamp class to the hbase-common and Timestamp type to HTable.

2016-07-11 Thread Sai Teja Ranuva (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sai Teja Ranuva updated HBASE-16210:

Summary: Add Timestamp class to the hbase-common and Timestamp type to 
HTable.  (was: Add Timestamp class to the hbase-common and Timestamp type to 
HTable. Non Intrusive.)

> Add Timestamp class to the hbase-common and Timestamp type to HTable.
> -
>
> Key: HBASE-16210
> URL: https://issues.apache.org/jira/browse/HBASE-16210
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sai Teja Ranuva
>Assignee: Sai Teja Ranuva
>Priority: Minor
>  Labels: patch, testing
> Attachments: HBASE-16210.master.1.patch
>
>
> This is a sub-issue of 
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is 
> a small step towards completely adding Hybrid Logical Clocks(HLC) to HBase. 
> The main idea of HLC is described in 
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with 
> the motivation of adding it to HBase. 
> What is this path/issue about ?
> This issue attempts to add a timestamp class to hbase-common and timestamp 
> type to HTable. 
> This is a part of the attempt to get HLC into HBase. This patch does not 
> interfere with the current working of HBase.
> Why Timestamp Class ?
> Timestamp class can be as an abstraction to represent time in Hbase in 64 
> bits. 
> It is just used for manipulating with the 64 bits of the timestamp and is not 
> concerned about the actual time.
> There are three types of timestamps. System time, Custom and HLC. Each one of 
> it has methods to manipulate the 64 bits of timestamp. 
> HTable changes: Added a timestamp type property to HTable. This will help 
> HBase exist in conjunction with old type of timestamp and also the HLC which 
> will be introduced. The default is set to custom timestamp(current way of 
> usage of timestamp). default unset timestamp is also custom timestamp as it 
> should be so. The default timestamp will be changed to HLC when HLC feature 
> is introduced completely in HBase.
> Suggestions are welcome. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16210) Add Timestamp class to the hbase-common and Timestamp type to HTable.

2016-07-11 Thread Sai Teja Ranuva (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371828#comment-15371828
 ] 

Sai Teja Ranuva edited comment on HBASE-16210 at 7/11/16 11:04 PM:
---

patch attached. 


was (Author: saitejar):
First Patch for the task described in the issue.

> Add Timestamp class to the hbase-common and Timestamp type to HTable.
> -
>
> Key: HBASE-16210
> URL: https://issues.apache.org/jira/browse/HBASE-16210
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sai Teja Ranuva
>Assignee: Sai Teja Ranuva
>Priority: Minor
>  Labels: patch, testing
> Attachments: HBASE-16210.master.1.patch
>
>
> This is a sub-issue of 
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is 
> a small step towards completely adding Hybrid Logical Clocks(HLC) to HBase. 
> The main idea of HLC is described in 
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070] along with 
> the motivation of adding it to HBase. 
> What is this path/issue about ?
> This issue attempts to add a timestamp class to hbase-common and timestamp 
> type to HTable. 
> This is a part of the attempt to get HLC into HBase. This patch does not 
> interfere with the current working of HBase.
> Why Timestamp Class ?
> Timestamp class can be as an abstraction to represent time in Hbase in 64 
> bits. 
> It is just used for manipulating with the 64 bits of the timestamp and is not 
> concerned about the actual time.
> There are three types of timestamps. System time, Custom and HLC. Each one of 
> it has methods to manipulate the 64 bits of timestamp. 
> HTable changes: Added a timestamp type property to HTable. This will help 
> HBase exist in conjunction with old type of timestamp and also the HLC which 
> will be introduced. The default is set to custom timestamp(current way of 
> usage of timestamp). default unset timestamp is also custom timestamp as it 
> should be so. The default timestamp will be changed to HLC when HLC feature 
> is introduced completely in HBase.
> Suggestions are welcome. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15643) Need metrics of cache hit ratio, etc for one table

2016-07-11 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu updated HBASE-15643:

Status: Patch Available  (was: Open)

> Need metrics of cache hit ratio, etc for one table
> --
>
> Key: HBASE-15643
> URL: https://issues.apache.org/jira/browse/HBASE-15643
> Project: HBase
>  Issue Type: Improvement
>Reporter: Heng Chen
>Assignee: Alicia Ying Shu
> Attachments: HBASE-15643.patch
>
>
> There are many tables on our cluster,  only some of them need to be read 
> online.  
> We could improve the performance of read by cache,  but we need some metrics 
> for it at table level. There are a few we can collect: BlockCacheCount, 
> BlockCacheSize, BlockCacheHitCount, BlockCacheMissCount, BlockCacheHitPercent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15643) Need metrics of cache hit ratio, etc for one table

2016-07-11 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371855#comment-15371855
 ] 

Alicia Ying Shu commented on HBASE-15643:
-

There were some code refactors in the master branch after my last patch. Rebased 
on master and uploaded the patch.

> Need metrics of cache hit ratio, etc for one table
> --
>
> Key: HBASE-15643
> URL: https://issues.apache.org/jira/browse/HBASE-15643
> Project: HBase
>  Issue Type: Improvement
>Reporter: Heng Chen
>Assignee: Alicia Ying Shu
> Attachments: HBASE-15643.patch
>
>
> There are many tables on our cluster,  only some of them need to be read 
> online.  
> We could improve the performance of read by cache,  but we need some metrics 
> for it at table level. There are a few we can collect: BlockCacheCount, 
> BlockCacheSize, BlockCacheHitCount, BlockCacheMissCount, BlockCacheHitPercent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15643) Need metrics of cache hit ratio, etc for one table

2016-07-11 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu updated HBASE-15643:

Attachment: HBASE-15643.patch

> Need metrics of cache hit ratio, etc for one table
> --
>
> Key: HBASE-15643
> URL: https://issues.apache.org/jira/browse/HBASE-15643
> Project: HBase
>  Issue Type: Improvement
>Reporter: Heng Chen
>Assignee: Alicia Ying Shu
> Attachments: HBASE-15643.patch
>
>
> There are many tables on our cluster,  only some of them need to be read 
> online.  
> We could improve the performance of read by cache,  but we need some metrics 
> for it at table level. There are a few we can collect: BlockCacheCount, 
> BlockCacheSize, BlockCacheHitCount, BlockCacheMissCount, BlockCacheHitPercent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15643) Need metrics of cache hit ratio, etc for one table

2016-07-11 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu updated HBASE-15643:

Status: Open  (was: Patch Available)

> Need metrics of cache hit ratio, etc for one table
> --
>
> Key: HBASE-15643
> URL: https://issues.apache.org/jira/browse/HBASE-15643
> Project: HBase
>  Issue Type: Improvement
>Reporter: Heng Chen
>Assignee: Alicia Ying Shu
>
> There are many tables on our cluster,  only some of them need to be read 
> online.  
> We could improve the performance of read by cache,  but we need some metrics 
> for it at table level. There are a few we can collect: BlockCacheCount, 
> BlockCacheSize, BlockCacheHitCount, BlockCacheMissCount, BlockCacheHitPercent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15643) Need metrics of cache hit ratio, etc for one table

2016-07-11 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu updated HBASE-15643:

Attachment: (was: HBASE-15643.patch)

> Need metrics of cache hit ratio, etc for one table
> --
>
> Key: HBASE-15643
> URL: https://issues.apache.org/jira/browse/HBASE-15643
> Project: HBase
>  Issue Type: Improvement
>Reporter: Heng Chen
>Assignee: Alicia Ying Shu
>
> There are many tables on our cluster,  only some of them need to be read 
> online.  
> We could improve the performance of read by cache,  but we need some metrics 
> for it at table level. There are a few we can collect: BlockCacheCount, 
> BlockCacheSize, BlockCacheHitCount, BlockCacheMissCount, BlockCacheHitPercent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15643) Need metrics of cache hit ratio, etc for one table

2016-07-11 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371853#comment-15371853
 ] 

Alicia Ying Shu commented on HBASE-15643:
-

[~chenheng] Here is the link to review board. Thanks! 
https://reviews.apache.org/r/49932/

> Need metrics of cache hit ratio, etc for one table
> --
>
> Key: HBASE-15643
> URL: https://issues.apache.org/jira/browse/HBASE-15643
> Project: HBase
>  Issue Type: Improvement
>Reporter: Heng Chen
>Assignee: Alicia Ying Shu
>
> There are many tables on our cluster,  only some of them need to be read 
> online.  
> We could improve the performance of read by cache,  but we need some metrics 
> for it at table level. There are a few we can collect: BlockCacheCount, 
> BlockCacheSize, BlockCacheHitCount, BlockCacheMissCount, BlockCacheHitPercent



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16092) Procedure v2 - complete child procedure support

2016-07-11 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-16092:

Attachment: HBASE-16092-v1.patch

> Procedure v2 - complete child procedure support
> ---
>
> Key: HBASE-16092
> URL: https://issues.apache.org/jira/browse/HBASE-16092
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16092-v0.patch, HBASE-16092-v1.patch
>
>
> There was a missing part on the child procedure tracking.
> child procedure were never deleted from the wal on parent completion,
> leading to failures on startup. (no code uses child procedures yet)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16092) Procedure v2 - complete child procedure support

2016-07-11 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-16092:

Attachment: (was: HBASE-16092-v1.patch)

> Procedure v2 - complete child procedure support
> ---
>
> Key: HBASE-16092
> URL: https://issues.apache.org/jira/browse/HBASE-16092
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16092-v0.patch, HBASE-16092-v1.patch
>
>
> There was a missing part on the child procedure tracking.
> child procedure were never deleted from the wal on parent completion,
> leading to failures on startup. (no code uses child procedures yet)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16092) Procedure v2 - complete child procedure support

2016-07-11 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-16092:

Status: Patch Available  (was: Reopened)

> Procedure v2 - complete child procedure support
> ---
>
> Key: HBASE-16092
> URL: https://issues.apache.org/jira/browse/HBASE-16092
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16092-v0.patch, HBASE-16092-v1.patch
>
>
> There was a missing part on the child procedure tracking.
> child procedure were never deleted from the wal on parent completion,
> leading to failures on startup. (no code uses child procedures yet)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16211) JMXCacheBuster restarting the metrics system might cause tests to hang

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371838#comment-15371838
 ] 

Hadoop QA commented on HBASE-16211:
---

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 4m 6s | master passed |
| +1 | compile | 0m 23s | master passed with JDK v1.8.0 |
| +1 | compile | 0m 12s | master passed with JDK v1.7.0_80 |
| +1 | checkstyle | 0m 11s | master passed |
| +1 | mvneclipse | 0m 10s | master passed |
| +1 | findbugs | 0m 29s | master passed |
| +1 | javadoc | 0m 22s | master passed with JDK v1.8.0 |
| +1 | javadoc | 0m 11s | master passed with JDK v1.7.0_80 |
| +1 | mvninstall | 0m 13s | the patch passed |
| +1 | compile | 0m 23s | the patch passed with JDK v1.8.0 |
| +1 | javac | 0m 23s | the patch passed |
| +1 | compile | 0m 11s | the patch passed with JDK v1.7.0_80 |
| +1 | javac | 0m 11s | the patch passed |
| +1 | checkstyle | 0m 11s | the patch passed |
| +1 | mvneclipse | 0m 10s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | hadoopcheck | 28m 22s | Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. |
| +1 | findbugs | 0m 39s | the patch passed |
| +1 | javadoc | 0m 21s | the patch passed with JDK v1.8.0 |
| +1 | javadoc | 0m 12s | the patch passed with JDK v1.7.0_80 |
| +1 | unit | 0m 21s | hbase-hadoop2-compat in the patch passed. |
| +1 | asflicense | 0m 9s | Patch does not generate ASF License warnings. |
| | | 37m 43s | |

|| Subsystem || Report/Notes ||
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12817276/hbase-16211_v1.patch |
| JIRA Issue | HBASE-16211 |
| Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux asf911.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / a396ae7 |
| Default Java | 1.7.0_80 |
| Multi-JDK versions | /home/jenkins/tools/java/jdk1.8.0:1.8.0 /home/jenkins/jenkins-slave/tools/huds

[jira] [Updated] (HBASE-16092) Procedure v2 - complete child procedure support

2016-07-11 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-16092:

Attachment: HBASE-16092-v1.patch

> Procedure v2 - complete child procedure support
> ---
>
> Key: HBASE-16092
> URL: https://issues.apache.org/jira/browse/HBASE-16092
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16092-v0.patch, HBASE-16092-v1.patch
>
>
> There was a missing part in the child procedure tracking:
> child procedures were never deleted from the WAL on parent completion,
> leading to failures on startup. (No code uses child procedures yet.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16210) Add Timestamp class to the hbase-common and Timestamp type to HTable. Non Intrusive.

2016-07-11 Thread Sai Teja Ranuva (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sai Teja Ranuva updated HBASE-16210:

Status: In Progress  (was: Patch Available)

> Add Timestamp class to the hbase-common and Timestamp type to HTable. Non 
> Intrusive.
> 
>
> Key: HBASE-16210
> URL: https://issues.apache.org/jira/browse/HBASE-16210
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sai Teja Ranuva
>Assignee: Sai Teja Ranuva
>Priority: Minor
>  Labels: patch, testing
> Attachments: HBASE-16210.master.1.patch
>
>
> This is a sub-issue of
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070]. This JIRA is
> a small step towards fully adding Hybrid Logical Clocks (HLC) to HBase.
> The main idea of HLC is described in
> [HBase-14070|https://issues.apache.org/jira/browse/HBASE-14070], along with
> the motivation for adding it to HBase.
> What is this patch/issue about?
> This issue adds a Timestamp class to hbase-common and a timestamp type to
> HTable.
> It is one part of the effort to get HLC into HBase, and the patch does not
> interfere with the current behavior of HBase.
> Why a Timestamp class?
> The Timestamp class is an abstraction for representing time in HBase in 64
> bits.
> It only manipulates the 64 bits of the timestamp and is not concerned with
> the actual time.
> There are three types of timestamps: System time, Custom, and HLC. Each of
> them has methods to manipulate the 64 bits of the timestamp.
> HTable changes: a timestamp-type property was added to HTable. This lets
> HBase keep working with the existing kind of timestamp while HLC is
> introduced. The default is the custom timestamp (the current way timestamps
> are used), and an unset timestamp also defaults to custom, as it should. The
> default will be switched to HLC once the HLC feature is fully introduced in
> HBase.
> Suggestions are welcome.
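
A rough sketch of what such a 64-bit timestamp abstraction could look like;
the class name, bit split, and packing scheme here are illustrative
assumptions, not the contents of the attached patch:

{code:java}
// Illustrative only; not the actual HBASE-16210 patch.
public final class ToyTimestamp {

  public enum TimestampType { SYSTEM, CUSTOM, HLC }

  private final TimestampType type;
  private final long bits; // the raw 64-bit representation

  private ToyTimestamp(TimestampType type, long bits) {
    this.type = type;
    this.bits = bits;
  }

  // Custom timestamps (today's behavior): the 64 bits are opaque and carry
  // whatever value the client supplied.
  public static ToyTimestamp custom(long value) {
    return new ToyTimestamp(TimestampType.CUSTOM, value);
  }

  // One possible HLC packing: high 44 bits of physical time in milliseconds,
  // low 20 bits of logical counter. The split is an assumption.
  public static ToyTimestamp hlc(long physicalMillis, int logical) {
    long packed = (physicalMillis << 20) | (logical & 0xFFFFF);
    return new ToyTimestamp(TimestampType.HLC, packed);
  }

  public TimestampType getType() { return type; }

  /** The 64-bit value that would be stored in the cell. */
  public long toLong() { return bits; }

  public long physicalMillis() {
    return type == TimestampType.HLC ? bits >>> 20 : bits;
  }

  public int logical() {
    return type == TimestampType.HLC ? (int) (bits & 0xFFFFF) : 0;
  }
}
{code}

With an abstraction like this, the HTable-level property only needs to record
which TimestampType a table uses, defaulting to CUSTOM until HLC is fully
wired in.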



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

