[jira] [Commented] (HBASE-17300) Concurrently calling checkAndPut with expected value as null returns true unexpectedly

2016-12-12 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15744453#comment-15744453
 ] 

Samarth Jain commented on HBASE-17300:
--

I changed code on the Phoenix side to pass null for the expected value, which 
is when I found this bug.

The test fails with 0.98.17 too. I went back as far as 0.98.15 and the test 
fails there as well, so it looks like this issue has been around for a while.
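For readers skimming the thread, the contract at issue can be sketched in plain Java. The class below is a hypothetical, single-process stand-in for the HBase API (not HBase code): a checkAndPut whose expected value is null should succeed only while the cell is absent, so once any thread has written the cell, the null-check must return false — which is the property the attached test asserts and the concurrent case violates.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical simulation of checkAndPut semantics, not HBase code:
// expected == null means "the cell must be absent".
public class CheckAndPutSemantics {
    private final ConcurrentHashMap<String, byte[]> cells = new ConcurrentHashMap<>();

    public synchronized boolean checkAndPut(String row, byte[] expected, byte[] value) {
        byte[] current = cells.get(row);
        boolean matches = (expected == null)
                ? current == null                       // null check: cell must not exist
                : java.util.Arrays.equals(expected, current);
        if (matches) {
            cells.put(row, value);                      // write only when the check passes
        }
        return matches;
    }

    public static void main(String[] args) {
        CheckAndPutSemantics table = new CheckAndPutSemantics();
        byte[] oldV = "OLD_VALUE".getBytes();
        byte[] newV = "NEW_VALUE".getBytes();
        table.checkAndPut("ROW", null, oldV);                     // cell absent: succeeds
        boolean first  = table.checkAndPut("ROW", oldV, newV);    // exactly one winner
        boolean second = table.checkAndPut("ROW", oldV, newV);    // value already changed
        boolean nullCk = table.checkAndPut("ROW", null, newV);    // cell exists: must fail
        System.out.println(first + " " + second + " " + nullCk);  // prints "true false false"
    }
}
```

The bug report is that, under concurrency, the final null-check can return true even though the cell exists.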



> Concurrently calling checkAndPut with expected value as null returns true 
> unexpectedly
> --
>
> Key: HBASE-17300
> URL: https://issues.apache.org/jira/browse/HBASE-17300
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.23, 1.2.4
>Reporter: Samarth Jain
>
> Attached is the test case. I have added some comments so hopefully the test 
> makes sense. It actually is causing test failures on the Phoenix branches.
> The test fails consistently using HBase-0.98.23. It exhibits flappy behavior 
> with the 1.2 branch (failed twice in 5 tries). 
> {code}
> @Test
> public void testNullCheckAndPut() throws Exception {
>     try (HBaseAdmin admin = TEST_UTIL.getHBaseAdmin()) {
>         Callable<Boolean> c1 = new CheckAndPutCallable();
>         Callable<Boolean> c2 = new CheckAndPutCallable();
>         ExecutorService e = Executors.newFixedThreadPool(5);
>         Future<Boolean> f1 = e.submit(c1);
>         Future<Boolean> f2 = e.submit(c2);
>         assertTrue(f1.get() || f2.get());
>         assertFalse(f1.get() && f2.get());
>     }
> }
> 
> private static final class CheckAndPutCallable implements Callable<Boolean> {
>     @Override
>     public Boolean call() throws Exception {
>         byte[] rowToLock = "ROW".getBytes();
>         byte[] colFamily = "COLUMN_FAMILY".getBytes();
>         byte[] column = "COLUMN".getBytes();
>         byte[] newValue = "NEW_VALUE".getBytes();
>         byte[] oldValue = "OLD_VALUE".getBytes();
>         byte[] tableName = "table".getBytes();
>         boolean acquired = false;
>         try (HBaseAdmin admin = TEST_UTIL.getHBaseAdmin()) {
>             HTableDescriptor tableDesc = new HTableDescriptor(TableName.valueOf(tableName));
>             HColumnDescriptor columnDesc = new HColumnDescriptor(colFamily);
>             columnDesc.setTimeToLive(600);
>             tableDesc.addFamily(columnDesc);
>             try {
>                 admin.createTable(tableDesc);
>             } catch (TableExistsException e) {
>                 // ignore
>             }
>             try (HTableInterface table = admin.getConnection().getTable(tableName)) {
>                 Put put = new Put(rowToLock);
>                 put.add(colFamily, column, oldValue); // add a row with column set to oldValue
>                 table.put(put);
>                 put = new Put(rowToLock);
>                 put.add(colFamily, column, newValue);
>                 // only one of the threads should get a return value of true
>                 // for the expected value of oldValue
>                 acquired = table.checkAndPut(rowToLock, colFamily, column, oldValue, put);
>                 if (!acquired) {
>                     // if a thread didn't get true before, it shouldn't get true
>                     // this time either, because the column DOES exist
>                     acquired = table.checkAndPut(rowToLock, colFamily, column, null, put);
>                 }
>             }
>         }
>         return acquired;
>     }
> }
> {code}
> cc [~apurtell], [~jamestaylor], [~lhofhansl]. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17300) Concurrently calling checkAndPut with expected value as null returns true unexpectedly

2016-12-12 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15744438#comment-15744438
 ] 

Samarth Jain commented on HBASE-17300:
--

Thanks! Updated the patch to use TEST_UTIL.getHBaseAdmin(). Hopefully this will 
make it easy to adapt as an HBase IT test.

> Concurrently calling checkAndPut with expected value as null returns true 
> unexpectedly
> --
>
> Key: HBASE-17300
> URL: https://issues.apache.org/jira/browse/HBASE-17300
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.23, 1.2.4
>Reporter: Samarth Jain
>
> Attached is the test case. I have added some comments so hopefully the test 
> makes sense. It actually is causing test failures on the Phoenix branches.
> The test fails consistently using HBase-0.98.23. It exhibits flappy behavior 
> with the 1.2 branch (failed twice in 5 tries). 
> {code}
> @Test
> public void testNullCheckAndPut() throws Exception {
>     try (HBaseAdmin admin = TEST_UTIL.getHBaseAdmin()) {
>         Callable<Boolean> c1 = new CheckAndPutCallable();
>         Callable<Boolean> c2 = new CheckAndPutCallable();
>         ExecutorService e = Executors.newFixedThreadPool(5);
>         Future<Boolean> f1 = e.submit(c1);
>         Future<Boolean> f2 = e.submit(c2);
>         assertTrue(f1.get() || f2.get());
>         assertFalse(f1.get() && f2.get());
>     }
> }
> 
> private static final class CheckAndPutCallable implements Callable<Boolean> {
>     @Override
>     public Boolean call() throws Exception {
>         byte[] rowToLock = "ROW".getBytes();
>         byte[] colFamily = "COLUMN_FAMILY".getBytes();
>         byte[] column = "COLUMN".getBytes();
>         byte[] newValue = "NEW_VALUE".getBytes();
>         byte[] oldValue = "OLD_VALUE".getBytes();
>         byte[] tableName = "table".getBytes();
>         boolean acquired = false;
>         try (HBaseAdmin admin = TEST_UTIL.getHBaseAdmin()) {
>             HTableDescriptor tableDesc = new HTableDescriptor(TableName.valueOf(tableName));
>             HColumnDescriptor columnDesc = new HColumnDescriptor(colFamily);
>             columnDesc.setTimeToLive(600);
>             tableDesc.addFamily(columnDesc);
>             try {
>                 admin.createTable(tableDesc);
>             } catch (TableExistsException e) {
>                 // ignore
>             }
>             try (HTableInterface table = admin.getConnection().getTable(tableName)) {
>                 Put put = new Put(rowToLock);
>                 put.add(colFamily, column, oldValue); // add a row with column set to oldValue
>                 table.put(put);
>                 put = new Put(rowToLock);
>                 put.add(colFamily, column, newValue);
>                 // only one of the threads should get a return value of true
>                 // for the expected value of oldValue
>                 acquired = table.checkAndPut(rowToLock, colFamily, column, oldValue, put);
>                 if (!acquired) {
>                     // if a thread didn't get true before, it shouldn't get true
>                     // this time either, because the column DOES exist
>                     acquired = table.checkAndPut(rowToLock, colFamily, column, null, put);
>                 }
>             }
>         }
>         return acquired;
>     }
> }
> {code}
> cc [~apurtell], [~jamestaylor], [~lhofhansl]. 





[jira] [Updated] (HBASE-17300) Concurrently calling checkAndPut with expected value as null returns true unexpectedly

2016-12-12 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated HBASE-17300:
-
Description: 
Attached is the test case. I have added some comments so hopefully the test 
makes sense. It actually is causing test failures on the Phoenix branches.

The test fails consistently using HBase-0.98.23. It exhibits flappy behavior 
with the 1.2 branch (failed twice in 5 tries). 

{code}
@Test
public void testNullCheckAndPut() throws Exception {
    try (HBaseAdmin admin = TEST_UTIL.getHBaseAdmin()) {
        Callable<Boolean> c1 = new CheckAndPutCallable();
        Callable<Boolean> c2 = new CheckAndPutCallable();
        ExecutorService e = Executors.newFixedThreadPool(5);
        Future<Boolean> f1 = e.submit(c1);
        Future<Boolean> f2 = e.submit(c2);
        assertTrue(f1.get() || f2.get());
        assertFalse(f1.get() && f2.get());
    }
}

private static final class CheckAndPutCallable implements Callable<Boolean> {
    @Override
    public Boolean call() throws Exception {
        byte[] rowToLock = "ROW".getBytes();
        byte[] colFamily = "COLUMN_FAMILY".getBytes();
        byte[] column = "COLUMN".getBytes();
        byte[] newValue = "NEW_VALUE".getBytes();
        byte[] oldValue = "OLD_VALUE".getBytes();
        byte[] tableName = "table".getBytes();
        boolean acquired = false;
        try (HBaseAdmin admin = TEST_UTIL.getHBaseAdmin()) {
            HTableDescriptor tableDesc = new HTableDescriptor(TableName.valueOf(tableName));
            HColumnDescriptor columnDesc = new HColumnDescriptor(colFamily);
            columnDesc.setTimeToLive(600);
            tableDesc.addFamily(columnDesc);
            try {
                admin.createTable(tableDesc);
            } catch (TableExistsException e) {
                // ignore
            }
            try (HTableInterface table = admin.getConnection().getTable(tableName)) {
                Put put = new Put(rowToLock);
                put.add(colFamily, column, oldValue); // add a row with column set to oldValue
                table.put(put);
                put = new Put(rowToLock);
                put.add(colFamily, column, newValue);
                // only one of the threads should get a return value of true
                // for the expected value of oldValue
                acquired = table.checkAndPut(rowToLock, colFamily, column, oldValue, put);
                if (!acquired) {
                    // if a thread didn't get true before, it shouldn't get true
                    // this time either, because the column DOES exist
                    acquired = table.checkAndPut(rowToLock, colFamily, column, null, put);
                }
            }
        }
        return acquired;
    }
}
{code}


cc [~apurtell], [~jamestaylor], [~lhofhansl]. 


  was:
Attached is the test case. I have added some comments so hopefully the test 
makes sense. It actually is causing test failures on the Phoenix branches.
PS - I am using a bit of Phoenix API to get hold of HBaseAdmin. But it should 
be fairly straightforward to adopt it for HBase IT tests. 

The test fails consistently using HBase-0.98.23. It exhibits flappy behavior 
with the 1.2 branch (failed twice in 5 tries). 

{code}
@Test
public void testNullCheckAndPut() throws Exception {
    try (Connection conn = DriverManager.getConnection(getUrl())) {
        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
            Callable<Boolean> c1 = new CheckAndPutCallable();
            Callable<Boolean> c2 = new CheckAndPutCallable();
            ExecutorService e = Executors.newFixedThreadPool(5);
            Future<Boolean> f1 = e.submit(c1);
            Future<Boolean> f2 = e.submit(c2);
            assertTrue(f1.get() || f2.get());
            assertFalse(f1.get() && f2.get());
        }
    }
}

private static final class CheckAndPutCallable implements Callable<Boolean> {
    @Override
    public Boolean call() throws Exception {
        byte[] rowToLock = "ROW".getBytes();
        byte[] colFamily = "COLUMN_FAMILY".getBytes();
        byte[] column = "COLUMN".getBytes();
        byte[] newValue = "NEW_VALUE".getBytes();
        byte[] oldValue = "OLD_VALUE".getBytes();
        byte[] tableName = "table".getBytes();
        boolean acquired = false;
        try (Connection conn = DriverManager.getConnection(getUrl())) {
            try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {

[jira] [Commented] (HBASE-17289) Avoid adding a replication peer named "lock"

2016-12-12 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15744293#comment-15744293
 ] 

Guanghao Zhang commented on HBASE-17289:


[~mantonov] Is it ok to commit this to branch-1.3? It is a minor fix for replication.

> Avoid adding a replication peer named "lock"
> 
>
> Key: HBASE-17289
> URL: https://issues.apache.org/jira/browse/HBASE-17289
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.3.0, 1.4.0, 1.1.7, 0.98.23, 1.2.4
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Fix For: 1.4.0
>
> Attachments: HBASE-17289-branch-1.1.patch, 
> HBASE-17289-branch-1.2.patch, HBASE-17289-branch-1.3.patch, 
> HBASE-17289-branch-1.patch
>
>
> When the zk-based replication queue is used and useMulti is false, the steps 
> to transfer replication queues are: first add a lock, then copy the nodes, and 
> finally clean up the old queue and the lock. The default lock znode's name is 
> "lock", so we should avoid adding a peer named "lock". 
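A minimal sketch of the kind of guard the fix implies (hypothetical code, not the actual patch): reject any peer id that collides with the reserved lock znode name, so the peer's znode can never be mistaken for the transfer lock.

```java
// Hypothetical peer-id guard; "lock" is the default lock znode name
// mentioned in the issue description.
public class PeerIdValidator {
    private static final String RESERVED_LOCK_ZNODE = "lock";

    public static void checkPeerId(String peerId) {
        if (RESERVED_LOCK_ZNODE.equals(peerId)) {
            throw new IllegalArgumentException(
                "Peer id cannot be '" + RESERVED_LOCK_ZNODE
                + "': it collides with the replication queue lock znode");
        }
    }

    public static void main(String[] args) {
        PeerIdValidator.checkPeerId("peer_1");   // fine
        try {
            PeerIdValidator.checkPeerId("lock"); // rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```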





[jira] [Commented] (HBASE-17288) Add warn log for huge keyvalue and huge row

2016-12-12 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15744276#comment-15744276
 ] 

Guanghao Zhang commented on HBASE-17288:


[~apurtell] Any ideas about the v1 patch?

> Add warn log for huge keyvalue and huge row
> ---
>
> Key: HBASE-17288
> URL: https://issues.apache.org/jira/browse/HBASE-17288
> Project: HBase
>  Issue Type: Improvement
>  Components: scan
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Attachments: HBASE-17288-v1.patch, HBASE-17288.patch
>
>
> Some log examples from our production cluster.
> {code}
> 2016-12-10,17:08:11,478 WARN 
> org.apache.hadoop.hbase.regionserver.StoreScanner: adding a HUGE KV into 
> result list, kv size:1253360, 
> kv:10567114001-1-c/R:r1/1481360887152/Put/vlen=1253245/ts=923099, from 
> table X
> 2016-12-10,17:08:16,724 WARN 
> org.apache.hadoop.hbase.regionserver.StoreScanner: adding a HUGE KV into 
> result list, kv size:1048680, 
> kv:0220459/I:i_0/1481360889551/Put/vlen=1048576/ts=13642, from table XX
> {code}





[jira] [Updated] (HBASE-17296) Provide per peer throttling for replication

2016-12-12 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-17296:
---
Assignee: Guanghao Zhang
  Status: Patch Available  (was: Open)

> Provide per peer throttling for replication
> ---
>
> Key: HBASE-17296
> URL: https://issues.apache.org/jira/browse/HBASE-17296
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-17296.patch
>
>
> HBASE-9501 added a config to provide throttling for replication, but each 
> peer has the same bandwidth upper limit. In our use case, one cluster may 
> have several peers and several slave clusters. Each slave cluster may have a 
> different scale and need a different bandwidth upper limit per peer. So we 
> add a bandwidth setting to the replication peer config and provide a shell 
> command, set_peer_bandwidth, to update the bandwidth as needed. It has been 
> used for a long time on our clusters. Any suggestions are welcome.





[jira] [Updated] (HBASE-17296) Provide per peer throttling for replication

2016-12-12 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-17296:
---
Attachment: HBASE-17296.patch

> Provide per peer throttling for replication
> ---
>
> Key: HBASE-17296
> URL: https://issues.apache.org/jira/browse/HBASE-17296
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Guanghao Zhang
> Attachments: HBASE-17296.patch
>
>
> HBASE-9501 added a config to provide throttling for replication, but each 
> peer has the same bandwidth upper limit. In our use case, one cluster may 
> have several peers and several slave clusters. Each slave cluster may have a 
> different scale and need a different bandwidth upper limit per peer. So we 
> add a bandwidth setting to the replication peer config and provide a shell 
> command, set_peer_bandwidth, to update the bandwidth as needed. It has been 
> used for a long time on our clusters. Any suggestions are welcome.





[jira] [Commented] (HBASE-17233) See if we should replace System.arrayCopy with Arrays.copyOfRange

2016-12-12 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15744202#comment-15744202
 ] 

ramkrishna.s.vasudevan commented on HBASE-17233:


Let me try that out. Thanks for taking a look here.

> See if we should replace System.arrayCopy with Arrays.copyOfRange
> -
>
> Key: HBASE-17233
> URL: https://issues.apache.org/jira/browse/HBASE-17233
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>
> Just saw this interesting comment in PB code. Since we deal with byte[] 
> extensively (when we are onheap) we do a lot of copies too.
> {code}
> * One of the noticeable costs of copying a byte[] into a new array using
> * {@code System.arraycopy} is nullification of a new buffer before the copy.
> * It has been shown the Hotspot VM is capable to intrisicfy
> * {@code Arrays.copyOfRange} operation to avoid this expensive nullification
> * and provide substantial performance gain. Unfortunately this does not hold
> * on Android runtimes and could make the copy slightly slower due to
> * additional code in the {@code Arrays.copyOfRange}.
> {code}
> So since we run on the HotSpot VM, we could see whether the places where we 
> use System.arraycopy can be replaced with Arrays.copyOfRange.
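The two copy styles being compared can be shown with a small, self-contained example (illustrative only): both calls produce identical results, and the difference the PB comment describes is whether the JIT can skip zero-filling the freshly allocated buffer before the copy.

```java
import java.util.Arrays;

// Two equivalent ways to copy a sub-range of a byte[].
public class CopyComparison {
    static byte[] viaArraycopy(byte[] src, int from, int to) {
        byte[] dst = new byte[to - from];        // JVM zero-fills dst first
        System.arraycopy(src, from, dst, 0, to - from);
        return dst;
    }

    static byte[] viaCopyOfRange(byte[] src, int from, int to) {
        return Arrays.copyOfRange(src, from, to); // HotSpot intrinsic may skip the zeroing
    }

    public static void main(String[] args) {
        byte[] src = "abcdef".getBytes();
        System.out.println(Arrays.equals(
            viaArraycopy(src, 1, 4), viaCopyOfRange(src, 1, 4))); // prints "true"
    }
}
```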





[jira] [Commented] (HBASE-17291) Remove ImmutableSegment#getKeyValueScanner

2016-12-12 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15744199#comment-15744199
 ] 

ramkrishna.s.vasudevan commented on HBASE-17291:


I won't commit this till HBASE-17081 is done.

> Remove ImmutableSegment#getKeyValueScanner
> --
>
> Key: HBASE-17291
> URL: https://issues.apache.org/jira/browse/HBASE-17291
> Project: HBase
>  Issue Type: Improvement
>  Components: Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
>
> This is based on a discussion over [~anastas]'s patch. The MemstoreSnapshot 
> uses a KeyValueScanner, which seems redundant considering we already have a 
> SegmentScanner. The idea is that the snapshot scanner should be a simple 
> iterator-type scanner, but it lacks the capability to do reference counting 
> on the segment that is now used in the snapshot. With the snapshot having 
> multiple segments in the latest impl, it is better we hold on to the segment 
> by doing ref counting. 





[jira] [Commented] (HBASE-17278) Cell Scanner Implementation to be used by ResultScanner

2016-12-12 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15744192#comment-15744192
 ] 

Enis Soztutar commented on HBASE-17278:
---

- Can you remove the DLOG statements (or maybe move them to a debug method): 
{code}
+#if 0
{code}
 - Let's keep the Cell class as just the data structure, and move parsing / 
encoding logic to the KeyValueCodec class. We can use the same Codec structure 
here as well, with Encoder and Decoder classes.
 - Normally, CellScanner is an interface in Java with different 
implementations. {{Result}} also implements CellScanner, so it may be useful 
to make this an abstract base class, with KeyValueDecoder as an 
implementation. 
 - Let's use ntohs and friends where possible for dealing with endianness 
(https://linux.die.net/man/3/ntohs) rather than the SwapByteOrder methods. 
 - Can we do this with zero-copy by overtaking the buffer: 
{code}
+void CellScanner::SetData(char *cell_block_data, int data_length) {
+  cell_block_data_.reset(new char[data_length]);
+  std::memcpy(cell_block_data_.get(), cell_block_data, data_length);
+  data_length_ = data_length;
+  cur_pos_ = 0;
+}
{code}
The only CellScanner implementation for now will be the scanner that will work 
on top of the IPC buffers, so we do not need to memcpy the bytes. Better yet, 
we should use the 
https://github.com/facebook/folly/blob/master/folly/io/IOBuf.h for the 
CellScanner since that is what will be coming from the network using wangle. 
- Do we need the second Cell wrapper here? 
{code}
new Cell(*Cell::ParseCellData(*current_cell_data))
{code}



> Cell Scanner Implementation to be used by ResultScanner
> ---
>
> Key: HBASE-17278
> URL: https://issues.apache.org/jira/browse/HBASE-17278
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
>Assignee: Sudeep Sunthankar
> Attachments: HBASE-17278.HBASE-14850.v1.patch
>
>






[jira] [Commented] (HBASE-17301) TestSimpleRpcScheduler#testCoDelScheduling is broken in master

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15744181#comment-15744181
 ] 

Hadoop QA commented on HBASE-17301:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
5s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 55s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 89m 23s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 127m 24s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842922/HBASE-17301-master-001.patch
 |
| JIRA Issue | HBASE-17301 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 799dd7083d8d 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / adb319f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4886/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4886/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TestSimpleRpcScheduler#testCoDelScheduling is broken in master
> --
>
> Key: HBASE-17301
> URL: https://issues.apache.org/jira/browse/HBASE-17301
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: 

[jira] [Updated] (HBASE-16874) Fix TestMasterFailoverWithProcedures and ensure single proc-executor for kill/restart tests

2016-12-12 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-16874:
-
Fix Version/s: 1.1.8

FYI this was committed to master and 1.1 but nothing in between.

> Fix TestMasterFailoverWithProcedures and ensure single proc-executor for 
> kill/restart tests
> ---
>
> Key: HBASE-16874
> URL: https://issues.apache.org/jira/browse/HBASE-16874
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Reporter: Ted Yu
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 2.0.0, 1.1.8
>
> Attachments: 16874.v1.txt, HBASE-16874-v0.patch, HBASE-16874-v1.patch
>
>
> When examining failed test :
> https://builds.apache.org/job/HBase-TRUNK_matrix/lastCompletedBuild/jdk=JDK%201.8%20(latest),label=yahoo-not-h2/testReport/org.apache.hadoop.hbase.master.procedure/TestMasterFailoverWithProcedures/org_apache_hadoop_hbase_master_procedure_TestMasterFailoverWithProcedures/
> I noticed the following:
> {code}
> 2016-10-18 18:47:39,313 INFO  [Time-limited test] 
> procedure.TestMasterFailoverWithProcedures(306): Restart 2 exec state: 
> TRUNCATE_TABLE_CLEAR_FS_LAYOUT
> Exception in thread "ProcedureExecutorWorker-1" java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.stop(ProcedureExecutor.java:533)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1197)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:959)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$700(ProcedureExecutor.java:73)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1405)
> {code}
> This seems to be the result of race between stop() and join() methods.
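A minimal, hypothetical sketch of a shutdown pattern that avoids this kind of stop()/join() race (illustrative plain Java, not the ProcedureExecutor code): perform the teardown of shared state exactly once, under a lock, and join the worker outside it.

```java
// Hypothetical worker whose stop() and join() can race if teardown of
// shared state is not serialized with the worker's use of that state.
public class WorkerShutdown {
    private Thread worker;
    private volatile boolean running = true;

    public synchronized void start() {
        worker = new Thread(() -> {
            while (running) {
                Thread.onSpinWait(); // stand-in for executing procedures
            }
        });
        worker.start();
    }

    /** Returns true only for the call that actually stopped a live worker. */
    public boolean stopAndJoin() throws InterruptedException {
        Thread w;
        synchronized (this) {
            running = false; // signal before tearing anything down
            w = worker;
            worker = null;   // teardown happens exactly once, under the lock
        }
        if (w == null) {
            return false;    // a concurrent caller already did the teardown
        }
        w.join();            // join outside the lock to avoid blocking start()
        return true;
    }
}
```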





[jira] [Updated] (HBASE-16678) MapReduce jobs do not update counters from ScanMetrics

2016-12-12 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-16678:
-
Fix Version/s: (was: 1.1.8)
   1.1.7

> MapReduce jobs do not update counters from ScanMetrics
> --
>
> Key: HBASE-16678
> URL: https://issues.apache.org/jira/browse/HBASE-16678
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 1.2.4
>
> Attachments: hbase-16678_v1.patch
>
>
> Was inspecting a perf issue, where we needed the scanner metrics as counters 
> for a MR job. Turns out that the HBase scan counters are no longer working in 
> 1.0+. I think it got broken via HBASE-13030. 
> These are the counters:
> {code}
>   HBase Counters
>   BYTES_IN_REMOTE_RESULTS=0
>   BYTES_IN_RESULTS=280
>   MILLIS_BETWEEN_NEXTS=11
>   NOT_SERVING_REGION_EXCEPTION=0
>   NUM_SCANNER_RESTARTS=0
>   NUM_SCAN_RESULTS_STALE=0
>   REGIONS_SCANNED=1
>   REMOTE_RPC_CALLS=0
>   REMOTE_RPC_RETRIES=0
>   RPC_CALLS=3
>   RPC_RETRIES=0
> {code}
>  





[jira] [Commented] (HBASE-16033) Add more details in logging of responseTooSlow/TooLarge

2016-12-12 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15744057#comment-15744057
 ] 

Nick Dimiduk commented on HBASE-16033:
--

Yes, yes I did. Looking at the commit, I grabbed the ticket from the parent 
sha's commit line.

Phoenix guys are saying it's NBD, so we'll move on. Thanks for following up!

> Add more details in logging of responseTooSlow/TooLarge
> ---
>
> Key: HBASE-16033
> URL: https://issues.apache.org/jira/browse/HBASE-16033
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability
>Affects Versions: 1.2.3, 1.1.7
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21, 1.2.4, 1.1.8
>
> Attachments: HBASE-16033.patch, HBASE-16033.patch, HBASE-16033.patch
>
>
> Currently the log message when responseTooSlow/TooLarge is like:
> {noformat}
> 2016-06-08 12:18:04,363 WARN  
> [B.defaultRpcServer.handler=127,queue=10,port=16020]
> ipc.RpcServer: (responseTooSlow): 
> {"processingtimems":13125,"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)",
> "client":"11.251.158.22:36331","starttimems":1465359471238,"queuetimems":1540116,
> "class":"HRegionServer","responsesize":17,"method":"Multi"}
> {noformat}
> which is rather unhelpful for debugging since we don't know which 
> table/region/row the request was against.
> What's more, there are some if-else checks in the {{RpcServer#logResponse}} 
> method which try to do something different when the {{param}} includes an 
> instance of {{Operation}}, but there's only one place invoking 
> {{logResponse}}, and the {{param}} is always an instance of {{Message}}. 
> Checking the change history, I believe this is left-over cleanup from the 
> work of HBASE-8214.
> We will address the above issues, do some cleanup, and improve the log, just 
> as {{RpcServer$Call#toString}} does, to include the table/region/row 
> information of the request.





[jira] [Commented] (HBASE-16033) Add more details in logging of responseTooSlow/TooLarge

2016-12-12 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15744058#comment-15744058
 ] 

Nick Dimiduk commented on HBASE-16033:
--

Okay, thanks [~apurtell]

> Add more details in logging of responseTooSlow/TooLarge
> ---
>
> Key: HBASE-16033
> URL: https://issues.apache.org/jira/browse/HBASE-16033
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability
>Affects Versions: 1.2.3, 1.1.7
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0, 1.4.0, 0.98.21, 1.2.4, 1.1.8
>
> Attachments: HBASE-16033.patch, HBASE-16033.patch, HBASE-16033.patch
>
>
> Currently the log message when responseTooSlow/TooLarge is like:
> {noformat}
> 2016-06-08 12:18:04,363 WARN  
> [B.defaultRpcServer.handler=127,queue=10,port=16020]
> ipc.RpcServer: (responseTooSlow): 
> {"processingtimems":13125,"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)",
> "client":"11.251.158.22:36331","starttimems":1465359471238,"queuetimems":1540116,
> "class":"HRegionServer","responsesize":17,"method":"Multi"}
> {noformat}
> which is not very helpful for debugging, since we don't know which 
> table/region/row the request was against.
> What's more, there is an if-else check in the {{RpcServer#logResponse}} 
> method that tries to do something different when the {{param}} includes an 
> instance of {{Operation}}, but there's only one place invoking 
> {{logResponse}}, and the {{param}} is always an instance of {{Message}}. 
> Checking the change history, I believe this is left-over from the cleanup 
> work of HBASE-8214. 
> We will address the above issues, do some cleanup, and improve the log to 
> include table/region/row information of the request, just as 
> {{RpcServer$Call#toString}} does.
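As a rough illustration of the proposed improvement, the sketch below builds a responseTooSlow-style log entry that carries table/region/row context alongside the existing fields. This is not the actual HBase code; the field names "table", "region", and "row" and all values are illustrative assumptions, not what the patch necessarily emits.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: a slow-call log line extended with table/region/row context.
public class SlowResponseLogSketch {
    // Render the field map as a simple JSON-like string (all values quoted).
    static String toJson(Map<String, Object> fields) {
        StringBuilder sb = new StringBuilder("{");
        for (Map.Entry<String, Object> e : fields.entrySet()) {
            if (sb.length() > 1) sb.append(',');
            sb.append('"').append(e.getKey()).append("\":\"").append(e.getValue()).append('"');
        }
        return sb.append('}').toString();
    }

    static String responseTooSlow(long processingMs, String method, String client,
                                  String table, String region, String row) {
        Map<String, Object> f = new LinkedHashMap<>();
        f.put("processingtimems", processingMs);
        f.put("method", method);
        f.put("client", client);
        // The added context that tells operators what the slow call touched:
        f.put("table", table);
        f.put("region", region);
        f.put("row", row);
        return "(responseTooSlow): " + toJson(f);
    }

    public static void main(String[] args) {
        String line = responseTooSlow(13125, "Multi", "11.251.158.22:36331",
                "usertable", "region-1", "row-0042");
        if (!line.contains("\"table\":\"usertable\"")) throw new AssertionError();
        System.out.println(line);
    }
}
```

With table/region/row present, an operator can go straight from the warning to the offending workload instead of cross-referencing client logs.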





[jira] [Updated] (HBASE-16820) BulkLoad mvcc visibility only works accidentally

2016-12-12 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-16820:
-
Fix Version/s: (was: 1.1.8)
   1.1.9

> BulkLoad mvcc visibility only works accidentally 
> -
>
> Key: HBASE-16820
> URL: https://issues.apache.org/jira/browse/HBASE-16820
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.8
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Critical
> Fix For: 1.1.9
>
> Attachments: HBASE-16820-branch-1.1-v0.patch
>
>
> [~sergey.soldatov] has been debugging an issue with a 1.1 code base where the 
> commit for HBASE-16721 broke the bulk load visibility. After a bulk load, the 
> bulk load files are not visible because the sequence id assigned to the bulk 
> load is not advanced in mvcc. 
> Debugging further, we have noticed that the bulk load behavior is wrong, but 
> it works "accidentally" in all code bases (though it is broken in 1.1 after 
> HBASE-16721). Let me explain: 
>  - A BL request can optionally request a flush beforehand (this should be the 
> default), which causes the flush to happen with some sequence id. The flush 
> sequence id is one past all the cells' sequence ids, and it is returned as 
> the result of the flush operation. 
>  - BL then uses this particular sequence id to mark the files, but does not 
> get a new sequence id of its own or advance the mvcc number. 
>  - BL completes WITHOUT making sure that the sequence id is visible. 
>  - BL itself, though, writes entries to the WAL for the BL event, which in 
> 1.2 code bases go through the whole mvcc + seqId path, which makes sure that 
> earlier sequence ids (the flush sequence id) are visible via mvcc. 
> The problem with 1.1 is that the WAL entries only get sequence ids but do 
> not touch mvcc. With the patch for HBASE-16721, we have made it so that the 
> flushedSequenceId is not used in mvcc as the highest read point (although all 
> the data is still visible).
> BL relying on the flush sequence id is wrong for two reasons: 
>  - BL files are loaded with the flush sequence id from the memstore. This 
> particular sequence id is used twice for two different things and ends up 
> being the sequence id for the flushed file as well as the BL'ed files. 
>  - BL should make sure that it gets a new sequence id and that the new 
> sequence id is visible before returning the results. 
> [~ndimiduk] FYI. 
>  
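A toy model may make the visibility mechanics concrete. This is not HBase code; it is a minimal sketch of an mvcc read point, under the assumption that readers only see cells whose sequence id is at or below that point, which is why a bulk load that tags files with a sequence id past the read point, without advancing mvcc, leaves its cells invisible.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model (not HBase code) of mvcc-gated cell visibility.
public class MvccVisibilitySketch {
    static class Cell { final long seqId; Cell(long s) { seqId = s; } }

    long readPoint = 10;               // highest sequence id readers may see
    List<Cell> store = new ArrayList<>();

    // A reader only sees cells whose sequence id is at or below the read point.
    List<Cell> visibleCells() {
        List<Cell> out = new ArrayList<>();
        for (Cell c : store) if (c.seqId <= readPoint) out.add(c);
        return out;
    }

    public static void main(String[] args) {
        MvccVisibilitySketch region = new MvccVisibilitySketch();
        // Bulk load tags its file with a sequence id past the read point...
        region.store.add(new Cell(11));
        // ...so without advancing mvcc, the loaded cells are invisible.
        if (!region.visibleCells().isEmpty()) throw new AssertionError();
        // Advancing the read point (what BL should ensure) makes them visible.
        region.readPoint = 11;
        if (region.visibleCells().size() != 1) throw new AssertionError();
        System.out.println("bulk-loaded cell visible only after mvcc advance");
    }
}
```

In 1.2 the WAL append for the BL event happened to advance the read point as a side effect; the fix is for bulk load to advance it deliberately.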





[jira] [Updated] (HBASE-16820) BulkLoad mvcc visibility only works accidentally

2016-12-12 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-16820:
-
Priority: Critical  (was: Blocker)

Alright thanks for confirming.

> BulkLoad mvcc visibility only works accidentally 
> -
>
> Key: HBASE-16820
> URL: https://issues.apache.org/jira/browse/HBASE-16820
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.8
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Critical
> Fix For: 1.1.9
>
> Attachments: HBASE-16820-branch-1.1-v0.patch
>
>
> [~sergey.soldatov] has been debugging an issue with a 1.1 code base where the 
> commit for HBASE-16721 broke the bulk load visibility. After a bulk load, the 
> bulk load files are not visible because the sequence id assigned to the bulk 
> load is not advanced in mvcc. 
> Debugging further, we have noticed that the bulk load behavior is wrong, but 
> it works "accidentally" in all code bases (though it is broken in 1.1 after 
> HBASE-16721). Let me explain: 
>  - A BL request can optionally request a flush beforehand (this should be the 
> default), which causes the flush to happen with some sequence id. The flush 
> sequence id is one past all the cells' sequence ids, and it is returned as 
> the result of the flush operation. 
>  - BL then uses this particular sequence id to mark the files, but does not 
> get a new sequence id of its own or advance the mvcc number. 
>  - BL completes WITHOUT making sure that the sequence id is visible. 
>  - BL itself, though, writes entries to the WAL for the BL event, which in 
> 1.2 code bases go through the whole mvcc + seqId path, which makes sure that 
> earlier sequence ids (the flush sequence id) are visible via mvcc. 
> The problem with 1.1 is that the WAL entries only get sequence ids but do 
> not touch mvcc. With the patch for HBASE-16721, we have made it so that the 
> flushedSequenceId is not used in mvcc as the highest read point (although all 
> the data is still visible).
> BL relying on the flush sequence id is wrong for two reasons: 
>  - BL files are loaded with the flush sequence id from the memstore. This 
> particular sequence id is used twice for two different things and ends up 
> being the sequence id for the flushed file as well as the BL'ed files. 
>  - BL should make sure that it gets a new sequence id and that the new 
> sequence id is visible before returning the results. 
> [~ndimiduk] FYI. 
>  





[jira] [Comment Edited] (HBASE-17295) The namespace table has two regions

2016-12-12 Thread Chang chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15744030#comment-15744030
 ] 

Chang chen edited comment on HBASE-17295 at 12/13/16 3:37 AM:
--

The latest namespace region was created half a year ago; the log has already been rolled.

{code}
$ hadoop fs -ls  
/hbasedata/hbase-common/data/hbase/namespace/cb95abb94ed97849733d6a4d9c1d40aa
Found 3 items
-rw-r--r--   3 hbase hbase 42 2016-02-26 12:30 
/hbasedata/hbase-common/data/hbase/namespace/cb95abb94ed97849733d6a4d9c1d40aa/.regioninfo
{code}

Interestingly, the Master still balances these two regions, and hbck reports: 
{code}
ERROR: (region 
hbase:namespace,,1456460815614.cb95abb94ed97849733d6a4d9c1d40aa.) Multiple 
regions have the same startkey: 
ERROR: (region 
hbase:namespace,,1437013955376.1f6a26f3018010b3753663711e441682.) Multiple 
regions have the same startkey: 
ERROR: Found inconsistency in table hbase:namespace
{code}




was (Author: baibaichen):
The latest namespace region was created half  year ago, log was already rolled.

{quote}
$ hadoop fs -ls  
/hbasedata/hbase-common/data/hbase/namespace/cb95abb94ed97849733d6a4d9c1d40aa
Found 3 items
-rw-r--r--   3 hbase hbase 42 2016-02-26 12:30 
/hbasedata/hbase-common/data/hbase/namespace/cb95abb94ed97849733d6a4d9c1d40aa/.regioninfo
{quote}

Interesting, Master still balances these two regions, and hbck report: 
{code}
ERROR: (region 
hbase:namespace,,1456460815614.cb95abb94ed97849733d6a4d9c1d40aa.) Multiple 
regions have the same startkey: 
ERROR: (region 
hbase:namespace,,1437013955376.1f6a26f3018010b3753663711e441682.) Multiple 
regions have the same startkey: 
ERROR: Found inconsistency in table hbase:namespace
{code}



> The namespace table has two regions
> ---
>
> Key: HBASE-17295
> URL: https://issues.apache.org/jira/browse/HBASE-17295
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Chang chen
> Attachments: bug.PNG
>
>
> From the code, the hbase namespace meta table should not be allowed to be split.
> {code:title=HRegion#checkSplit}
> public byte[] checkSplit() {
> // Can't split META
> if (this.getRegionInfo().isMetaTable() ||
> 
> TableName.NAMESPACE_TABLE_NAME.equals(this.getRegionInfo().getTable())) {
>   if (shouldForceSplit()) {
> LOG.warn("Cannot split meta region in HBase 0.20 and above");
>   }
>   return null;
> }
> //.
> }
> {code}
> But recently I saw two namespace regions in our production deployment. It 
> may have been caused by a restart while the cluster was in a certain state.
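The hbck errors quoted above come from exactly this kind of check: flagging regions of a table that share the same start key. The sketch below is an illustration of that consistency check, not the actual hbck implementation; the region names used are made up.

```java
import java.util.*;

// Sketch of an hbck-style check: find regions that share a start key.
public class StartKeyOverlapSketch {
    // Given region name -> start key, return all regions whose start key
    // is also used by another region (an inconsistency for any table).
    static List<String> duplicateStartKeys(Map<String, String> regionToStartKey) {
        Map<String, List<String>> byStart = new HashMap<>();
        for (Map.Entry<String, String> e : regionToStartKey.entrySet())
            byStart.computeIfAbsent(e.getValue(), k -> new ArrayList<>()).add(e.getKey());
        List<String> offenders = new ArrayList<>();
        for (List<String> regions : byStart.values())
            if (regions.size() > 1) offenders.addAll(regions);
        Collections.sort(offenders);
        return offenders;
    }

    public static void main(String[] args) {
        Map<String, String> regions = new LinkedHashMap<>();
        regions.put("namespace-region-a", "");  // empty start key
        regions.put("namespace-region-b", "");  // same start key -> inconsistency
        List<String> bad = duplicateStartKeys(regions);
        if (bad.size() != 2) throw new AssertionError();
        System.out.println("regions with duplicate start key: " + bad);
    }
}
```

For a single-region table like hbase:namespace, any second region necessarily collides on the empty start key, which is what hbck is reporting.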





[jira] [Commented] (HBASE-17295) The namespace table has two regions

2016-12-12 Thread Chang chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15744030#comment-15744030
 ] 

Chang chen commented on HBASE-17295:


The latest namespace region was created half a year ago; the log has already been rolled.

{quote}
$ hadoop fs -ls  
/hbasedata/hbase-common/data/hbase/namespace/cb95abb94ed97849733d6a4d9c1d40aa
Found 3 items
-rw-r--r--   3 hbase hbase 42 2016-02-26 12:30 
/hbasedata/hbase-common/data/hbase/namespace/cb95abb94ed97849733d6a4d9c1d40aa/.regioninfo
{quote}

Interestingly, the Master still balances these two regions, and hbck reports: 
{code}
ERROR: (region 
hbase:namespace,,1456460815614.cb95abb94ed97849733d6a4d9c1d40aa.) Multiple 
regions have the same startkey: 
ERROR: (region 
hbase:namespace,,1437013955376.1f6a26f3018010b3753663711e441682.) Multiple 
regions have the same startkey: 
ERROR: Found inconsistency in table hbase:namespace
{code}



> The namespace table has two regions
> ---
>
> Key: HBASE-17295
> URL: https://issues.apache.org/jira/browse/HBASE-17295
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Chang chen
> Attachments: bug.PNG
>
>
> From the code, the hbase namespace meta table should not be allowed to be split.
> {code:title=HRegion#checkSplit}
> public byte[] checkSplit() {
> // Can't split META
> if (this.getRegionInfo().isMetaTable() ||
> 
> TableName.NAMESPACE_TABLE_NAME.equals(this.getRegionInfo().getTable())) {
>   if (shouldForceSplit()) {
> LOG.warn("Cannot split meta region in HBase 0.20 and above");
>   }
>   return null;
> }
> //.
> }
> {code}
> But recently I saw two namespace regions in our production deployment. It 
> may have been caused by a restart while the cluster was in a certain state.





[jira] [Comment Edited] (HBASE-17300) Concurrently calling checkAndPut with expected value as null returns true unexpectedly

2016-12-12 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15744018#comment-15744018
 ] 

Andrew Purtell edited comment on HBASE-17300 at 12/13/16 3:32 AM:
--

Set affected versions as reported. As part of the 0.98.24 work, let me see if a 
port of this to the HBase API only will repro the issue; if it does, whether an 
earlier version (0.98.17 perhaps?) worked correctly, and if so, what commit 
broke things.


was (Author: apurtell):
Set fix versions as reported. Let me see as part of 0.98.24 work if a port of 
this to HBase API only will repro the issue. If so, if an earlier version 
worked correctly (0.98.17 perhaps?). If so, what commit broke things.

> Concurrently calling checkAndPut with expected value as null returns true 
> unexpectedly
> --
>
> Key: HBASE-17300
> URL: https://issues.apache.org/jira/browse/HBASE-17300
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.23, 1.2.4
>Reporter: Samarth Jain
>
> Attached is the test case. I have added some comments so hopefully the test 
> makes sense. It actually is causing test failures on the Phoenix branches.
> PS - I am using a bit of Phoenix API to get hold of HBaseAdmin. But it should 
> be fairly straightforward to adopt it for HBase IT tests. 
> The test fails consistently using HBase-0.98.23. It exhibits flappy behavior 
> with the 1.2 branch (failed twice in 5 tries). 
> {code}
> @Test
> public void testNullCheckAndPut() throws Exception {
>  try (Connection conn = DriverManager.getConnection(getUrl())) {
> try (HBaseAdmin admin = 
> conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
> Callable<Boolean> c1 = new CheckAndPutCallable();
> Callable<Boolean> c2 = new CheckAndPutCallable();
> ExecutorService e = Executors.newFixedThreadPool(5);
> Future<Boolean> f1 = e.submit(c1);
> Future<Boolean> f2 = e.submit(c2);
> assertTrue(f1.get() || f2.get());
> assertFalse(f1.get() && f2.get());
> }
> }
> }
> 
> 
> private static final class CheckAndPutCallable implements 
> Callable<Boolean> {
> @Override
> public Boolean call() throws Exception {
> byte[] rowToLock = "ROW".getBytes();
> byte[] colFamily = "COLUMN_FAMILY".getBytes();
> byte[] column = "COLUMN".getBytes();
> byte[] newValue = "NEW_VALUE".getBytes();
> byte[] oldValue = "OLD_VALUE".getBytes();
> byte[] tableName = "table".getBytes();
> boolean acquired = false;
> try (Connection conn = DriverManager.getConnection(getUrl())) {
> try (HBaseAdmin admin = 
> conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
> HTableDescriptor tableDesc = new 
> HTableDescriptor(TableName.valueOf(tableName));
> HColumnDescriptor columnDesc = new 
> HColumnDescriptor(colFamily);
> columnDesc.setTimeToLive(600);
> tableDesc.addFamily(columnDesc);
> try {
> admin.createTable(tableDesc);
> } catch (TableExistsException e) {
> // ignore
> }
> try (HTableInterface table = 
> admin.getConnection().getTable(tableName)) {
> Put put = new Put(rowToLock);
> put.add(colFamily, column, oldValue); // add a row 
> with column set to oldValue
> table.put(put);
> put = new Put(rowToLock);
> put.add(colFamily, column, newValue);
> // only one of the threads should be able to get 
> return value of true for the expected value of oldValue
> acquired = table.checkAndPut(rowToLock, colFamily, 
> column, oldValue, put); 
> if (!acquired) {
>// if a thread didn't get true before, then it 
> shouldn't get true this time either
>// because the column DOES exist
>acquired = table.checkAndPut(rowToLock, colFamily, 
> column, null, put);
> }
> }
> }
> }  
> return acquired;
> }
> }
> {code}
> cc [~apurtell], [~jamestaylor], [~lhofhansl]. 
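The semantics the test expects can be modeled without HBase at all. The sketch below is not HBase code: it stands in a `ConcurrentHashMap.putIfAbsent` for a checkAndPut with a null expected value, since both are supposed to be atomic "succeed only if the column is absent" operations, so exactly one of two concurrent callers may win.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: checkAndPut(expected = null) should behave like an atomic
// putIfAbsent -- at most one of two concurrent callers returns true.
public class NullCheckAndPutSketch {
    static int winnersAtomic() throws Exception {
        ConcurrentHashMap<String, String> row = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Each caller succeeds only if the column was absent when it wrote.
        Callable<Boolean> c = () -> row.putIfAbsent("COLUMN", "NEW_VALUE") == null;
        Future<Boolean> f1 = pool.submit(c);
        Future<Boolean> f2 = pool.submit(c);
        int winners = (f1.get() ? 1 : 0) + (f2.get() ? 1 : 0);
        pool.shutdown();
        return winners;
    }

    public static void main(String[] args) throws Exception {
        // The bug report is that HBase sometimes lets BOTH callers win;
        // the atomic model always yields exactly one winner.
        if (winnersAtomic() != 1) throw new AssertionError();
        System.out.println("exactly one checkAndPut(null) succeeded");
    }
}
```

The reported failure corresponds to both futures returning true, i.e. two winners, which an atomic check-and-put must never allow.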





[jira] [Updated] (HBASE-17300) Concurrently calling checkAndPut with expected value as null returns true unexpectedly

2016-12-12 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-17300:
---
Affects Version/s: 0.98.23
   1.2.4

Set affected versions as reported. As part of the 0.98.24 work, let me see if a 
port of this to the HBase API only will repro the issue; if it does, whether an 
earlier version (0.98.17 perhaps?) worked correctly, and if so, what commit 
broke things.

> Concurrently calling checkAndPut with expected value as null returns true 
> unexpectedly
> --
>
> Key: HBASE-17300
> URL: https://issues.apache.org/jira/browse/HBASE-17300
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.23, 1.2.4
>Reporter: Samarth Jain
>
> Attached is the test case. I have added some comments so hopefully the test 
> makes sense. It actually is causing test failures on the Phoenix branches.
> PS - I am using a bit of Phoenix API to get hold of HBaseAdmin. But it should 
> be fairly straightforward to adopt it for HBase IT tests. 
> The test fails consistently using HBase-0.98.23. It exhibits flappy behavior 
> with the 1.2 branch (failed twice in 5 tries). 
> {code}
> @Test
> public void testNullCheckAndPut() throws Exception {
>  try (Connection conn = DriverManager.getConnection(getUrl())) {
> try (HBaseAdmin admin = 
> conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
> Callable<Boolean> c1 = new CheckAndPutCallable();
> Callable<Boolean> c2 = new CheckAndPutCallable();
> ExecutorService e = Executors.newFixedThreadPool(5);
> Future<Boolean> f1 = e.submit(c1);
> Future<Boolean> f2 = e.submit(c2);
> assertTrue(f1.get() || f2.get());
> assertFalse(f1.get() && f2.get());
> }
> }
> }
> 
> 
> private static final class CheckAndPutCallable implements 
> Callable<Boolean> {
> @Override
> public Boolean call() throws Exception {
> byte[] rowToLock = "ROW".getBytes();
> byte[] colFamily = "COLUMN_FAMILY".getBytes();
> byte[] column = "COLUMN".getBytes();
> byte[] newValue = "NEW_VALUE".getBytes();
> byte[] oldValue = "OLD_VALUE".getBytes();
> byte[] tableName = "table".getBytes();
> boolean acquired = false;
> try (Connection conn = DriverManager.getConnection(getUrl())) {
> try (HBaseAdmin admin = 
> conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
> HTableDescriptor tableDesc = new 
> HTableDescriptor(TableName.valueOf(tableName));
> HColumnDescriptor columnDesc = new 
> HColumnDescriptor(colFamily);
> columnDesc.setTimeToLive(600);
> tableDesc.addFamily(columnDesc);
> try {
> admin.createTable(tableDesc);
> } catch (TableExistsException e) {
> // ignore
> }
> try (HTableInterface table = 
> admin.getConnection().getTable(tableName)) {
> Put put = new Put(rowToLock);
> put.add(colFamily, column, oldValue); // add a row 
> with column set to oldValue
> table.put(put);
> put = new Put(rowToLock);
> put.add(colFamily, column, newValue);
> // only one of the threads should be able to get 
> return value of true for the expected value of oldValue
> acquired = table.checkAndPut(rowToLock, colFamily, 
> column, oldValue, put); 
> if (!acquired) {
>// if a thread didn't get true before, then it 
> shouldn't get true this time either
>// because the column DOES exist
>acquired = table.checkAndPut(rowToLock, colFamily, 
> column, null, put);
> }
> }
> }
> }  
> return acquired;
> }
> }
> {code}
> cc [~apurtell], [~jamestaylor], [~lhofhansl]. 





[jira] [Commented] (HBASE-17298) remove unused code in HRegion#doMiniBatchMutation

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743969#comment-15743969
 ] 

Hadoop QA commented on HBASE-17298:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
1s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 47s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 110m 11s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 147m 58s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.2 Server=1.12.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842902/HBASE-17298-master-001.patch
 |
| JIRA Issue | HBASE-17298 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux cd02d956d9cb 3.13.0-100-generic #147-Ubuntu SMP Tue Oct 18 
16:48:51 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / adb319f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4885/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4885/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> remove unused code in HRegion#doMiniBatchMutation
> -
>
> Key: HBASE-17298
> URL: https://issues.apache.org/jira/browse/HBASE-17298
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 2.0.0
>   

[jira] [Commented] (HBASE-17301) TestSimpleRpcScheduler#testCoDelScheduling is broken in master

2016-12-12 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743960#comment-15743960
 ] 

huaxiang sun commented on HBASE-17301:
--

The thread name was changed to one with the prefix "RpcServer.deafult.BQ.Codel.handler".
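A test that selects handler threads by name prefix silently matches nothing once the prefix changes, which is presumably why the test broke. The sketch below illustrates that failure mode; the thread names and the "old" prefix are illustrative, not taken from the actual test.

```java
// Sketch: counting RPC handler threads by name prefix -- a rename of the
// prefix makes the old match find zero threads.
public class HandlerPrefixSketch {
    static long countByPrefix(String[] threadNames, String prefix) {
        long n = 0;
        for (String name : threadNames) if (name.startsWith(prefix)) n++;
        return n;
    }

    public static void main(String[] args) {
        String[] threads = {
            "RpcServer.deafult.BQ.Codel.handler=0",  // new naming scheme
            "RpcServer.deafult.BQ.Codel.handler=1",
            "main"
        };
        // Matching on a stale prefix finds nothing; the test must be updated.
        if (countByPrefix(threads, "RpcServer.Codel.handler") != 0) throw new AssertionError();
        if (countByPrefix(threads, "RpcServer.deafult.BQ.Codel.handler") != 2) throw new AssertionError();
        System.out.println("stale prefix matches 0 threads, current prefix matches 2");
    }
}
```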

> TestSimpleRpcScheduler#testCoDelScheduling is broken in master
> --
>
> Key: HBASE-17301
> URL: https://issues.apache.org/jira/browse/HBASE-17301
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-17301-master-001.patch
>
>






[jira] [Updated] (HBASE-17301) TestSimpleRpcScheduler#testCoDelScheduling is broken in master

2016-12-12 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-17301:
-
Status: Patch Available  (was: Open)

> TestSimpleRpcScheduler#testCoDelScheduling is broken in master
> --
>
> Key: HBASE-17301
> URL: https://issues.apache.org/jira/browse/HBASE-17301
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-17301-master-001.patch
>
>






[jira] [Updated] (HBASE-17301) TestSimpleRpcScheduler#testCoDelScheduling is broken in master

2016-12-12 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-17301:
-
Attachment: HBASE-17301-master-001.patch

> TestSimpleRpcScheduler#testCoDelScheduling is broken in master
> --
>
> Key: HBASE-17301
> URL: https://issues.apache.org/jira/browse/HBASE-17301
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-17301-master-001.patch
>
>






[jira] [Created] (HBASE-17301) TestSimpleRpcScheduler#testCoDelScheduling is broken in master

2016-12-12 Thread huaxiang sun (JIRA)
huaxiang sun created HBASE-17301:


 Summary: TestSimpleRpcScheduler#testCoDelScheduling is broken in 
master
 Key: HBASE-17301
 URL: https://issues.apache.org/jira/browse/HBASE-17301
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0
Reporter: huaxiang sun
Assignee: huaxiang sun
Priority: Minor








[jira] [Commented] (HBASE-17297) Single Filter in parenthesis cannot be parsed correctly

2016-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743941#comment-15743941
 ] 

Hudson commented on HBASE-17297:


SUCCESS: Integrated in Jenkins build HBase-1.4 #564 (See 
[https://builds.apache.org/job/HBase-1.4/564/])
HBASE-17297 Single Filter in parenthesis cannot be parsed correctly (tedyu: rev 
1f9214bee7dbec345cdf8a613f7b5da64ff827a7)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestParseFilter.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java


> Single Filter in parenthesis cannot be parsed correctly
> ---
>
> Key: HBASE-17297
> URL: https://issues.apache.org/jira/browse/HBASE-17297
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Reporter: Xuesen Liang
>Assignee: Xuesen Liang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 17297.master.patch
>
>
> {code}
> hbase(main):010:0* put 'testtable', 'row1', 'cf1:a', 'ab'
> 0 row(s) in 0.1800 seconds
> hbase(main):011:0> scan 'testtable'
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):012:0> scan 'testtable', FILTER=>"(ValueFilter(=,'binary:ab'))"
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):013:0* scan 'testtable', FILTER=>"(ValueFilter(=,'binary:x') AND 
> ValueFilter(=,'binary:y')) OR ValueFilter(=,'binary:ab')"
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):014:0> scan 'testtable', FILTER=>"(ValueFilter(=,'binary:x') AND 
> (ValueFilter(=,'binary:y'))) OR ValueFilter(=,'binary:ab')"
> ROW  COLUMN+CELL
> 0 row(s) in 0.0100 seconds
> {code}
> In the last scan, we should have gotten a row.
> The underlying cause is that the last filter is parsed incorrectly.
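The grammar the filter string parser must accept can be sketched with a tiny recursive-descent evaluator. This is an illustration of the required behavior, not the actual ParseFilter code: `T` and `F` stand in for filters that do or do not match, and the key case is that a parenthesized single filter like `(F)` is a valid operand inside a larger expression.

```java
// Minimal recursive-descent evaluator for boolean filter expressions with
// AND/OR and parentheses -- a sketch of the grammar, not HBase's ParseFilter.
public class FilterExprSketch {
    private final String s;
    private int pos;
    FilterExprSketch(String expr) { this.s = expr.replace(" ", ""); }

    static boolean eval(String expr) { return new FilterExprSketch(expr).orExpr(); }

    private boolean orExpr() {                 // orExpr := andExpr (OR andExpr)*
        boolean v = andExpr();
        while (peek("OR")) { pos += 2; v = andExpr() || v; }
        return v;
    }
    private boolean andExpr() {                // andExpr := atom (AND atom)*
        boolean v = atom();
        while (peek("AND")) { pos += 3; v = atom() && v; }
        return v;
    }
    private boolean atom() {                   // atom := '(' orExpr ')' | T | F
        if (s.charAt(pos) == '(') {
            pos++;                             // a parenthesized sub-expression,
            boolean v = orExpr();              // even a single filter, is valid
            pos++;                             // consume ')'
            return v;
        }
        return s.charAt(pos++) == 'T';
    }
    private boolean peek(String kw) { return s.startsWith(kw, pos); }

    public static void main(String[] args) {
        // Mirrors the failing shell scan: (F AND (F)) OR T should still match.
        if (!eval("(F AND (F)) OR T")) throw new AssertionError();
        if (!eval("(T)")) throw new AssertionError();
        System.out.println("nested single-filter parentheses evaluate correctly");
    }
}
```

Dropping the `(F)` case in `atom` is exactly the kind of gap that makes the last shell scan return zero rows.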





[jira] [Comment Edited] (HBASE-15756) Pluggable RpcServer

2016-12-12 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743880#comment-15743880
 ] 

binlijin edited comment on HBASE-15756 at 12/13/16 2:29 AM:


The current RpcServer is complicated; it supports plain data, security, and 
encrypted data.
We only use plain data, not security/encryption. I think it will take time to 
stabilize Netty4RpcServer with full functionality without breaking the wire 
format. And I think making RpcServer pluggable makes it easy to implement 
Netty4RpcServer and to switch from RpcServer to Netty4RpcServer. When 
Netty4RpcServer is stable, we can delete the current RpcServer.


was (Author: aoxiang):
The current RpcServer is complicated, it support plain data/security/encryption 
data...
We only use plain data, do not use security/encryption. I think it need time to 
stabilize the Netty4RpcServer with all function and do not break the wire 
format. And i think make RpcServer pluggable can make it easy to implement 
Netty4RpcServer and switch from RpcServer to Netty4RpcServer. When 
Netty4RpcServer is stabile than we can delete the current RpcServer.

> Pluggable RpcServer
> ---
>
> Key: HBASE-15756
> URL: https://issues.apache.org/jira/browse/HBASE-15756
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, rpc
>Reporter: binlijin
>Assignee: binlijin
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: Cluster_total_QPS.png, MultiResponder.branch-1.patch, 
> MultiResponder.master.patch, Netty4RpcServer_forperf.patch, 
> NettyRpcServer.patch, NettyRpcServer_forperf.patch, 
> PooledByteBufAllocator.patch, PooledByteBufAllocator2.patch, gc.png, 
> gets.png, gets.png, idle.png, patched.vs.patched_and_cached.vs.no_patch.png, 
> queue.png
>
>
> Currently we use a simple RpcServer and cannot configure or use another 
> implementation. This issue is to make the RpcServer pluggable, so we can 
> provide other implementations, for example a Netty RPC server. A patch will 
> be uploaded later.





[jira] [Commented] (HBASE-15756) Pluggable RpcServer

2016-12-12 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743889#comment-15743889
 ] 

binlijin commented on HBASE-15756:
--

I added metrics to record how many responses are written by the Handlers and 
by the Responder.
When sharing connections:
"numResponsesWriteByResponder" : 79995359,
"numResponsesWriteByHandler" : 578055,

When not sharing connections:
"numResponsesWriteByResponder" : 88855,
"numResponsesWriteByHandler" : 137469140,

So when sharing connections, most responses are written to the client by the 
Responder. 
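The split being measured can be sketched as follows. This is an illustration of the handler/responder pattern, not the HBase implementation: a handler tries to write the response inline and only hands off to a responder when the connection is busy, which on a shared connection is most of the time.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the handler/responder split: inline write on the fast path,
// queue for a responder thread when the channel is contended.
public class ResponderHandoffSketch {
    int writtenByHandler, writtenByResponder;
    final Deque<String> responderQueue = new ArrayDeque<>();

    // channelBusy models a shared connection whose socket buffer is contended.
    void sendResponse(String response, boolean channelBusy) {
        if (!channelBusy) {
            writtenByHandler++;               // fast path: handler writes inline
        } else {
            responderQueue.add(response);     // slow path: hand off to responder
        }
    }

    void runResponder() {                     // responder drains queued responses
        while (!responderQueue.isEmpty()) {
            responderQueue.poll();
            writtenByResponder++;
        }
    }

    public static void main(String[] args) {
        ResponderHandoffSketch rpc = new ResponderHandoffSketch();
        // With a shared connection most writes contend, as in the metrics above.
        for (int i = 0; i < 100; i++) rpc.sendResponse("r" + i, i % 10 != 0);
        rpc.runResponder();
        if (rpc.writtenByHandler != 10 || rpc.writtenByResponder != 90) throw new AssertionError();
        System.out.println("handler=" + rpc.writtenByHandler + " responder=" + rpc.writtenByResponder);
    }
}
```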

> Pluggable RpcServer
> ---
>
> Key: HBASE-15756
> URL: https://issues.apache.org/jira/browse/HBASE-15756
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, rpc
>Reporter: binlijin
>Assignee: binlijin
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: Cluster_total_QPS.png, MultiResponder.branch-1.patch, 
> MultiResponder.master.patch, Netty4RpcServer_forperf.patch, 
> NettyRpcServer.patch, NettyRpcServer_forperf.patch, 
> PooledByteBufAllocator.patch, PooledByteBufAllocator2.patch, gc.png, 
> gets.png, gets.png, idle.png, patched.vs.patched_and_cached.vs.no_patch.png, 
> queue.png
>
>
> Currently we use a simple RpcServer and cannot configure or use another 
> implementation. This issue is to make the RpcServer pluggable, so we can 
> provide other implementations, for example a Netty RPC server. A patch will 
> be uploaded later.





[jira] [Commented] (HBASE-17300) Concurrently calling checkAndPut with expected value as null returns true unexpectedly

2016-12-12 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743886#comment-15743886
 ] 

Andrew Purtell commented on HBASE-17300:


bq. I am using a bit of Phoenix API to get hold of HBaseAdmin. But it should be 
fairly straightforward to adopt it for HBase IT tests. 
You're more likely to get engagement from the HBase community if the repro case 
just works with the HBase API. FWIW

> Concurrently calling checkAndPut with expected value as null returns true 
> unexpectedly
> --
>
> Key: HBASE-17300
> URL: https://issues.apache.org/jira/browse/HBASE-17300
> Project: HBase
>  Issue Type: Bug
>Reporter: Samarth Jain
>
> Attached is the test case. I have added some comments so hopefully the test 
> makes sense. It actually is causing test failures on the Phoenix branches.
> PS - I am using a bit of Phoenix API to get hold of HBaseAdmin. But it should 
> be fairly straightforward to adapt it for HBase IT tests. 
> The test fails consistently using HBase-0.98.23. It exhibits flappy behavior 
> with the 1.2 branch (failed twice in 5 tries). 
> {code}
> @Test
> public void testNullCheckAndPut() throws Exception {
>     try (Connection conn = DriverManager.getConnection(getUrl())) {
>         try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
>             Callable<Boolean> c1 = new CheckAndPutCallable();
>             Callable<Boolean> c2 = new CheckAndPutCallable();
>             ExecutorService e = Executors.newFixedThreadPool(5);
>             Future<Boolean> f1 = e.submit(c1);
>             Future<Boolean> f2 = e.submit(c2);
>             assertTrue(f1.get() || f2.get());
>             assertFalse(f1.get() && f2.get());
>         }
>     }
> }
>
> private static final class CheckAndPutCallable implements Callable<Boolean> {
>     @Override
>     public Boolean call() throws Exception {
>         byte[] rowToLock = "ROW".getBytes();
>         byte[] colFamily = "COLUMN_FAMILY".getBytes();
>         byte[] column = "COLUMN".getBytes();
>         byte[] newValue = "NEW_VALUE".getBytes();
>         byte[] oldValue = "OLD_VALUE".getBytes();
>         byte[] tableName = "table".getBytes();
>         boolean acquired = false;
>         try (Connection conn = DriverManager.getConnection(getUrl())) {
>             try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
>                 HTableDescriptor tableDesc = new HTableDescriptor(TableName.valueOf(tableName));
>                 HColumnDescriptor columnDesc = new HColumnDescriptor(colFamily);
>                 columnDesc.setTimeToLive(600);
>                 tableDesc.addFamily(columnDesc);
>                 try {
>                     admin.createTable(tableDesc);
>                 } catch (TableExistsException e) {
>                     // ignore
>                 }
>                 try (HTableInterface table = admin.getConnection().getTable(tableName)) {
>                     Put put = new Put(rowToLock);
>                     put.add(colFamily, column, oldValue); // add a row with column set to oldValue
>                     table.put(put);
>                     put = new Put(rowToLock);
>                     put.add(colFamily, column, newValue);
>                     // only one of the threads should be able to get a return value of true for the expected value of oldValue
>                     acquired = table.checkAndPut(rowToLock, colFamily, column, oldValue, put);
>                     if (!acquired) {
>                         // if a thread didn't get true before, then it shouldn't get true this time either
>                         // because the column DOES exist
>                         acquired = table.checkAndPut(rowToLock, colFamily, column, null, put);
>                     }
>                 }
>             }
>         }
>         return acquired;
>     }
> }
> {code}
> cc [~apurtell], [~jamestaylor], [~lhofhansl]. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15756) Pluggable RpcServer

2016-12-12 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743880#comment-15743880
 ] 

binlijin commented on HBASE-15756:
--

The current RpcServer is complicated: it supports plain data, security, and encrypted 
data. We only use plain data, not security or encryption. I think it will take time to 
stabilize a Netty4RpcServer that has all of that functionality and does not break the 
wire format. Making the RpcServer pluggable makes it easy to implement a 
Netty4RpcServer and to switch from RpcServer to Netty4RpcServer. Once 
Netty4RpcServer is stable, we can delete the current RpcServer.
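As a sketch of what "pluggable" could look like (purely illustrative names; the actual HBASE-15756 patch may wire this differently, e.g. via reflection on a configured class name): implementations sit behind a common interface, and a small factory chooses one based on a configuration value.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Illustrative sketch of a pluggable RPC server: implementations register
// behind a common interface and the active one is chosen by a config key.
// None of these names are the actual HBase classes.
public class PluggableRpcServerSketch {
    interface RpcServer {
        String name();
        void start();
    }

    static class SimpleRpcServer implements RpcServer {
        public String name() { return "simple"; }
        public void start() { /* a blocking NIO accept loop would go here */ }
    }

    static class NettyRpcServer implements RpcServer {
        public String name() { return "netty"; }
        public void start() { /* a Netty bootstrap would go here */ }
    }

    static final Map<String, Supplier<RpcServer>> REGISTRY = new HashMap<>();
    static {
        REGISTRY.put("simple", SimpleRpcServer::new);
        REGISTRY.put("netty", NettyRpcServer::new);
    }

    // Stand-in for reading an implementation choice from something like hbase-site.xml.
    static RpcServer create(String configuredImpl) {
        Supplier<RpcServer> factory = REGISTRY.get(configuredImpl);
        if (factory == null) throw new IllegalArgumentException("unknown impl: " + configuredImpl);
        return factory.get();
    }

    public static void main(String[] args) {
        RpcServer server = create("netty");
        server.start();
        System.out.println("running: " + server.name()); // prints "running: netty"
    }
}
```

With this shape, stabilizing a new implementation can happen behind the interface while the default stays on the existing server.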

> Pluggable RpcServer
> ---
>
> Key: HBASE-15756
> URL: https://issues.apache.org/jira/browse/HBASE-15756
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, rpc
>Reporter: binlijin
>Assignee: binlijin
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: Cluster_total_QPS.png, MultiResponder.branch-1.patch, 
> MultiResponder.master.patch, Netty4RpcServer_forperf.patch, 
> NettyRpcServer.patch, NettyRpcServer_forperf.patch, 
> PooledByteBufAllocator.patch, PooledByteBufAllocator2.patch, gc.png, 
> gets.png, gets.png, idle.png, patched.vs.patched_and_cached.vs.no_patch.png, 
> queue.png
>
>
> Currently we use a simple RpcServer and cannot configure or use another 
> implementation. This issue is to make the RpcServer pluggable, so we can provide 
> other implementations, for example a Netty RPC server. A patch will be uploaded later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-15756) Pluggable RpcServer

2016-12-12 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743861#comment-15743861
 ] 

binlijin edited comment on HBASE-15756 at 12/13/16 2:16 AM:


I also tested the Netty4RpcServer perf on the master version; the Netty4RpcServer 
can be found at https://github.com/binlijin/hbase/tree/NettyRpcServer.
1 regionserver, 1 client machine; client and RS on different machines.
Wrote a table with 10M rows, then split it into 32 regions; every row has only 
one cell, keyLength=10B, valueLength=256B.
Only random reads were tested, with all data cached in the LruBlockCache.
Client: YCSB + hbase-1.1.2 client; 16 YCSB processes started on the client machine, 
each with 32 threads.

Share connections: a YCSB build with 
https://github.com/brianfrankcooper/YCSB/commit/57e1ab5a0cae13d1766c8fa6bdbf9d9117ee50d0
No share connections: a YCSB build without that commit.

The results:
(Master + Netty) + (RpcClientImpl + No share connections) = 600K/S
(Master + SingleResponder) + (RpcClientImpl + No share connections) = 600K/S

(Master + Netty) + (RpcClientImpl + share connections) = 500K/S
(Master + MultiResponder) + (RpcClientImpl + share connections) = 500K/S
(Master + SingleResponder) + (RpcClientImpl + share connections) = 295K/S

(Master + MultiResponder) + (AsyncRpcClient + share connections) = 570K/S (Netty client)
(Master + Netty) + (AsyncRpcClient + share connections) = 620K/S (Netty client)

MultiResponder: see patch MultiResponder.master.patch



was (Author: aoxiang):
And i also test the Netty4RpcServer perf on master version, the Netty4RpcServer 
can get at  https://github.com/binlijin/hbase/tree/NettyRpcServer.
1 regionserver, 1 client machine, client and rs on different machine.
Write a table with 10M rows, then split into 32 regions, every row have only 
one cell, keyLength=10B, valueLength=256B.
Only test random read, and all data cache in LruBlockCache.
Client: YCSB + hbase-1.1.2 client, start 16 YCSB process on client machine, and 
every process have 32 threads.

share connections:  use YCSB build with 
https://github.com/brianfrankcooper/YCSB/commit/57e1ab5a0cae13d1766c8fa6bdbf9d9117ee50d0
No share connections: use YCSB build without 
https://github.com/brianfrankcooper/YCSB/commit/57e1ab5a0cae13d1766c8fa6bdbf9d9117ee50d0

The result is: 
(Master + Netty)+  ( RpcClientImpl   + No share connections ) = 500K
(Master + SingleResponder) +  ( RpcClientImpl   + No share connections ) = 
500K

(Master + Netty)+  ( RpcClientImpl   + share connections ) = 500K
(Master + MultiResponder)+  ( RpcClientImpl   + share connections ) = 500K
(Master + SingleResponder) +  ( RpcClientImpl   + share connections ) = 295K

(Master + MultiResponder)  +   ( AsyncRpcClient  + share connections ) = 570K   
(Netty client)
(Master + Netty)  +  (AsyncRpcClient  + share connections ) =  620K  (Netty 
client)

MultiResponder:  see patch MultiResponder.master.patch


> Pluggable RpcServer
> ---
>
> Key: HBASE-15756
> URL: https://issues.apache.org/jira/browse/HBASE-15756
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, rpc
>Reporter: binlijin
>Assignee: binlijin
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: Cluster_total_QPS.png, MultiResponder.branch-1.patch, 
> MultiResponder.master.patch, Netty4RpcServer_forperf.patch, 
> NettyRpcServer.patch, NettyRpcServer_forperf.patch, 
> PooledByteBufAllocator.patch, PooledByteBufAllocator2.patch, gc.png, 
> gets.png, gets.png, idle.png, patched.vs.patched_and_cached.vs.no_patch.png, 
> queue.png
>
>
> Currently we use a simple RpcServer and cannot configure or use another 
> implementation. This issue is to make the RpcServer pluggable, so we can provide 
> other implementations, for example a Netty RPC server. A patch will be uploaded later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-15756) Pluggable RpcServer

2016-12-12 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743861#comment-15743861
 ] 

binlijin edited comment on HBASE-15756 at 12/13/16 2:15 AM:


I also tested the Netty4RpcServer perf on the master version; the Netty4RpcServer 
can be found at https://github.com/binlijin/hbase/tree/NettyRpcServer.
1 regionserver, 1 client machine; client and RS on different machines.
Wrote a table with 10M rows, then split it into 32 regions; every row has only 
one cell, keyLength=10B, valueLength=256B.
Only random reads were tested, with all data cached in the LruBlockCache.
Client: YCSB + hbase-1.1.2 client; 16 YCSB processes started on the client machine, 
each with 32 threads.

Share connections: a YCSB build with 
https://github.com/brianfrankcooper/YCSB/commit/57e1ab5a0cae13d1766c8fa6bdbf9d9117ee50d0
No share connections: a YCSB build without that commit.

The results:
(Master + Netty) + (RpcClientImpl + No share connections) = 500K
(Master + SingleResponder) + (RpcClientImpl + No share connections) = 500K

(Master + Netty) + (RpcClientImpl + share connections) = 500K
(Master + MultiResponder) + (RpcClientImpl + share connections) = 500K
(Master + SingleResponder) + (RpcClientImpl + share connections) = 295K

(Master + MultiResponder) + (AsyncRpcClient + share connections) = 570K (Netty client)
(Master + Netty) + (AsyncRpcClient + share connections) = 620K (Netty client)

MultiResponder: see patch MultiResponder.master.patch



was (Author: aoxiang):
And i also test the Netty4RpcServer perf on master version, the Netty4RpcServer 
can get at  https://github.com/binlijin/hbase/tree/NettyRpcServer.
1 regionserver, 1 client machine, client and rs on different machine.
Write a table with 10M rows, then split into 32 regions, every row have only 
one cell, keyLength=10B, valueLength=256B.
Only test random read, and all data cache in LruBlockCache.
Client: YCSB + hbase-1.1.2 client, start 16 YCSB process on client machine, and 
every process have 32 threads.

share connections:  use YCSB build with 
https://github.com/brianfrankcooper/YCSB/commit/57e1ab5a0cae13d1766c8fa6bdbf9d9117ee50d0
No share connections: use YCSB build without 
https://github.com/brianfrankcooper/YCSB/commit/57e1ab5a0cae13d1766c8fa6bdbf9d9117ee50d0

The result is: 
(Master + Netty)+  ( RpcClientImpl   + No share connections ) = 500K
(Master + SingleResponder) +  ( RpcClientImpl   + No share connections ) = 
500K

(Master + Netty)+  ( RpcClientImpl   + share connections ) = 500K
(Master + MultiResponder)+  ( RpcClientImpl   + share connections ) = 500K
(Master + SingleResponder) +  ( RpcClientImpl   + share connections ) = 295K

(Master + MultiResponder)  +   ( AsyncRpcClient  + share connections ) = 570K   
(Netty client)
(Master + Netty)  +  (AsyncRpcClient  + share connections ) =  620K  (Netty 
client)

MultiResponder:  see patch 
MultiResponder.master.patch/MultiResponder.branch-1.patch


> Pluggable RpcServer
> ---
>
> Key: HBASE-15756
> URL: https://issues.apache.org/jira/browse/HBASE-15756
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, rpc
>Reporter: binlijin
>Assignee: binlijin
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: Cluster_total_QPS.png, MultiResponder.branch-1.patch, 
> MultiResponder.master.patch, Netty4RpcServer_forperf.patch, 
> NettyRpcServer.patch, NettyRpcServer_forperf.patch, 
> PooledByteBufAllocator.patch, PooledByteBufAllocator2.patch, gc.png, 
> gets.png, gets.png, idle.png, patched.vs.patched_and_cached.vs.no_patch.png, 
> queue.png
>
>
> Currently we use a simple RpcServer and cannot configure or use another 
> implementation. This issue is to make the RpcServer pluggable, so we can provide 
> other implementations, for example a Netty RPC server. A patch will be uploaded later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-15756) Pluggable RpcServer

2016-12-12 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743861#comment-15743861
 ] 

binlijin edited comment on HBASE-15756 at 12/13/16 2:13 AM:


I also tested the Netty4RpcServer perf on the master version; the Netty4RpcServer 
can be found at https://github.com/binlijin/hbase/tree/NettyRpcServer.
1 regionserver, 1 client machine; client and RS on different machines.
Wrote a table with 10M rows, then split it into 32 regions; every row has only 
one cell, keyLength=10B, valueLength=256B.
Only random reads were tested, with all data cached in the LruBlockCache.
Client: YCSB + hbase-1.1.2 client; 16 YCSB processes started on the client machine, 
each with 32 threads.

Share connections: a YCSB build with 
https://github.com/brianfrankcooper/YCSB/commit/57e1ab5a0cae13d1766c8fa6bdbf9d9117ee50d0
No share connections: a YCSB build without that commit.

The results:
(Master + Netty) + (RpcClientImpl + No share connections) = 500K
(Master + SingleResponder) + (RpcClientImpl + No share connections) = 500K

(Master + Netty) + (RpcClientImpl + share connections) = 500K
(Master + MultiResponder) + (RpcClientImpl + share connections) = 500K
(Master + SingleResponder) + (RpcClientImpl + share connections) = 295K

(Master + MultiResponder) + (AsyncRpcClient + share connections) = 570K (Netty client)
(Master + Netty) + (AsyncRpcClient + share connections) = 620K (Netty client)

MultiResponder: see patches MultiResponder.master.patch / MultiResponder.branch-1.patch



was (Author: aoxiang):
And i also test the Netty4RpcServer perf on master version, the Netty4RpcServer 
can get at  https://github.com/binlijin/hbase/tree/NettyRpcServer.
1 regionserver, 1 client machine, client and rs on different machine.
Write a table with 10M rows, then split into 32 regions, every row have only 
one cell, keyLength=10B, valueLength=256B.
Only test random read, and all data cache in LruBlockCache.
Client: YCSB + hbase-1.1.2 client, start 16 YCSB process on client machine, and 
every process have 32 threads.
share connections:  use YCSB build with 
https://github.com/brianfrankcooper/YCSB/commit/57e1ab5a0cae13d1766c8fa6bdbf9d9117ee50d0
No share connections: use YCSB build without 
https://github.com/brianfrankcooper/YCSB/commit/57e1ab5a0cae13d1766c8fa6bdbf9d9117ee50d0

The result is: 
(Master + Netty)+  ( RpcClientImpl   + No share connections ) = 500K
(Master + SingleResponder) +  ( RpcClientImpl   + No share connections ) = 
500K

(Master + Netty)+  ( RpcClientImpl   + share connections ) = 500K
(Master + MultiResponder)+  ( RpcClientImpl   + share connections ) = 500K
(Master + SingleResponder) +  ( RpcClientImpl   + share connections ) = 295K

(Master + MultiResponder)  +   ( AsyncRpcClient  + share connections ) = 570K   
(Netty client)
(Master + Netty)  +  (AsyncRpcClient  + share connections ) =  620K  (Netty 
client)

MultiResponder:  see patch 
MultiResponder.master.patch/MultiResponder.branch-1.patch


> Pluggable RpcServer
> ---
>
> Key: HBASE-15756
> URL: https://issues.apache.org/jira/browse/HBASE-15756
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, rpc
>Reporter: binlijin
>Assignee: binlijin
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: Cluster_total_QPS.png, MultiResponder.branch-1.patch, 
> MultiResponder.master.patch, Netty4RpcServer_forperf.patch, 
> NettyRpcServer.patch, NettyRpcServer_forperf.patch, 
> PooledByteBufAllocator.patch, PooledByteBufAllocator2.patch, gc.png, 
> gets.png, gets.png, idle.png, patched.vs.patched_and_cached.vs.no_patch.png, 
> queue.png
>
>
> Currently we use a simple RpcServer and cannot configure or use another 
> implementation. This issue is to make the RpcServer pluggable, so we can provide 
> other implementations, for example a Netty RPC server. A patch will be uploaded later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15756) Pluggable RpcServer

2016-12-12 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743861#comment-15743861
 ] 

binlijin commented on HBASE-15756:
--

I also tested the Netty4RpcServer perf on the master version; the Netty4RpcServer 
can be found at https://github.com/binlijin/hbase/tree/NettyRpcServer.
1 regionserver, 1 client machine; client and RS on different machines.
Wrote a table with 10M rows, then split it into 32 regions; every row has only 
one cell, keyLength=10B, valueLength=256B.
Only random reads were tested, with all data cached in the LruBlockCache.
Client: YCSB + hbase-1.1.2 client; 16 YCSB processes started on the client machine, 
each with 32 threads.

Share connections: a YCSB build with 
https://github.com/brianfrankcooper/YCSB/commit/57e1ab5a0cae13d1766c8fa6bdbf9d9117ee50d0
No share connections: a YCSB build without that commit.

The results:
(Master + Netty) + (RpcClientImpl + No share connections) = 500K
(Master + SingleResponder) + (RpcClientImpl + No share connections) = 500K

(Master + Netty) + (RpcClientImpl + share connections) = 500K
(Master + MultiResponder) + (RpcClientImpl + share connections) = 500K
(Master + SingleResponder) + (RpcClientImpl + share connections) = 295K

(Master + MultiResponder) + (AsyncRpcClient + share connections) = 570K (Netty client)
(Master + Netty) + (AsyncRpcClient + share connections) = 620K (Netty client)

MultiResponder: see patches MultiResponder.master.patch / MultiResponder.branch-1.patch


> Pluggable RpcServer
> ---
>
> Key: HBASE-15756
> URL: https://issues.apache.org/jira/browse/HBASE-15756
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, rpc
>Reporter: binlijin
>Assignee: binlijin
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: Cluster_total_QPS.png, MultiResponder.branch-1.patch, 
> MultiResponder.master.patch, Netty4RpcServer_forperf.patch, 
> NettyRpcServer.patch, NettyRpcServer_forperf.patch, 
> PooledByteBufAllocator.patch, PooledByteBufAllocator2.patch, gc.png, 
> gets.png, gets.png, idle.png, patched.vs.patched_and_cached.vs.no_patch.png, 
> queue.png
>
>
> Currently we use a simple RpcServer and cannot configure or use another 
> implementation. This issue is to make the RpcServer pluggable, so we can provide 
> other implementations, for example a Netty RPC server. A patch will be uploaded later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17300) Concurrently calling checkAndPut with expected value as null returns true unexpectedly

2016-12-12 Thread Samarth Jain (JIRA)
Samarth Jain created HBASE-17300:


 Summary: Concurrently calling checkAndPut with expected value as 
null returns true unexpectedly
 Key: HBASE-17300
 URL: https://issues.apache.org/jira/browse/HBASE-17300
 Project: HBase
  Issue Type: Bug
Reporter: Samarth Jain


Attached is the test case. I have added some comments so hopefully the test 
makes sense. It actually is causing test failures on the Phoenix branches.
PS - I am using a bit of Phoenix API to get hold of HBaseAdmin. But it should 
be fairly straightforward to adapt it for HBase IT tests. 

The test fails consistently using HBase-0.98.23. It exhibits flappy behavior 
with the 1.2 branch (failed twice in 5 tries). 

{code}
@Test
public void testNullCheckAndPut() throws Exception {
    try (Connection conn = DriverManager.getConnection(getUrl())) {
        try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
            Callable<Boolean> c1 = new CheckAndPutCallable();
            Callable<Boolean> c2 = new CheckAndPutCallable();
            ExecutorService e = Executors.newFixedThreadPool(5);
            Future<Boolean> f1 = e.submit(c1);
            Future<Boolean> f2 = e.submit(c2);
            assertTrue(f1.get() || f2.get());
            assertFalse(f1.get() && f2.get());
        }
    }
}

private static final class CheckAndPutCallable implements Callable<Boolean> {
    @Override
    public Boolean call() throws Exception {
        byte[] rowToLock = "ROW".getBytes();
        byte[] colFamily = "COLUMN_FAMILY".getBytes();
        byte[] column = "COLUMN".getBytes();
        byte[] newValue = "NEW_VALUE".getBytes();
        byte[] oldValue = "OLD_VALUE".getBytes();
        byte[] tableName = "table".getBytes();
        boolean acquired = false;
        try (Connection conn = DriverManager.getConnection(getUrl())) {
            try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
                HTableDescriptor tableDesc = new HTableDescriptor(TableName.valueOf(tableName));
                HColumnDescriptor columnDesc = new HColumnDescriptor(colFamily);
                columnDesc.setTimeToLive(600);
                tableDesc.addFamily(columnDesc);
                try {
                    admin.createTable(tableDesc);
                } catch (TableExistsException e) {
                    // ignore
                }
                try (HTableInterface table = admin.getConnection().getTable(tableName)) {
                    Put put = new Put(rowToLock);
                    put.add(colFamily, column, oldValue); // add a row with column set to oldValue
                    table.put(put);
                    put = new Put(rowToLock);
                    put.add(colFamily, column, newValue);
                    // only one of the threads should be able to get a return value of true for the expected value of oldValue
                    acquired = table.checkAndPut(rowToLock, colFamily, column, oldValue, put);
                    if (!acquired) {
                        // if a thread didn't get true before, then it shouldn't get true this time either
                        // because the column DOES exist
                        acquired = table.checkAndPut(rowToLock, colFamily, column, null, put);
                    }
                }
            }
        }
        return acquired;
    }
}
{code}


cc [~apurtell], [~jamestaylor], [~lhofhansl]. 
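For readers following along, the semantics the test above relies on can be modeled in a few lines of plain Java (a toy in-memory stand-in, not HBase code): checkAndPut must atomically compare the current cell value, with null meaning "cell absent", and write only on a match. Under that contract, exactly one of two racing callers can return true.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicReference;

// Toy model of the semantics HBASE-17300's test expects: checkAndPut must
// atomically compare the current cell value (null meaning "cell absent") and
// write only on a match. Illustrative code, not HBase internals.
public class CheckAndPutModel {
    static final AtomicReference<String> cell = new AtomicReference<>(null);

    // A null expected value means "succeed only if the cell does not exist".
    // Note: compareAndSet uses reference equality, which works here because
    // the String literals involved are interned.
    static boolean checkAndPut(String expected, String newValue) {
        return cell.compareAndSet(expected, newValue);
    }

    public static void main(String[] args) throws Exception {
        cell.set("OLD_VALUE");
        Callable<Boolean> task = () -> {
            boolean acquired = checkAndPut("OLD_VALUE", "NEW_VALUE");
            if (!acquired) {
                // The cell exists now, so a null-expected checkAndPut must also fail.
                acquired = checkAndPut(null, "NEW_VALUE");
            }
            return acquired;
        };
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Boolean> f1 = pool.submit(task);
        Future<Boolean> f2 = pool.submit(task);
        // Exactly one thread should win under atomic check-and-put semantics.
        System.out.println(f1.get() ^ f2.get()); // prints true
        pool.shutdown();
    }
}
```

The bug report is that real HBase sometimes lets the second, null-expected call succeed even though the cell exists, which would make both futures return true and fail the test's assertFalse.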




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15756) Pluggable RpcServer

2016-12-12 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743826#comment-15743826
 ] 

binlijin commented on HBASE-15756:
--

We have run NettyRpcServer in production for two months in Alibaba Search. It uses 
Netty 3, not Netty 4.
The performance improvement can be seen in Cluster_total_QPS.png, which comes 
from our online A/B test cluster (450 physical machines, each with 256G memory + 
64 cores) running real-world workloads.
With SimpleRpcServer the total QPS is less than 20M/s; with NettyRpcServer the 
total QPS is more than 30M/s.

> Pluggable RpcServer
> ---
>
> Key: HBASE-15756
> URL: https://issues.apache.org/jira/browse/HBASE-15756
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, rpc
>Reporter: binlijin
>Assignee: binlijin
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: Cluster_total_QPS.png, MultiResponder.branch-1.patch, 
> MultiResponder.master.patch, Netty4RpcServer_forperf.patch, 
> NettyRpcServer.patch, NettyRpcServer_forperf.patch, 
> PooledByteBufAllocator.patch, PooledByteBufAllocator2.patch, gc.png, 
> gets.png, gets.png, idle.png, patched.vs.patched_and_cached.vs.no_patch.png, 
> queue.png
>
>
> Currently we use a simple RpcServer and cannot configure or use another 
> implementation. This issue is to make the RpcServer pluggable, so we can provide 
> other implementations, for example a Netty RPC server. A patch will be uploaded later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15756) Pluggable RpcServer

2016-12-12 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-15756:
-
Attachment: Cluster_total_QPS.png

> Pluggable RpcServer
> ---
>
> Key: HBASE-15756
> URL: https://issues.apache.org/jira/browse/HBASE-15756
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, rpc
>Reporter: binlijin
>Assignee: binlijin
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: Cluster_total_QPS.png, MultiResponder.branch-1.patch, 
> MultiResponder.master.patch, Netty4RpcServer_forperf.patch, 
> NettyRpcServer.patch, NettyRpcServer_forperf.patch, 
> PooledByteBufAllocator.patch, PooledByteBufAllocator2.patch, gc.png, 
> gets.png, gets.png, idle.png, patched.vs.patched_and_cached.vs.no_patch.png, 
> queue.png
>
>
> Currently we use a simple RpcServer and cannot configure or use another 
> implementation. This issue is to make the RpcServer pluggable, so we can provide 
> other implementations, for example a Netty RPC server. A patch will be uploaded later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17298) remove unused code in HRegion#doMiniBatchMutation

2016-12-12 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-17298:
-
Status: Patch Available  (was: Open)

> remove unused code in HRegion#doMiniBatchMutation
> -
>
> Key: HBASE-17298
> URL: https://issues.apache.org/jira/browse/HBASE-17298
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-17298-master-001.patch
>
>
> In HRegion#doMiniBatchMutation(), there is the following code: 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L3194
> It is not used anymore and can be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17298) remove unused code in HRegion#doMiniBatchMutation

2016-12-12 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-17298:
-
Attachment: HBASE-17298-master-001.patch

> remove unused code in HRegion#doMiniBatchMutation
> -
>
> Key: HBASE-17298
> URL: https://issues.apache.org/jira/browse/HBASE-17298
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-17298-master-001.patch
>
>
> In HRegion#doMiniBatchMutation(), there is the following code: 
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L3194
> It is not used anymore and can be removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16845) Run tests under hbase-spark module in test phase

2016-12-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743574#comment-15743574
 ] 

Ted Yu commented on HBASE-16845:


A pointer to where skipITs appears in the source code would help.

I haven't found it in either the master branch or the 0.98 branch.

> Run tests under hbase-spark module in test phase
> 
>
> Key: HBASE-16845
> URL: https://issues.apache.org/jira/browse/HBASE-16845
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0
>
> Attachments: 16845.v1.txt, 16845.v2.txt
>
>
> [~appy] reported over in the discussion thread titled 'Testing and CI' that 
> tests under hbase-spark module are not run by QA bot.
> This issue is to run the unit tests in test phase so that we detect build 
> breakage early.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17081) Flush the entire CompactingMemStore content to disk

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15743432#comment-15743432
 ] 

Hadoop QA commented on HBASE-17081:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
0s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 19s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 88m 56s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 126m 7s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842871/HBASE-17081-V06.patch 
|
| JIRA Issue | HBASE-17081 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 0c9e31c5ecb3 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / adb319f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4884/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4884/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Flush the entire CompactingMemStore content to disk
> ---
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17081-V01.patch, HBASE-17081-V02.patch, 
> HBASE-17081-V03.patch, HBASE-17081-V04.patch, 

[jira] [Commented] (HBASE-17286) LICENSE.txt in binary tarball contains only ASL text

2016-12-12 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743417#comment-15743417
 ] 

Josh Elser commented on HBASE-17286:


bq. does this happen on branch-1 or earlier

I tested branch-1 and it did not happen there. I have only seen this on master 
so far.

> LICENSE.txt in binary tarball contains only ASL text
> 
>
> Key: HBASE-17286
> URL: https://issues.apache.org/jira/browse/HBASE-17286
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Josh Elser
>Priority: Blocker
> Fix For: 2.0.0
>
>
> Noticed this one today because I needed to make sure LICENSE was getting 
> updated for a patch-in-progress.
> What I'm presently seeing after invoking {{mvn clean package assembly:single 
> -DskipTests -Drat.skip -Prelease}} on master is that the LICENSE.txt file 
> contains only the ASL text (which I know for certain it should contain BSD 
> and MIT as well).
> I checked branch-1.2 which has lots of extra greatness, so it seems like 
> something happened in master which broke this. Filing this now so we can try 
> to bisect and figure out what happened.
> FYI, this is the one I was chatting you about [~busbey].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16820) BulkLoad mvcc visibility only works accidentally

2016-12-12 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743390#comment-15743390
 ] 

Enis Soztutar commented on HBASE-16820:
---

Hey Nick, sorry, the problem has already been fixed via an addendum in 
HBASE-16721 for 1.1. See my comment on 
https://issues.apache.org/jira/browse/HBASE-16721?focusedCommentId=15570346&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15570346.
 

This issue is for a longer-term fix and should not be a blocker for the release. 

> BulkLoad mvcc visibility only works accidentally 
> -
>
> Key: HBASE-16820
> URL: https://issues.apache.org/jira/browse/HBASE-16820
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.8
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>Priority: Blocker
> Fix For: 1.1.8
>
> Attachments: HBASE-16820-branch-1.1-v0.patch
>
>
> [~sergey.soldatov] has been debugging an issue with a 1.1 code base where the 
> commit for HBASE-16721 broke the bulk load visibility. After bulk load, the 
> bulk load files are not visible because the sequence id assigned to the bulk 
> load is not advanced in mvcc. 
> Debugging further, we have noticed that bulk load behavior is wrong, but it 
> works "accidentally" in all code bases (but broken in 1.1 after HBASE-16721). 
> Let me explain: 
>  - BL request can optionally request a flush beforehand (this should be the 
> default) which causes the flush to happen with some sequenceId. The flush 
> sequence id is one past all the cells' sequenceids. This flush sequence id is 
> returned as a result to the flush operation. 
>  - BL then uses this particular sequenceId to mark the files, but itself does 
> not get a new sequenceid of its own, or advance the mvcc number. 
>  - BL completes WITHOUT making sure that the sequence id is visible. 
>  - BL itself though writes entries to the WAL for the BL event, which in 1.2 
> code bases goes through the whole mvcc + seqId paths, which makes sure that 
> earlier sequenceIds (the flush sequenceId) are visible via mvcc. 
> The problem with 1.1 is that the WAL entries only get sequence ids, but do 
> not touch mvcc. With the patch for HBASE-16721, we have made it so that the 
> flushedSequenceId is not used in mvcc as the highest read point (although all 
> the data is still visible).
> BL relying on the flush sequence id is wrong for two reasons: 
>  - BL files are loaded with the flush sequence id from the memstore. This 
> particular sequence id is used twice for two different things and ends up 
> being the sequence id for flushed file as well as BL'ed files. 
>  - BL should make sure that it gets a new sequence id and that sequence id is 
> visible before returning the results. 
> [~ndimiduk] FYI. 
>  
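The broken versus fixed sequencing described above can be modeled in a few lines. This is a toy sketch, not HBase code: the class and method names (Mvcc, bulkLoadReusingFlushId, bulkLoadWithOwnSeqId) are illustrative, and the real mvcc machinery is far more involved. It only shows why reusing the flush sequence id without advancing the read point leaves the files invisible.

```java
import java.util.concurrent.atomic.AtomicLong;

public class MvccSketch {
    static class Mvcc {
        private final AtomicLong seq = new AtomicLong(0);       // WAL sequence ids
        private final AtomicLong readPoint = new AtomicLong(0); // highest visible seq id

        long nextSeqId() { return seq.incrementAndGet(); }

        // Make everything up to seqId visible to readers.
        void advanceTo(long seqId) {
            readPoint.updateAndGet(rp -> Math.max(rp, seqId));
        }

        boolean isVisible(long seqId) { return seqId <= readPoint.get(); }
    }

    // Broken behavior: reuse the flush seq id, never advance mvcc.
    static boolean bulkLoadReusingFlushId(Mvcc mvcc, long flushSeqId) {
        return mvcc.isVisible(flushSeqId); // may be false -> files invisible
    }

    // Fixed behavior: take a fresh seq id and advance mvcc before returning.
    static boolean bulkLoadWithOwnSeqId(Mvcc mvcc) {
        long id = mvcc.nextSeqId();
        mvcc.advanceTo(id);
        return mvcc.isVisible(id); // always true
    }

    public static void main(String[] args) {
        Mvcc mvcc = new Mvcc();
        long flushId = mvcc.nextSeqId(); // flush happened, mvcc not advanced
        System.out.println(bulkLoadReusingFlushId(mvcc, flushId)); // false
        System.out.println(bulkLoadWithOwnSeqId(mvcc));            // true
    }
}
```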





[jira] [Commented] (HBASE-17286) LICENSE.txt in binary tarball contains only ASL text

2016-12-12 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743361#comment-15743361
 ] 

Sean Busbey commented on HBASE-17286:
-

just to confirm, does this happen on branch-1 or earlier?

> LICENSE.txt in binary tarball contains only ASL text
> 
>
> Key: HBASE-17286
> URL: https://issues.apache.org/jira/browse/HBASE-17286
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Josh Elser
>Priority: Blocker
> Fix For: 2.0.0
>
>
> Noticed this one today because I needed to make sure LICENSE was getting 
> updated for a patch-in-progress.
> What I'm presently seeing after invoking {{mvn clean package assembly:single 
> -DskipTests -Drat.skip -Prelease}} on master is that the LICENSE.txt file 
> contains only the ASL text (which I know for certain it should contain BSD 
> and MIT as well).
> I checked branch-1.2 which has lots of extra greatness, so it seems like 
> something happened in master which broke this. Filing this now so we can try 
> to bisect and figure out what happened.
> FYI, this is the one I was chatting you about [~busbey].





[jira] [Commented] (HBASE-17289) Avoid adding a replication peer named "lock"

2016-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743330#comment-15743330
 ] 

Hudson commented on HBASE-17289:


FAILURE: Integrated in Jenkins build HBase-1.4 #563 (See 
[https://builds.apache.org/job/HBase-1.4/563/])
HBASE-17289 Avoid adding a replication peer named "lock" (tedyu: rev 
30576991bcaf6917378117e2ec24e024203b3611)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdmin.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java


> Avoid adding a replication peer named "lock"
> 
>
> Key: HBASE-17289
> URL: https://issues.apache.org/jira/browse/HBASE-17289
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.3.0, 1.4.0, 1.1.7, 0.98.23, 1.2.4
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Fix For: 1.4.0
>
> Attachments: HBASE-17289-branch-1.1.patch, 
> HBASE-17289-branch-1.2.patch, HBASE-17289-branch-1.3.patch, 
> HBASE-17289-branch-1.patch
>
>
> When the zk-based replication queue is used and useMulti is false, the steps to 
> transfer replication queues are: first add a lock, then copy the nodes, and 
> finally clean up the old queue and the lock. The default lock znode's name is 
> "lock", so we should avoid adding a peer named "lock". 
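The guard the patch adds can be sketched as a simple check at peer-registration time. This is a hedged illustration, not the actual HBase API: checkPeerId and the constant name are made up for the example; only the "lock" collision itself comes from the issue.

```java
public class PeerIdCheck {
    static final String RS_LOCK_ZNODE = "lock"; // default lock znode name

    // Reject a peer id that collides with the queue-transfer lock znode.
    static void checkPeerId(String peerId) {
        if (RS_LOCK_ZNODE.equals(peerId)) {
            throw new IllegalArgumentException(
                "Peer id '" + peerId + "' clashes with the queue-transfer lock znode");
        }
    }

    public static void main(String[] args) {
        checkPeerId("peer1"); // fine
        try {
            checkPeerId("lock");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```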





[jira] [Updated] (HBASE-17286) LICENSE.txt in binary tarball contains only ASL text

2016-12-12 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-17286:

Component/s: community

> LICENSE.txt in binary tarball contains only ASL text
> 
>
> Key: HBASE-17286
> URL: https://issues.apache.org/jira/browse/HBASE-17286
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Josh Elser
>Priority: Blocker
> Fix For: 2.0.0
>
>
> Noticed this one today because I needed to make sure LICENSE was getting 
> updated for a patch-in-progress.
> What I'm presently seeing after invoking {{mvn clean package assembly:single 
> -DskipTests -Drat.skip -Prelease}} on master is that the LICENSE.txt file 
> contains only the ASL text (which I know for certain it should contain BSD 
> and MIT as well).
> I checked branch-1.2 which has lots of extra greatness, so it seems like 
> something happened in master which broke this. Filing this now so we can try 
> to bisect and figure out what happened.
> FYI, this is the one I was chatting you about [~busbey].





[jira] [Commented] (HBASE-17294) External Configuration for Memory Compaction

2016-12-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743225#comment-15743225
 ] 

stack commented on HBASE-17294:
---

Hey [~eshcar]. On the patch:

HColumnDescriptor is a public-facing class.  The enums in MemoryCompaction and 
the new methods need javadoc. This will be where operators configure Accordion 
so needs a bit of heft I'd say, more than is here.

It's ok to remove these defines because they have not been part of a release yet?

// Configuration options for MemStore compaction
static final String INDEX_COMPACTION_CONFIG = "index-compaction";
static final String DATA_COMPACTION_CONFIG  = "data-compaction";

// The external setting of the compacting MemStore behaviour
// Compaction of the index without the data is the default
static final String COMPACTING_MEMSTORE_TYPE_KEY =
    "hbase.hregion.compacting.memstore.type";
static final String COMPACTING_MEMSTORE_TYPE_DEFAULT =
    INDEX_COMPACTION_CONFIG;


Should the String.valueOf(BASIC) in the below be 
COMPACTING_MEMSTORE_TYPE_DEFAULT?

String compType = compactingMemStore.getConfiguration().get(
    CompactingMemStore.COMPACTING_MEMSTORE_TYPE_KEY,
    String.valueOf(BASIC));
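The review point above is that the lookup and the default constant should stay in sync. A minimal illustration, with a plain Map standing in for Hadoop's Configuration and the NONE/BASIC/EAGER values taken from the patch discussion (the mapping of the default to BASIC here is an assumption for the example):

```java
import java.util.HashMap;
import java.util.Map;

public class MemStoreTypeConfig {
    enum MemoryCompaction { NONE, BASIC, EAGER }

    static final String COMPACTING_MEMSTORE_TYPE_KEY =
        "hbase.hregion.compacting.memstore.type";
    // Using one shared default constant keeps tests and production aligned.
    static final String COMPACTING_MEMSTORE_TYPE_DEFAULT =
        MemoryCompaction.BASIC.name();

    // Equivalent of conf.get(KEY, COMPACTING_MEMSTORE_TYPE_DEFAULT).
    static MemoryCompaction get(Map<String, String> conf) {
        return MemoryCompaction.valueOf(
            conf.getOrDefault(COMPACTING_MEMSTORE_TYPE_KEY,
                              COMPACTING_MEMSTORE_TYPE_DEFAULT));
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(get(conf));                   // BASIC (the default)
        conf.put(COMPACTING_MEMSTORE_TYPE_KEY, "EAGER");
        System.out.println(get(conf));                   // EAGER
    }
}
```

If the default were inlined as String.valueOf(BASIC) at the call site, changing the constant later would silently leave that call site on the old value.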


Yeah,  no to * imports. We don't do that.

You don't want to take arg here?

Before:
    if(opts.inMemoryCompaction) {
      family.setInMemoryCompaction(true);
    }
    desc.addFamily(family);

After:
    if(opts.inMemoryCompaction) {
      family.setInMemoryCompaction(HColumnDescriptor.MemoryCompaction.BASIC);
    }
    desc.addFamily(family);

... i.e. let user set NONE, BASIC, EAGER? Would help testing the options.

On setting NONE in tests, is that because test fail otherwise? Would be cool to 
parameterize tests so they ran three times -- NONE, BASIC, and EAGER -- but 
that can be another issue.

I haven't tried it, but does it allow me to set any of the three options from the 
shell?

Before:
    family.setInMemoryCompaction(
      JBoolean.valueOf(arg.delete(org.apache.hadoop.hbase.HColumnDescriptor::IN_MEMORY_COMPACTION))) if arg.include?(org.apache.hadoop.hbase.HColumnDescriptor::IN_MEMORY_COMPACTION)

After:
    family.setInMemoryCompaction(
      org.apache.hadoop.hbase.HColumnDescriptor.MemoryCompaction.valueOf(arg.delete(org.apache.hadoop.hbase.HColumnDescriptor::IN_MEMORY_COMPACTION))) if arg.include?(org.apache.hadoop.hbase.HColumnDescriptor::IN_MEMORY_COMPACTION)

... or do you need to add some new defines?

Thanks [~eshcar] 



> External Configuration for Memory Compaction 
> -
>
> Key: HBASE-17294
> URL: https://issues.apache.org/jira/browse/HBASE-17294
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-17294-V01.patch, HBASE-17294-V02.patch
>
>
> We would like to have a single external knob to control memstore compaction.
> Possible memstore compaction policies are none, basic, and eager.
> This sub-task allows to set this property at the column family level at table 
> creation time:
> {code}
> create ‘’,
>{NAME => ‘’, 
> IN_MEMORY_COMPACTION => ‘’}
> {code}
> or to set this at the global configuration level by setting the property in 
> hbase-site.xml, with BASIC being the default value:
> {code}
> <property>
>   <name>hbase.hregion.compacting.memstore.type</name>
>   <value></value>
> </property>
> {code}
> The values used in this property can change as memstore compaction policies 
> evolve over time.





[jira] [Commented] (HBASE-14160) backport hbase-spark module to branch-1

2016-12-12 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743220#comment-15743220
 ] 

Sean Busbey commented on HBASE-14160:
-

they're linked from this ticket as one of "depends upon" or "is blocked by"

> backport hbase-spark module to branch-1
> ---
>
> Key: HBASE-14160
> URL: https://issues.apache.org/jira/browse/HBASE-14160
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 1.3.0
>Reporter: Sean Busbey
>Assignee: Ted Yu
> Fix For: 1.4.0
>
> Attachments: 14160.branch-1.v1.txt
>
>
> Once the hbase-spark module gets polished a bit, we should backport to 
> branch-1 so we can publish it sooner.
> needs (per previous discussion):
> * examples refactored into different module
> * user facing documentation
> * defined public API





[jira] [Commented] (HBASE-17297) Single Filter in parenthesis cannot be parsed correctly

2016-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743219#comment-15743219
 ] 

Hudson commented on HBASE-17297:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2121 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2121/])
HBASE-17297 Single Filter in parenthesis cannot be parsed correctly (tedyu: rev 
adb319f5c23e8813ac8cc5a27a483caf825dfdda)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestParseFilter.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java


> Single Filter in parenthesis cannot be parsed correctly
> ---
>
> Key: HBASE-17297
> URL: https://issues.apache.org/jira/browse/HBASE-17297
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Reporter: Xuesen Liang
>Assignee: Xuesen Liang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 17297.master.patch
>
>
> {code}
> hbase(main):010:0* put 'testtable', 'row1', 'cf1:a', 'ab'
> 0 row(s) in 0.1800 seconds
> hbase(main):011:0> scan 'testtable'
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):012:0> scan 'testtable', FILTER=>"(ValueFilter(=,'binary:ab'))"
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):013:0* scan 'testtable', FILTER=>"(ValueFilter(=,'binary:x') AND 
> ValueFilter(=,'binary:y')) OR ValueFilter(=,'binary:ab')"
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):014:0> scan 'testtable', FILTER=>"(ValueFilter(=,'binary:x') AND 
> (ValueFilter(=,'binary:y'))) OR ValueFilter(=,'binary:ab')"
> ROW  COLUMN+CELL
> 0 row(s) in 0.0100 seconds
> {code}
> In the last scan, we should have got a row.
> The root cause is that the last filter is parsed incorrectly.
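The parsing problem above is about a filter expression fully wrapped in balanced parentheses. A minimal sketch of the behavior the fix needs (this is not ParseFilter itself; unwrap and encloses are illustrative names): strip parentheses only when the opening one at position 0 matches the final closing one.

```java
public class ParenUnwrap {
    // Strip parentheses while they enclose the whole expression.
    static String unwrap(String s) {
        s = s.trim();
        while (s.startsWith("(") && s.endsWith(")") && encloses(s)) {
            s = s.substring(1, s.length() - 1).trim();
        }
        return s;
    }

    // True if the '(' at index 0 matches the final ')'.
    static boolean encloses(String s) {
        int depth = 0;
        for (int i = 0; i < s.length(); i++) {
            if (s.charAt(i) == '(') depth++;
            else if (s.charAt(i) == ')') {
                depth--;
                if (depth == 0) return i == s.length() - 1;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Wrapped single filter: parses the same as the unwrapped string.
        System.out.println(unwrap("(ValueFilter(=,'binary:ab'))"));
        // Leading '(' that does NOT enclose everything must stay.
        System.out.println(unwrap("(A AND (B)) OR C"));
    }
}
```

The failing shell example corresponds to the second case: a parenthesized sub-expression nested inside a larger boolean expression must not be confused with a fully wrapped filter.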





[jira] [Commented] (HBASE-17142) Implement multi get

2016-12-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743146#comment-15743146
 ] 

stack commented on HBASE-17142:
---

Sound like fixes (smile).

> Implement multi get
> ---
>
> Key: HBASE-17142
> URL: https://issues.apache.org/jira/browse/HBASE-17142
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17142.patch
>
>






[jira] [Commented] (HBASE-17081) Flush the entire CompactingMemStore content to disk

2016-12-12 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743139#comment-15743139
 ] 

Anastasia Braginsky commented on HBASE-17081:
-

Published slides on the meetup page. Thanks, [~stack]!

> Flush the entire CompactingMemStore content to disk
> ---
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17081-V01.patch, HBASE-17081-V02.patch, 
> HBASE-17081-V03.patch, HBASE-17081-V04.patch, HBASE-17081-V05.patch, 
> HBASE-17081-V06.patch, HBASE-17081-V06.patch, 
> HBaseMeetupDecember2016-V02.pptx, Pipelinememstore_fortrunk_3.patch
>
>
> Part of CompactingMemStore's memory is held by an active segment, and another 
> part is divided between immutable segments in the compacting pipeline. Upon 
> flush-to-disk request we want to flush all of it to disk, in contrast to 
> flushing only tail of the compacting pipeline.





[jira] [Commented] (HBASE-17285) Misconfiguration of JVM GC options in HADOOP_CLIENT_OPTS may break `bin/hbase`

2016-12-12 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743110#comment-15743110
 ] 

Josh Elser commented on HBASE-17285:


bq. Unfortunately, all of the _OPTS handling in most of the Hadoop ecosystem 
scripts I've looked at do very bad things and are pretty much dependent upon 
using space delimiters. This means no, folks can't properly quote it in scripts 
and there are some limitations on these values. This obviously causes other 
problems (the biggest one probably being the inability to use directory paths 
with spaces) which is why shellcheck is throwing a fit.

Ok, I'm not completely off my rocker :)

bq. The only real solution I've found is to convert them all to arrays...

Awesome, thanks for the tip! I'll try to take a look at what you've done 
upstream and try to clean things up downstream. Thanks for the quick review!

> Misconfiguration of JVM GC options in HADOOP_CLIENT_OPTS may break `bin/hbase`
> --
>
> Key: HBASE-17285
> URL: https://issues.apache.org/jira/browse/HBASE-17285
> Project: HBase
>  Issue Type: Bug
>  Components: scripts
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17285.001.patch
>
>
> Had the great fun of digging through this one. Had a user reporting that 
> hiveserver2 was no longer finding HBase jars on the classpath. This is 
> supposed to happen via {{hbase mapredcp}}.
> It turned out that they had configured hive-env.sh to set 
> {{HADOOP_CLIENT_OPTS="-XX:+PrintGCDetails"}} (among other things), which 
> creates a big multi-line string instead of just a directory. Because of poor 
> quoting in {{bin/hbase}}, this gives you a wonderfully intuitive error:
> {noformat}
> Error: Could not find or load main class Heap
> {noformat}
> That {{Heap}} is actually from the JVM GC details that it was told to print. 
> While I don't expect this to be a common problem people run into, it's one 
> that we can address with better quoting. e.g.
> {noformat}
> + exec 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/bin/java 
> -Dproc_mapredcp '-XX:OnOutOfMemoryError=kill -9 %p' -XX:+UseConcMarkSweepGC 
> -Dhbase.log.dir=/usr/local/lib/hbase//logs -Dhbase.log.file=hbase.log 
> -Dhbase.home.dir=/usr/local/lib/hbase/ -Dhbase.id.str= 
> -Dhbase.root.logger=INFO,console 
> '-Djava.library.path='\''/usr/local/lib/hadoop//lib/native' Heap PSYoungGen 
> total 76800K, used 7942K '[0x0007f550,' 0x0007faa8, 
> '0x0008)' eden space 66048K, 12% used 
> '[0x0007f550,0x0007f5cc19c0,0x0007f958)' from space 
> 10752K, 0% used '[0x0007fa00,0x0007fa00,0x0007faa8)' 
> to space 10752K, 0% used 
> '[0x0007f958,0x0007f958,0x0007fa00)' ParOldGen total 
> 174592K, used 0K '[0x0007e000,' 0x0007eaa8, 
> '0x0007f550)' object space 174592K, 0% used 
> '[0x0007e000,0x0007e000,0x0007eaa8)' PSPermGen total 
> 21504K, used 2756K '[0x0007dae0,' 0x0007dc30, 
> '0x0007e000)' object space 21504K, 12% used 
> '[0x0007dae0,0x0007db0b11b8,0x0007dc30)'\''' 
> -Dhbase.security.logger=INFO,NullAppender 
> org.apache.hadoop.hbase.util.MapreduceDependencyClasspathTool
> {noformat}





[jira] [Assigned] (HBASE-17025) [Shell] Support space quota get/set via the shell

2016-12-12 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned HBASE-17025:
--

Assignee: Josh Elser

> [Shell] Support space quota get/set via the shell
> -
>
> Key: HBASE-17025
> URL: https://issues.apache.org/jira/browse/HBASE-17025
> Project: HBase
>  Issue Type: Sub-task
>  Components: shell
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HBASE-17025.001.patch
>
>
> Need to make sure that admins can use the shell to get/set the new space 
> quotas.





[jira] [Commented] (HBASE-17081) Flush the entire CompactingMemStore content to disk

2016-12-12 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743096#comment-15743096
 ] 

stack commented on HBASE-17081:
---

Paste your slides up on the meetup page [~anastas]?

> Flush the entire CompactingMemStore content to disk
> ---
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17081-V01.patch, HBASE-17081-V02.patch, 
> HBASE-17081-V03.patch, HBASE-17081-V04.patch, HBASE-17081-V05.patch, 
> HBASE-17081-V06.patch, HBASE-17081-V06.patch, 
> HBaseMeetupDecember2016-V02.pptx, Pipelinememstore_fortrunk_3.patch
>
>
> Part of CompactingMemStore's memory is held by an active segment, and another 
> part is divided between immutable segments in the compacting pipeline. Upon 
> flush-to-disk request we want to flush all of it to disk, in contrast to 
> flushing only tail of the compacting pipeline.





[jira] [Updated] (HBASE-17081) Flush the entire CompactingMemStore content to disk

2016-12-12 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17081:
--
Attachment: HBASE-17081-V06.patch

> Flush the entire CompactingMemStore content to disk
> ---
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17081-V01.patch, HBASE-17081-V02.patch, 
> HBASE-17081-V03.patch, HBASE-17081-V04.patch, HBASE-17081-V05.patch, 
> HBASE-17081-V06.patch, HBASE-17081-V06.patch, 
> HBaseMeetupDecember2016-V02.pptx, Pipelinememstore_fortrunk_3.patch
>
>
> Part of CompactingMemStore's memory is held by an active segment, and another 
> part is divided between immutable segments in the compacting pipeline. Upon 
> flush-to-disk request we want to flush all of it to disk, in contrast to 
> flushing only tail of the compacting pipeline.





[jira] [Created] (HBASE-17299) Add integration tests for FavoredNodes feature

2016-12-12 Thread Thiruvel Thirumoolan (JIRA)
Thiruvel Thirumoolan created HBASE-17299:


 Summary: Add integration tests for FavoredNodes feature
 Key: HBASE-17299
 URL: https://issues.apache.org/jira/browse/HBASE-17299
 Project: HBase
  Issue Type: Sub-task
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan


The tests will write to tables and check whether the store files' block 
locations match the favored nodes for that region.





[jira] [Updated] (HBASE-17281) FN should use datanode port from hdfs configuration

2016-12-12 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HBASE-17281:
-
Attachment: HBASE-17281.master.002.patch

> FN should use datanode port from hdfs configuration
> ---
>
> Key: HBASE-17281
> URL: https://issues.apache.org/jira/browse/HBASE-17281
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17281.master.001.patch, 
> HBASE-17281.master.002.patch
>
>
> Currently we use the ServerName port for providing favored node hints. We 
> should use the DN port from hdfs-site.xml instead to avoid warning messages 
> in region server logs. The warnings will be from this section of HDFS code, 
> it moves across classes.
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1758
> {code}
>   private boolean[] getPinnings(DatanodeInfo[] nodes) {
> if (favoredNodes == null) {
>   return null;
> } else {
>   boolean[] pinnings = new boolean[nodes.length];
>   HashSet<String> favoredSet = new HashSet<>(Arrays.asList(favoredNodes));
>   for (int i = 0; i < nodes.length; i++) {
> pinnings[i] = favoredSet.remove(nodes[i].getXferAddrWithHostname());
> LOG.debug("{} was chosen by name node (favored={}).",
> nodes[i].getXferAddrWithHostname(), pinnings[i]);
>   }
>   if (!favoredSet.isEmpty()) {
> // There is one or more favored nodes that were not allocated.
> LOG.warn("These favored nodes were specified but not chosen: "
> + favoredSet + " Specified favored nodes: "
> + Arrays.toString(favoredNodes));
>   }
>   return pinnings;
> }
>   }
> {code}
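The getPinnings snippet above shows why the port matters: HDFS pins a favored node only when the hint matches the datanode's transfer address string exactly. A small model of that matching (plain strings stand in for getXferAddrWithHostname(); the ports 16020 and 50010 are the usual RS and DN defaults, used here as illustrative values):

```java
import java.util.Arrays;
import java.util.HashSet;

public class FavoredNodeMatch {
    // Mirrors the set-removal matching done in DataStreamer#getPinnings.
    static boolean[] pinnings(String[] chosenNodes, String[] favoredNodes) {
        HashSet<String> favored = new HashSet<>(Arrays.asList(favoredNodes));
        boolean[] pins = new boolean[chosenNodes.length];
        for (int i = 0; i < chosenNodes.length; i++) {
            pins[i] = favored.remove(chosenNodes[i]);
        }
        // Anything left in 'favored' triggers the warning quoted above.
        return pins;
    }

    public static void main(String[] args) {
        String[] chosen = { "host1:50010", "host2:50010" }; // DN xfer addresses
        // Hint built with the RS port never matches -> warning, no pinning.
        System.out.println(Arrays.toString(
            pinnings(chosen, new String[] { "host1:16020" })));
        // Hint built with the DN port from hdfs-site.xml matches.
        System.out.println(Arrays.toString(
            pinnings(chosen, new String[] { "host1:50010" })));
    }
}
```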





[jira] [Commented] (HBASE-17100) Implement Chore to sync FN info from Master to RegionServers

2016-12-12 Thread Thiruvel Thirumoolan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15743059#comment-15743059
 ] 

Thiruvel Thirumoolan commented on HBASE-17100:
--

I missed it in the design doc since it was pretty small. I updated the design 
doc @ 
https://docs.google.com/document/d/1LruSkA2y7QgxfvjgUG69-3Lqh80PhFk1BkxAg5CHyJU/edit#heading=h.gwee35kqe9p9

Since redistribute/complete_redistribute/remove_fn all change the FN for 
regions, it's likely that one or more RPC syncs to the region servers will fail. 
So this chore keeps the FN info in all the region servers updated.

> Implement Chore to sync FN info from Master to RegionServers
> 
>
> Key: HBASE-17100
> URL: https://issues.apache.org/jira/browse/HBASE-17100
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0
>
> Attachments: HBASE-17100.master.001.patch, HBASE_17100_draft.patch
>
>
> Master will have a repair chore which will periodically sync fn information 
> from master to all the region servers. This will protect against rpc failures.
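The repair chore described above can be sketched with a scheduled task: push the mapping to every region server on each run, so a sync that failed once is retried on the next run. This is a hedged sketch only; ScheduledExecutorService stands in for the master's chore service, and start/syncTo are illustrative names, not the HBase API.

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class FnSyncChore {
    static ScheduledFuture<?> start(ScheduledExecutorService pool,
                                    List<String> regionServers,
                                    Consumer<String> syncTo,
                                    long periodMillis) {
        return pool.scheduleAtFixedRate(() -> {
            for (String rs : regionServers) {
                try {
                    syncTo.accept(rs); // one sync RPC per region server
                } catch (RuntimeException e) {
                    // A failed sync is simply retried on the next chore run.
                }
            }
        }, 0, periodMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(2);
        start(pool, List.of("rs1", "rs2"), rs -> done.countDown(), 50);
        System.out.println(done.await(5, TimeUnit.SECONDS)); // true once both synced
        pool.shutdownNow();
    }
}
```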





[jira] [Updated] (HBASE-17297) Single Filter in parenthesis cannot be parsed correctly

2016-12-12 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17297:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.4.0
   2.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Xuesen

> Single Filter in parenthesis cannot be parsed correctly
> ---
>
> Key: HBASE-17297
> URL: https://issues.apache.org/jira/browse/HBASE-17297
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Reporter: Xuesen Liang
>Assignee: Xuesen Liang
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 17297.master.patch
>
>
> {code}
> hbase(main):010:0* put 'testtable', 'row1', 'cf1:a', 'ab'
> 0 row(s) in 0.1800 seconds
> hbase(main):011:0> scan 'testtable'
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):012:0> scan 'testtable', FILTER=>"(ValueFilter(=,'binary:ab'))"
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):013:0* scan 'testtable', FILTER=>"(ValueFilter(=,'binary:x') AND 
> ValueFilter(=,'binary:y')) OR ValueFilter(=,'binary:ab')"
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):014:0> scan 'testtable', FILTER=>"(ValueFilter(=,'binary:x') AND 
> (ValueFilter(=,'binary:y'))) OR ValueFilter(=,'binary:ab')"
> ROW  COLUMN+CELL
> 0 row(s) in 0.0100 seconds
> {code}
> In the last scan, we should have got a row.
> The root cause is that the last filter is parsed incorrectly.





[jira] [Assigned] (HBASE-17297) Single Filter in parenthesis cannot be parsed correctly

2016-12-12 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-17297:
--

Assignee: Xuesen Liang

> Single Filter in parenthesis cannot be parsed correctly
> ---
>
> Key: HBASE-17297
> URL: https://issues.apache.org/jira/browse/HBASE-17297
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Reporter: Xuesen Liang
>Assignee: Xuesen Liang
> Attachments: 17297.master.patch
>
>
> {code}
> hbase(main):010:0* put 'testtable', 'row1', 'cf1:a', 'ab'
> 0 row(s) in 0.1800 seconds
> hbase(main):011:0> scan 'testtable'
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):012:0> scan 'testtable', FILTER=>"(ValueFilter(=,'binary:ab'))"
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):013:0* scan 'testtable', FILTER=>"(ValueFilter(=,'binary:x') AND 
> ValueFilter(=,'binary:y')) OR ValueFilter(=,'binary:ab')"
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):014:0> scan 'testtable', FILTER=>"(ValueFilter(=,'binary:x') AND 
> (ValueFilter(=,'binary:y'))) OR ValueFilter(=,'binary:ab')"
> ROW  COLUMN+CELL
> 0 row(s) in 0.0100 seconds
> {code}
> In the last scan, we should have got a row.
> The root cause is that the last filter is parsed incorrectly.





[jira] [Created] (HBASE-17298) remove unused code in HRegion#doMiniBatchMutation

2016-12-12 Thread huaxiang sun (JIRA)
huaxiang sun created HBASE-17298:


 Summary: remove unused code in HRegion#doMiniBatchMutation
 Key: HBASE-17298
 URL: https://issues.apache.org/jira/browse/HBASE-17298
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 2.0.0
Reporter: huaxiang sun
Assignee: huaxiang sun
Priority: Minor


In HRegion#doMiniBatchMutation(), there is the following code 

https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L3194

which is not used anymore. It can be removed.





[jira] [Commented] (HBASE-17297) Single Filter in parenthesis cannot be parsed correctly

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742945#comment-15742945
 ] 

Hadoop QA commented on HBASE-17297:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 23s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
28s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 0s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 2s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 88m 40s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
30s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 129m 31s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842821/17297.master.patch |
| JIRA Issue | HBASE-17297 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux ba249ead1316 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 1615f45 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4883/testReport/ |
| modules | C: hbase-client hbase-server U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4883/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Single Filter in parenthesis cannot be parsed correctly

[jira] [Resolved] (HBASE-16115) Missing security context in RegionObserver coprocessor when a compaction/split is triggered manually

2016-12-12 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-16115.
---
Resolution: Won't Fix

> Missing security context in RegionObserver coprocessor when a 
> compaction/split is triggered manually
> 
>
> Key: HBASE-16115
> URL: https://issues.apache.org/jira/browse/HBASE-16115
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.20
>Reporter: Lars Hofhansl
>
> We ran into an interesting phenomenon which can easily render a cluster 
> unusable.
> We loaded some test data into a test table and forced a manual compaction 
> through the UI. We have some compaction hooks implemented in a region 
> observer, which writes back to another HBase table when the compaction 
> finishes. We noticed that this coprocessor is not set up correctly; it seems 
> the security context is missing.
> The interesting part is that this _only_ happens when the compaction is 
> triggered through the UI. Automatic compactions (major or minor), or 
> compactions triggered via the HBase shell (following a kinit), work fine. Only 
> the UI-triggered compactions cause this issue and lead to essentially 
> never-ending compactions, immovable regions, etc.
> Not sure what exactly the issue is, but I wanted to make sure I capture this.
> [~apurtell], [~ghelmling], FYI.
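One plausible shape of the underlying problem (an assumption, not a confirmed diagnosis): the UI-triggered compaction is submitted from a servlet thread that never captured the requester's security context, so the coprocessor later runs without it. The generic remedy is to capture the submitter's identity when the work is created and re-install it around the task. The sketch below uses a plain ThreadLocal stand-in rather than Hadoop's UserGroupInformation:

```java
// Hedged sketch of context propagation across threads. "CONTEXT" is a
// stand-in for the caller's security identity, not the Hadoop UGI API.
public class ContextPropagation {
    static final ThreadLocal<String> CONTEXT =
            ThreadLocal.withInitial(() -> "anonymous");

    // Capture the submitter's identity and re-install it around the task,
    // restoring whatever identity the worker thread had before.
    static Runnable withCallerContext(Runnable task) {
        final String caller = CONTEXT.get();
        return () -> {
            String previous = CONTEXT.get();
            CONTEXT.set(caller);
            try {
                task.run();
            } finally {
                CONTEXT.set(previous);
            }
        };
    }
}
```

In real code the equivalent would be wrapping the compaction request in the caller's privileged context before handing it to the compaction thread pool.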





[jira] [Commented] (HBASE-14160) backport hbase-spark module to branch-1

2016-12-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742707#comment-15742707
 ] 

Ted Yu commented on HBASE-14160:


Apart from HBASE-16246, HBASE-14375 and HBASE-14161, what else needs to be 
addressed?

> backport hbase-spark module to branch-1
> ---
>
> Key: HBASE-14160
> URL: https://issues.apache.org/jira/browse/HBASE-14160
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 1.3.0
>Reporter: Sean Busbey
>Assignee: Ted Yu
> Fix For: 1.4.0
>
> Attachments: 14160.branch-1.v1.txt
>
>
> Once the hbase-spark module gets polished a bit, we should backport to 
> branch-1 so we can publish it sooner.
> needs (per previous discussion):
> * examples refactored into different module
> * user facing documentation
> * defined public API





[jira] [Commented] (HBASE-14160) backport hbase-spark module to branch-1

2016-12-12 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742692#comment-15742692
 ] 

Sean Busbey commented on HBASE-14160:
-

-1 there are 5 listed prerequisite issues that are all still open. please 
address them prior to backporting.

> backport hbase-spark module to branch-1
> ---
>
> Key: HBASE-14160
> URL: https://issues.apache.org/jira/browse/HBASE-14160
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 1.3.0
>Reporter: Sean Busbey
>Assignee: Ted Yu
> Fix For: 1.4.0
>
> Attachments: 14160.branch-1.v1.txt
>
>
> Once the hbase-spark module gets polished a bit, we should backport to 
> branch-1 so we can publish it sooner.
> needs (per previous discussion):
> * examples refactored into different module
> * user facing documentation
> * defined public API





[jira] [Updated] (HBASE-14160) backport hbase-spark module to branch-1

2016-12-12 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14160:
---
Attachment: 14160.branch-1.v1.txt

> backport hbase-spark module to branch-1
> ---
>
> Key: HBASE-14160
> URL: https://issues.apache.org/jira/browse/HBASE-14160
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 1.3.0
>Reporter: Sean Busbey
>Assignee: Ted Yu
> Fix For: 1.4.0
>
> Attachments: 14160.branch-1.v1.txt
>
>
> Once the hbase-spark module gets polished a bit, we should backport to 
> branch-1 so we can publish it sooner.
> needs (per previous discussion):
> * examples refactored into different module
> * user facing documentation
> * defined public API





[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2016-12-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742667#comment-15742667
 ] 

Ted Yu commented on HBASE-16179:


Can I get some review to move this forward?

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 16179.v0.txt, 16179.v1.txt, 16179.v1.txt, 16179.v10.txt, 
> 16179.v11.txt, 16179.v12.txt, 16179.v12.txt, 16179.v12.txt, 16179.v13.txt, 
> 16179.v15.txt, 16179.v16.txt, 16179.v18.txt, 16179.v19.txt, 16179.v19.txt, 
> 16179.v4.txt, 16179.v5.txt, 16179.v7.txt, 16179.v8.txt, 16179.v9.txt
>
>
> I tried building hbase-spark module against Spark-2.0 snapshot and got the 
> following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> hbase-spark module should not depend on such classes.





[jira] [Assigned] (HBASE-14160) backport hbase-spark module to branch-1

2016-12-12 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-14160:
--

Assignee: Ted Yu  (was: Sean Busbey)

> backport hbase-spark module to branch-1
> ---
>
> Key: HBASE-14160
> URL: https://issues.apache.org/jira/browse/HBASE-14160
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 1.3.0
>Reporter: Sean Busbey
>Assignee: Ted Yu
> Fix For: 1.4.0
>
>
> Once the hbase-spark module gets polished a bit, we should backport to 
> branch-1 so we can publish it sooner.
> needs (per previous discussion):
> * examples refactored into different module
> * user facing documentation
> * defined public API





[jira] [Commented] (HBASE-14160) backport hbase-spark module to branch-1

2016-12-12 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742635#comment-15742635
 ] 

Sean Busbey commented on HBASE-14160:
-

I am not actively working on any of the current 5 blocking tickets.

> backport hbase-spark module to branch-1
> ---
>
> Key: HBASE-14160
> URL: https://issues.apache.org/jira/browse/HBASE-14160
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 1.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 1.4.0
>
>
> Once the hbase-spark module gets polished a bit, we should backport to 
> branch-1 so we can publish it sooner.
> needs (per previous discussion):
> * examples refactored into different module
> * user facing documentation
> * defined public API





[jira] [Commented] (HBASE-17297) Single Filter in parenthesis cannot be parsed correctly

2016-12-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742549#comment-15742549
 ] 

Ted Yu commented on HBASE-17297:


lgtm

> Single Filter in parenthesis cannot be parsed correctly
> ---
>
> Key: HBASE-17297
> URL: https://issues.apache.org/jira/browse/HBASE-17297
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Reporter: Xuesen Liang
> Attachments: 17297.master.patch
>
>
> {code}
> hbase(main):010:0* put 'testtable', 'row1', 'cf1:a', 'ab'
> 0 row(s) in 0.1800 seconds
> hbase(main):011:0> scan 'testtable'
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):012:0> scan 'testtable', FILTER=>"(ValueFilter(=,'binary:ab'))"
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):013:0* scan 'testtable', FILTER=>"(ValueFilter(=,'binary:x') AND 
> ValueFilter(=,'binary:y')) OR ValueFilter(=,'binary:ab')"
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):014:0> scan 'testtable', FILTER=>"(ValueFilter(=,'binary:x') AND 
> (ValueFilter(=,'binary:y'))) OR ValueFilter(=,'binary:ab')"
> ROW  COLUMN+CELL
> 0 row(s) in 0.0100 seconds
> {code}
> In the last scan, we should get a row.
> The root cause is that the last filter is parsed incorrectly.





[jira] [Updated] (HBASE-17289) Avoid adding a replication peer named "lock"

2016-12-12 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17289:
---
Fix Version/s: 1.4.0

Waiting for branch-1.3 to open.

> Avoid adding a replication peer named "lock"
> 
>
> Key: HBASE-17289
> URL: https://issues.apache.org/jira/browse/HBASE-17289
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.3.0, 1.4.0, 1.1.7, 0.98.23, 1.2.4
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Fix For: 1.4.0
>
> Attachments: HBASE-17289-branch-1.1.patch, 
> HBASE-17289-branch-1.2.patch, HBASE-17289-branch-1.3.patch, 
> HBASE-17289-branch-1.patch
>
>
> When the zk-based replication queue is used and useMulti is false, the steps 
> for transferring replication queues are: first add a lock, then copy the 
> nodes, and finally clean up the old queue and the lock. Since the default 
> lock znode's name is "lock", we should avoid adding a peer named "lock".
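The fix described above amounts to rejecting the reserved znode name when a replication peer is added. A minimal sketch (the method and constant names are illustrative, not the actual HBase API):

```java
// Hedged sketch of a peer-id guard for the "lock" collision described above.
// The validation rules here are illustrative assumptions, not HBase's code.
public class PeerIdGuard {
    // Name of the znode used to lock a queue during transfer.
    static final String RESERVED_LOCK_ZNODE = "lock";

    // Returns true when the proposed peer id is safe to use as a znode name.
    static boolean isValidPeerId(String peerId) {
        return peerId != null
                && !peerId.isEmpty()
                && !RESERVED_LOCK_ZNODE.equals(peerId);
    }
}
```

A real add_peer path would throw an IllegalArgumentException instead of returning false, so the shell surfaces the error to the operator.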





[jira] [Comment Edited] (HBASE-17294) External Configuration for Memory Compaction

2016-12-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742514#comment-15742514
 ] 

Ted Yu edited comment on HBASE-17294 at 12/12/16 5:33 PM:
--

Add comment of short description for each type in the enum:
{code}
+  public enum MemoryCompaction {
+NONE,
+BASIC,
+EAGER
{code}
{code}
-return DEFAULT_IN_MEMORY_COMPACTION;
+return null;
{code}
Should NONE be returned above ?
{code}
+import static org.apache.hadoop.hbase.HColumnDescriptor.*;
+import static org.apache.hadoop.hbase.HColumnDescriptor.MemoryCompaction.*;
{code}
Please try not to use wildcard in import.



was (Author: yuzhih...@gmail.com):
Add comment of short description for each type in the enum:
{code}
+  public enum MemoryCompaction {
+NONE,
+BASIC,
+EAGER
{code}
{code}
-return DEFAULT_IN_MEMORY_COMPACTION;
+return null;
{code}
Should NONE be returned above ?
{code}
+import static org.apache.hadoop.hbase.HColumnDescriptor.*;
+import static org.apache.hadoop.hbase.HColumnDescriptor.MemoryCompaction.*;
{code}
Please try not to use wildcard in import.

Please check failed tests.

> External Configuration for Memory Compaction 
> -
>
> Key: HBASE-17294
> URL: https://issues.apache.org/jira/browse/HBASE-17294
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-17294-V01.patch, HBASE-17294-V02.patch
>
>
> We would like to have a single external knob to control memstore compaction.
> Possible memstore compaction policies are none, basic, and eager.
> This sub-task allows to set this property at the column family level at table 
> creation time:
> {code}
> create ‘’,
>{NAME => ‘’, 
> IN_MEMORY_COMPACTION => ‘’}
> {code}
> or to set this at the global configuration level by setting the property in 
> hbase-site.xml, with BASIC being the default value:
> {code}
> <property>
>   <name>hbase.hregion.compacting.memstore.type</name>
>   <value></value>
> </property>
> {code}
> The values used in this property can change as memstore compaction policies 
> evolve over time.





[jira] [Updated] (HBASE-17297) Single Filter in parenthesis cannot be parsed correctly

2016-12-12 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17297:
---
Status: Patch Available  (was: Open)

> Single Filter in parenthesis cannot be parsed correctly
> ---
>
> Key: HBASE-17297
> URL: https://issues.apache.org/jira/browse/HBASE-17297
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Reporter: Xuesen Liang
> Attachments: 17297.master.patch
>
>
> {code}
> hbase(main):010:0* put 'testtable', 'row1', 'cf1:a', 'ab'
> 0 row(s) in 0.1800 seconds
> hbase(main):011:0> scan 'testtable'
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):012:0> scan 'testtable', FILTER=>"(ValueFilter(=,'binary:ab'))"
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):013:0* scan 'testtable', FILTER=>"(ValueFilter(=,'binary:x') AND 
> ValueFilter(=,'binary:y')) OR ValueFilter(=,'binary:ab')"
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):014:0> scan 'testtable', FILTER=>"(ValueFilter(=,'binary:x') AND 
> (ValueFilter(=,'binary:y'))) OR ValueFilter(=,'binary:ab')"
> ROW  COLUMN+CELL
> 0 row(s) in 0.0100 seconds
> {code}
> In the last scan, we should get a row.
> The root cause is that the last filter is parsed incorrectly.





[jira] [Commented] (HBASE-17294) External Configuration for Memory Compaction

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742529#comment-15742529
 ] 

Hadoop QA commented on HBASE-17294:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 1s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 1s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 20 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
1s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 24s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 54s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 90m 12s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 6s 
{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
35s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 141m 14s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842811/HBASE-17294-V02.patch 
|
| JIRA Issue | HBASE-17294 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  rubocop  ruby_lint  |
| uname | Linux 6965e0607dfc 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 1615f45 |

[jira] [Commented] (HBASE-17276) Reduce log spam from WrongRegionException in large multi()'s

2016-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742523#comment-15742523
 ] 

Hudson commented on HBASE-17276:


FAILURE: Integrated in Jenkins build HBase-0.98-on-Hadoop-1.1 #1297 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/1297/])
HBASE-17276 Only log stacktraces for exceptions once for updates in a (elserj: 
rev a1ca72344498731e2751fe74d4387797610a9ab0)
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestObservedExceptionsInBatch.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Reduce log spam from WrongRegionException in large multi()'s
> 
>
> Key: HBASE-17276
> URL: https://issues.apache.org/jira/browse/HBASE-17276
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 2.0.0, 1.4.0, 0.98.24
>
> Attachments: HBASE-17276.001.patch, HBASE-17276.002.patch
>
>
> The following spam drives me up a wall in the regionserver log:
> {noformat}
> 2016-12-05 05:53:05,085 WARN  
> [RpcServer.FifoWFPBQ.default.handler=29,queue=2,port=16020] 
> regionserver.HRegion: Batch mutation had a row that does not belong to this 
> region
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for doMiniBatchMutation on HRegion 
> IntegrationTestReplicationSinkRestart,L\xCC\xCC\xCC\xCC\xCC\xCC\xC8,1480916713541.caab3310166699287b54b72b35b29431.,
>  startKey='L\xCC\xCC\xCC\xCC\xCC\xCC\xC8', 
> getEndKey()='Y\x99\x99\x99\x99\x99\x99\x94', 
> row='\x0C\xD2\xA5\xA3\x99\xC7\xE0Q!\x15^\xA6\x90\x1E\xA3\xAD'
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:5211)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkAndPrepareMutation(HRegion.java:3879)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3040)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2933)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2875)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:717)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:679)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2056)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32303)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2141)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
> 2016-12-05 05:53:05,086 WARN  
> [RpcServer.FifoWFPBQ.default.handler=29,queue=2,port=16020] 
> regionserver.HRegion: Batch mutation had a row that does not belong to this 
> region
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for doMiniBatchMutation on HRegion 
> IntegrationTestReplicationSinkRestart,L\xCC\xCC\xCC\xCC\xCC\xCC\xC8,1480916713541.caab3310166699287b54b72b35b29431.,
>  startKey='L\xCC\xCC\xCC\xCC\xCC\xCC\xC8', 
> getEndKey()='Y\x99\x99\x99\x99\x99\x99\x94', 
> row='\x0E\xE7\xFA[\x8D\x93;\xF4\xC7F\xF9\x85\x84\x85\xF3\x0E'
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:5211)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkAndPrepareMutation(HRegion.java:3879)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3040)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2933)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2875)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:717)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:679)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2056)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32303)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2141)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
> 2016-12-05 05:53:05,087 WARN  
> 
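The patch's approach, per the commit message, is to log stack traces for such exceptions only once per batch. A stdlib-only sketch of that de-duplication idea (not the actual HRegion/TestObservedExceptionsInBatch code):

```java
import java.util.HashSet;
import java.util.Set;

// Hedged sketch: remember which exception classes were already logged for
// this batch, and log the full stack trace only on first sight. Subsequent
// occurrences can be logged as a one-line summary (or counted).
public class ObservedExceptions {
    private final Set<Class<?>> seen = new HashSet<>();

    // Returns true when the caller should log the full stack trace.
    public boolean firstObservation(Throwable t) {
        return seen.add(t.getClass());
    }
}
```

The caller would instantiate one ObservedExceptions per batch, so a fresh batch logs the first WrongRegionException in full but suppresses the repeats that follow.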

[jira] [Commented] (HBASE-17294) External Configuration for Memory Compaction

2016-12-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742514#comment-15742514
 ] 

Ted Yu commented on HBASE-17294:


Add comment of short description for each type in the enum:
{code}
+  public enum MemoryCompaction {
+NONE,
+BASIC,
+EAGER
{code}
{code}
-return DEFAULT_IN_MEMORY_COMPACTION;
+return null;
{code}
Should NONE be returned above ?
{code}
+import static org.apache.hadoop.hbase.HColumnDescriptor.*;
+import static org.apache.hadoop.hbase.HColumnDescriptor.MemoryCompaction.*;
{code}
Please try not to use wildcard in import.

Please check failed tests.
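The two review points above, documenting each enum constant and returning NONE instead of null, can be sketched as follows (the surrounding class, the per-constant descriptions, and the parse helper are illustrative, not the patch itself):

```java
// Hedged sketch of the reviewed enum with javadoc on each constant and a
// null-safe default. Constant meanings are assumptions based on the JIRA
// discussion, not the authoritative HBase documentation.
public class MemoryCompactionExample {
    public enum MemoryCompaction {
        /** No in-memory compaction; classic memstore behavior. */
        NONE,
        /** Lightweight in-memory compaction (assumed meaning). */
        BASIC,
        /** Aggressive in-memory data compaction (assumed meaning). */
        EAGER
    }

    // Returning NONE instead of null keeps callers free of null checks.
    static MemoryCompaction parse(String value) {
        try {
            return MemoryCompaction.valueOf(value.trim().toUpperCase());
        } catch (RuntimeException e) {  // null, empty, or unknown value
            return MemoryCompaction.NONE;
        }
    }
}
```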

> External Configuration for Memory Compaction 
> -
>
> Key: HBASE-17294
> URL: https://issues.apache.org/jira/browse/HBASE-17294
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-17294-V01.patch, HBASE-17294-V02.patch
>
>
> We would like to have a single external knob to control memstore compaction.
> Possible memstore compaction policies are none, basic, and eager.
> This sub-task allows to set this property at the column family level at table 
> creation time:
> {code}
> create ‘’,
>{NAME => ‘’, 
> IN_MEMORY_COMPACTION => ‘’}
> {code}
> or to set this at the global configuration level by setting the property in 
> hbase-site.xml, with BASIC being the default value:
> {code}
> <property>
>   <name>hbase.hregion.compacting.memstore.type</name>
>   <value></value>
> </property>
> {code}
> The values used in this property can change as memstore compaction policies 
> evolve over time.





[jira] [Commented] (HBASE-17081) Flush the entire CompactingMemStore content to disk

2016-12-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742493#comment-15742493
 ] 

Ted Yu commented on HBASE-17081:


Triggered a new QA run.
Previous one (#4880) mysteriously stopped:
{code}
HBASE-17081 patch is being downloaded at Mon Dec 12 10:28:51 UTC 2016 from
  
https://issues.apache.org/jira/secure/attachment/12842776/HBaseMeetupDecember2016-V02.pptx
 -> Downloaded
ERROR: Unsure how to process HBASE-17081.
{code}

> Flush the entire CompactingMemStore content to disk
> ---
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17081-V01.patch, HBASE-17081-V02.patch, 
> HBASE-17081-V03.patch, HBASE-17081-V04.patch, HBASE-17081-V05.patch, 
> HBASE-17081-V06.patch, HBaseMeetupDecember2016-V02.pptx, 
> Pipelinememstore_fortrunk_3.patch
>
>
> Part of CompactingMemStore's memory is held by an active segment, and another 
> part is divided between immutable segments in the compacting pipeline. Upon 
> flush-to-disk request we want to flush all of it to disk, in contrast to 
> flushing only tail of the compacting pipeline.
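The flush-everything behavior requested here, in contrast to flushing only the tail of the pipeline, can be illustrated with a toy model. The class and method names below are invented for the sketch and are not the actual CompactingMemStore API.

```python
# Toy model (not the real CompactingMemStore API): a compacting memstore keeps
# one mutable active segment plus a pipeline of immutable segments. A
# flush-to-disk request should drain all of them, not just the pipeline tail.
class ToyCompactingMemStore:
    def __init__(self):
        self.active = []    # mutable active segment (list of cells)
        self.pipeline = []  # immutable segments, oldest first

    def in_memory_flush(self):
        """Move the active segment into the compacting pipeline."""
        self.pipeline.append(self.active)
        self.active = []

    def flush_tail(self):
        """Old behavior: flush only the oldest pipeline segment."""
        return self.pipeline.pop(0) if self.pipeline else []

    def flush_all(self):
        """Behavior requested here: flush active segment plus entire pipeline."""
        snapshot = self.pipeline + [self.active]
        self.pipeline, self.active = [], []
        return [cell for seg in snapshot for cell in seg]

m = ToyCompactingMemStore()
m.active = ["a"]; m.in_memory_flush()
m.active = ["b"]; m.in_memory_flush()
m.active = ["c"]
print(m.flush_all())  # ['a', 'b', 'c']: everything reaches disk
```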





[jira] [Updated] (HBASE-17297) Single Filter in parenthesis cannot be parsed correctly

2016-12-12 Thread Xuesen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuesen Liang updated HBASE-17297:
-
Attachment: 17297.master.patch

> Single Filter in parenthesis cannot be parsed correctly
> ---
>
> Key: HBASE-17297
> URL: https://issues.apache.org/jira/browse/HBASE-17297
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Reporter: Xuesen Liang
> Attachments: 17297.master.patch
>
>
> {code}
> hbase(main):010:0* put 'testtable', 'row1', 'cf1:a', 'ab'
> 0 row(s) in 0.1800 seconds
> hbase(main):011:0> scan 'testtable'
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):012:0> scan 'testtable', FILTER=>"(ValueFilter(=,'binary:ab'))"
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):013:0* scan 'testtable', FILTER=>"(ValueFilter(=,'binary:x') AND 
> ValueFilter(=,'binary:y')) OR ValueFilter(=,'binary:ab')"
> ROW  COLUMN+CELL
>  row1column=cf1:a, 
> timestamp=1481538756308, value=ab
> hbase(main):014:0> scan 'testtable', FILTER=>"(ValueFilter(=,'binary:x') AND 
> (ValueFilter(=,'binary:y'))) OR ValueFilter(=,'binary:ab')"
> ROW  COLUMN+CELL
> 0 row(s) in 0.0100 seconds
> {code}
> In the last scan, we should get a row.
> The root cause is that the last filter is parsed incorrectly.
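The expected semantics are that parentheses only group, so a single filter wrapped in parentheses must behave exactly like the bare filter. A minimal boolean model of the failing expression follows; this is an illustrative sketch, not HBase's actual ParseFilter implementation.

```python
# Sketch of the expected boolean semantics of the scans above (not the actual
# HBase ParseFilter code). Parentheses only group, so (F) must equal F.
def value_filter(expected):
    return lambda value: value == expected

def f_and(a, b):
    return lambda v: a(v) and b(v)

def f_or(a, b):
    return lambda v: a(v) or b(v)

# "(ValueFilter(=,'binary:x') AND (ValueFilter(=,'binary:y')))
#    OR ValueFilter(=,'binary:ab')"
expr = f_or(f_and(value_filter(b"x"), value_filter(b"y")),
            value_filter(b"ab"))

print(expr(b"ab"))  # True: row1 (value 'ab') should be returned by the scan
```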





[jira] [Created] (HBASE-17297) Single Filter in parenthesis cannot be parsed correctly

2016-12-12 Thread Xuesen Liang (JIRA)
Xuesen Liang created HBASE-17297:


 Summary: Single Filter in parenthesis cannot be parsed correctly
 Key: HBASE-17297
 URL: https://issues.apache.org/jira/browse/HBASE-17297
 Project: HBase
  Issue Type: Bug
  Components: Filters
Reporter: Xuesen Liang


{code}
hbase(main):010:0* put 'testtable', 'row1', 'cf1:a', 'ab'
0 row(s) in 0.1800 seconds

hbase(main):011:0> scan 'testtable'
ROW  COLUMN+CELL
 row1column=cf1:a, timestamp=1481538756308, 
value=ab

hbase(main):012:0> scan 'testtable', FILTER=>"(ValueFilter(=,'binary:ab'))"
ROW  COLUMN+CELL
 row1column=cf1:a, timestamp=1481538756308, 
value=ab

hbase(main):013:0* scan 'testtable', FILTER=>"(ValueFilter(=,'binary:x') AND 
ValueFilter(=,'binary:y')) OR ValueFilter(=,'binary:ab')"
ROW  COLUMN+CELL
 row1column=cf1:a, timestamp=1481538756308, 
value=ab

hbase(main):014:0> scan 'testtable', FILTER=>"(ValueFilter(=,'binary:x') AND 
(ValueFilter(=,'binary:y'))) OR ValueFilter(=,'binary:ab')"
ROW  COLUMN+CELL
0 row(s) in 0.0100 seconds
{code}



In the last scan, we should get a row.
The root cause is that the last filter is parsed incorrectly.






[jira] [Commented] (HBASE-17295) The namespace table has two regions

2016-12-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15742279#comment-15742279
 ] 

Ted Yu commented on HBASE-17295:


Can you attach the (redacted) master log starting from when 'namespace,,1456...' 
first appeared?

> The namespace table has two regions
> ---
>
> Key: HBASE-17295
> URL: https://issues.apache.org/jira/browse/HBASE-17295
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Chang chen
> Attachments: bug.PNG
>
>
> From the code, the hbase namespace meta table should not be allowed to be split.
> {code:title=HRegion#checkSplit}
> public byte[] checkSplit() {
> // Can't split META
> if (this.getRegionInfo().isMetaTable() ||
> 
> TableName.NAMESPACE_TABLE_NAME.equals(this.getRegionInfo().getTable())) {
>   if (shouldForceSplit()) {
> LOG.warn("Cannot split meta region in HBase 0.20 and above");
>   }
>   return null;
> }
> //.
> }
> {code}
> But recently, I saw two namespace regions in our production deployment. It 
> may be caused by restarting when the cluster is in a certain state.





[jira] [Updated] (HBASE-17295) The namespace table has two regions

2016-12-12 Thread Chang chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang chen updated HBASE-17295:
---
Attachment: (was: bug.PNG)

> The namespace table has two regions
> ---
>
> Key: HBASE-17295
> URL: https://issues.apache.org/jira/browse/HBASE-17295
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Chang chen
>
> From the code, the hbase namespace meta table should not be allowed to be split.
> {code:title=HRegion#checkSplit}
> public byte[] checkSplit() {
> // Can't split META
> if (this.getRegionInfo().isMetaTable() ||
> 
> TableName.NAMESPACE_TABLE_NAME.equals(this.getRegionInfo().getTable())) {
>   if (shouldForceSplit()) {
> LOG.warn("Cannot split meta region in HBase 0.20 and above");
>   }
>   return null;
> }
> //.
> }
> {code}
> But recently, I saw two namespace regions in our production deployment. It 
> may be caused by restarting when the cluster is in a certain state.





[jira] [Updated] (HBASE-17294) External Configuration for Memory Compaction

2016-12-12 Thread Eshcar Hillel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eshcar Hillel updated HBASE-17294:
--
Attachment: HBASE-17294-V02.patch

> External Configuration for Memory Compaction 
> -
>
> Key: HBASE-17294
> URL: https://issues.apache.org/jira/browse/HBASE-17294
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-17294-V01.patch, HBASE-17294-V02.patch
>
>
> We would like to have a single external knob to control memstore compaction.
> Possible memstore compaction policies are none, basic, and eager.
> This sub-task allows setting this property at the column-family level at table 
> creation time:
> {code}
> create '<table_name>',
>{NAME => '<column_family_name>', 
> IN_MEMORY_COMPACTION => '<policy>'}
> {code}
> or to set it at the global configuration level via the property in 
> hbase-site.xml, with BASIC being the default value:
> {code}
> <property>
>   <name>hbase.hregion.compacting.memstore.type</name>
>   <value>BASIC</value>
> </property>
> {code}
> The values used in this property can change as memstore compaction policies 
> evolve over time.





[jira] [Updated] (HBASE-17295) The namespace table has two regions

2016-12-12 Thread Chang chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang chen updated HBASE-17295:
---
Attachment: bug.PNG

> The namespace table has two regions
> ---
>
> Key: HBASE-17295
> URL: https://issues.apache.org/jira/browse/HBASE-17295
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Chang chen
> Attachments: bug.PNG
>
>
> From the code, the hbase namespace meta table should not be allowed to be split.
> {code:title=HRegion#checkSplit}
> public byte[] checkSplit() {
> // Can't split META
> if (this.getRegionInfo().isMetaTable() ||
> 
> TableName.NAMESPACE_TABLE_NAME.equals(this.getRegionInfo().getTable())) {
>   if (shouldForceSplit()) {
> LOG.warn("Cannot split meta region in HBase 0.20 and above");
>   }
>   return null;
> }
> //.
> }
> {code}
> But recently, I saw two namespace regions in our production deployment. It 
> may be caused by restarting when the cluster is in a certain state.





[jira] [Commented] (HBASE-17291) Remove ImmutableSegment#getKeyValueScanner

2016-12-12 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15741903#comment-15741903
 ] 

Anastasia Braginsky commented on HBASE-17291:
-

Great [~ram_krish]! Although we should commit HBASE-17081 prior to starting 
on this one :)

> Remove ImmutableSegment#getKeyValueScanner
> --
>
> Key: HBASE-17291
> URL: https://issues.apache.org/jira/browse/HBASE-17291
> Project: HBase
>  Issue Type: Improvement
>  Components: Scanners
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
>
> This is based on a discussion over [~anastas]'s patch. The MemstoreSnapshot 
> uses a KeyValueScanner, which actually seems redundant considering we already 
> have a SegmentScanner. The idea is that the snapshot scanner should be a 
> simple iterator-type scanner, but it lacks the capability to do 
> reference counting on the segment that is now used in the snapshot. With the 
> snapshot having multiple segments in the latest impl, it is better we hold on 
> to each segment by doing ref counting. 
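The ref-counting idea described above, where each snapshot scanner pins the segments it reads so their memory is not released underneath it, might look like the following toy model. The class names are invented for illustration and are not the real Segment/SegmentScanner API.

```python
# Illustrative sketch (not the real Segment/SegmentScanner classes): each
# scanner increments the segment's reference count on open and decrements it
# on close; the segment's memory is released only when the count drops to zero.
class ToySegment:
    def __init__(self, cells):
        self.cells = cells
        self.refs = 0
        self.freed = False

    def incr(self):
        self.refs += 1

    def decr(self):
        self.refs -= 1
        if self.refs == 0:
            self.freed = True  # safe to release the segment's memory here

class ToySegmentScanner:
    def __init__(self, segment):
        self.segment = segment
        segment.incr()  # pin the segment while scanning

    def close(self):
        self.segment.decr()

seg = ToySegment(["k1", "k2"])
s1, s2 = ToySegmentScanner(seg), ToySegmentScanner(seg)
s1.close()
print(seg.freed)  # False: s2 still holds a reference
s2.close()
print(seg.freed)  # True: last scanner closed, memory may be released
```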





[jira] [Created] (HBASE-17296) Provide per peer throttling for replication

2016-12-12 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-17296:
--

 Summary: Provide per peer throttling for replication
 Key: HBASE-17296
 URL: https://issues.apache.org/jira/browse/HBASE-17296
 Project: HBase
  Issue Type: Improvement
  Components: Replication
Reporter: Guanghao Zhang


HBASE-9501 added a config to provide throttling for replication, but every peer 
gets the same bandwidth upper limit. In our use case, one cluster may have 
several peers and several slave clusters. Each slave cluster may have a 
different scale and need a different bandwidth upper limit for each peer. So we 
add bandwidth to the replication peer config and provide a shell cmd, set_peer 
bandwidth, to update the bandwidth as needed. It has been used for a long time 
on our clusters. Any suggestions are welcome.
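The per-peer limit described here amounts to a throttle keyed by peer id. Below is a minimal illustrative sketch, not the actual HBase implementation; the class and method names are made up, and a real throttler would also track time-based cycles rather than an explicit next_cycle() call.

```python
# Illustrative per-peer throttle (not HBase code): each peer gets its own
# bytes-per-cycle quota, so slave clusters of different scales can be
# limited independently.
class PerPeerThrottler:
    def __init__(self):
        self.bandwidth = {}  # peer id -> bytes allowed per cycle (0 = unlimited)
        self.used = {}       # peer id -> bytes shipped in the current cycle

    def set_peer_bandwidth(self, peer_id, bytes_per_cycle):
        """Analogous to updating the bandwidth in the replication peer config."""
        self.bandwidth[peer_id] = bytes_per_cycle

    def try_ship(self, peer_id, nbytes):
        """Return True if this batch fits in the peer's quota for the cycle."""
        limit = self.bandwidth.get(peer_id, 0)
        used = self.used.get(peer_id, 0)
        if limit and used + nbytes > limit:
            return False  # caller should sleep until the next cycle
        self.used[peer_id] = used + nbytes
        return True

    def next_cycle(self):
        self.used.clear()

t = PerPeerThrottler()
t.set_peer_bandwidth("peer1", 100)
print(t.try_ship("peer1", 80))   # True
print(t.try_ship("peer1", 40))   # False: would exceed peer1's 100-byte quota
print(t.try_ship("peer2", 500))  # True: peer2 has no limit configured
```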





[jira] [Commented] (HBASE-17276) Reduce log spam from WrongRegionException in large multi()'s

2016-12-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15741858#comment-15741858
 ] 

Hudson commented on HBASE-17276:


FAILURE: Integrated in Jenkins build HBase-0.98-matrix #426 (See 
[https://builds.apache.org/job/HBase-0.98-matrix/426/])
HBASE-17276 Only log stacktraces for exceptions once for updates in a (elserj: 
rev a1ca72344498731e2751fe74d4387797610a9ab0)
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestObservedExceptionsInBatch.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Reduce log spam from WrongRegionException in large multi()'s
> 
>
> Key: HBASE-17276
> URL: https://issues.apache.org/jira/browse/HBASE-17276
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Minor
> Fix For: 2.0.0, 1.4.0, 0.98.24
>
> Attachments: HBASE-17276.001.patch, HBASE-17276.002.patch
>
>
> The following spam drives me up a wall in the regionserver log:
> {noformat}
> 2016-12-05 05:53:05,085 WARN  
> [RpcServer.FifoWFPBQ.default.handler=29,queue=2,port=16020] 
> regionserver.HRegion: Batch mutation had a row that does not belong to this 
> region
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for doMiniBatchMutation on HRegion 
> IntegrationTestReplicationSinkRestart,L\xCC\xCC\xCC\xCC\xCC\xCC\xC8,1480916713541.caab3310166699287b54b72b35b29431.,
>  startKey='L\xCC\xCC\xCC\xCC\xCC\xCC\xC8', 
> getEndKey()='Y\x99\x99\x99\x99\x99\x99\x94', 
> row='\x0C\xD2\xA5\xA3\x99\xC7\xE0Q!\x15^\xA6\x90\x1E\xA3\xAD'
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:5211)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkAndPrepareMutation(HRegion.java:3879)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3040)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2933)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2875)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:717)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:679)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2056)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32303)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2141)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
> 2016-12-05 05:53:05,086 WARN  
> [RpcServer.FifoWFPBQ.default.handler=29,queue=2,port=16020] 
> regionserver.HRegion: Batch mutation had a row that does not belong to this 
> region
> org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested row out 
> of range for doMiniBatchMutation on HRegion 
> IntegrationTestReplicationSinkRestart,L\xCC\xCC\xCC\xCC\xCC\xCC\xC8,1480916713541.caab3310166699287b54b72b35b29431.,
>  startKey='L\xCC\xCC\xCC\xCC\xCC\xCC\xC8', 
> getEndKey()='Y\x99\x99\x99\x99\x99\x99\x94', 
> row='\x0E\xE7\xFA[\x8D\x93;\xF4\xC7F\xF9\x85\x84\x85\xF3\x0E'
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkRow(HRegion.java:5211)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkAndPrepareMutation(HRegion.java:3879)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3040)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2933)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2875)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:717)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:679)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2056)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32303)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2141)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
> 2016-12-05 05:53:05,087 WARN  
> 

[jira] [Commented] (HBASE-17294) External Configuration for Memory Compaction

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15741760#comment-15741760
 ] 

Hadoop QA commented on HBASE-17294:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 0s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 0s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m 48s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 8s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 117m 50s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 32s 
{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
39s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 179m 17s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient |
|   | hadoop.hbase.master.TestTableLockManager |
|   | hadoop.hbase.regionserver.TestPerColumnFamilyFlush |
|   | hadoop.hbase.client.TestSnapshotCloneIndependence |
|   | hadoop.hbase.snapshot.TestFlushSnapshotFromClient |
|   | hadoop.hbase.client.TestMobSnapshotCloneIndependence |
| Timed out junit tests | 
org.apache.hadoop.hbase.regionserver.wal.TestAsyncLogRolling |
|   | org.apache.hadoop.hbase.regionserver.wal.TestLogRolling |
|   | org.apache.hadoop.hbase.client.TestMobSnapshotFromClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 

[jira] [Updated] (HBASE-17295) The namespace table has two regions

2016-12-12 Thread Chang chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang chen updated HBASE-17295:
---
Description: 
From the code, the hbase namespace meta table should not be allowed to be split.

{code:title=HRegion#checkSplit}
public byte[] checkSplit() {
// Can't split META
if (this.getRegionInfo().isMetaTable() ||
TableName.NAMESPACE_TABLE_NAME.equals(this.getRegionInfo().getTable())) 
{
  if (shouldForceSplit()) {
LOG.warn("Cannot split meta region in HBase 0.20 and above");
  }
  return null;
}
//.
}
{code}

But recently, I saw two namespace regions in our production deployment. It 
may be caused by restarting when the cluster is in a certain state.

  was:
From the code, the hbase namespace meta table should not be allowed to be split.

{quote}
public byte[] checkSplit() {
// Can't split META
if (this.getRegionInfo().isMetaTable() ||
TableName.NAMESPACE_TABLE_NAME.equals(this.getRegionInfo().getTable())) 
{
  if (shouldForceSplit()) {
LOG.warn("Cannot split meta region in HBase 0.20 and above");
  }
  return null;
}
{quote}

But recently, I saw two namespace regions in our production deployment. It 
may be caused by restarting when the cluster is in a certain state.


> The namespace table has two regions
> ---
>
> Key: HBASE-17295
> URL: https://issues.apache.org/jira/browse/HBASE-17295
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Chang chen
> Attachments: bug.PNG
>
>
> From the code, the hbase namespace meta table should not be allowed to be split.
> {code:title=HRegion#checkSplit}
> public byte[] checkSplit() {
> // Can't split META
> if (this.getRegionInfo().isMetaTable() ||
> 
> TableName.NAMESPACE_TABLE_NAME.equals(this.getRegionInfo().getTable())) {
>   if (shouldForceSplit()) {
> LOG.warn("Cannot split meta region in HBase 0.20 and above");
>   }
>   return null;
> }
> //.
> }
> {code}
> But recently, I saw two namespace regions in our production deployment. It 
> may be caused by restarting when the cluster is in a certain state.





[jira] [Updated] (HBASE-17295) The namespace table has two regions

2016-12-12 Thread Chang chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang chen updated HBASE-17295:
---
Description: 
From the code, the hbase namespace meta table should not be allowed to be split.

{quote}
public byte[] checkSplit() {
// Can't split META
if (this.getRegionInfo().isMetaTable() ||
TableName.NAMESPACE_TABLE_NAME.equals(this.getRegionInfo().getTable())) 
{
  if (shouldForceSplit()) {
LOG.warn("Cannot split meta region in HBase 0.20 and above");
  }
  return null;
}
{quote}

But recently, I saw two namespace regions in our production deployment. It 
may be caused by restarting when the cluster is in a certain state.

  was:
From the code, the hbase namespace meta table should not be allowed to be split.

{quote}
public byte[] checkSplit() {
// Can't split META
if (this.getRegionInfo().isMetaTable() ||
TableName.NAMESPACE_TABLE_NAME.equals(this.getRegionInfo().getTable())) 
{
  if (shouldForceSplit()) {
LOG.warn("Cannot split meta region in HBase 0.20 and above");
  }
  return null;
}
{quote}

But recently, I saw two namespace regions in our production deployment. It 
may be caused by restarting clusters.



> The namespace table has two regions
> ---
>
> Key: HBASE-17295
> URL: https://issues.apache.org/jira/browse/HBASE-17295
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Chang chen
> Attachments: bug.PNG
>
>
> From the code, the hbase namespace meta table should not be allowed to be split.
> {quote}
> public byte[] checkSplit() {
> // Can't split META
> if (this.getRegionInfo().isMetaTable() ||
> 
> TableName.NAMESPACE_TABLE_NAME.equals(this.getRegionInfo().getTable())) {
>   if (shouldForceSplit()) {
> LOG.warn("Cannot split meta region in HBase 0.20 and above");
>   }
>   return null;
> }
> {quote}
> But recently, I saw two namespace regions in our production deployment. It 
> may be caused by restarting when the cluster is in a certain state.





[jira] [Updated] (HBASE-17295) The namespace table has two regions

2016-12-12 Thread Chang chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang chen updated HBASE-17295:
---
Description: 
From the code, the hbase namespace meta table should not be allowed to be split.

{quote}
public byte[] checkSplit() {
// Can't split META
if (this.getRegionInfo().isMetaTable() ||
TableName.NAMESPACE_TABLE_NAME.equals(this.getRegionInfo().getTable())) 
{
  if (shouldForceSplit()) {
LOG.warn("Cannot split meta region in HBase 0.20 and above");
  }
  return null;
}
{quote}

But recently, I saw two namespace regions in our production deployment. It 
may be caused by restarting clusters.


> The namespace table has two regions
> ---
>
> Key: HBASE-17295
> URL: https://issues.apache.org/jira/browse/HBASE-17295
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Chang chen
> Attachments: bug.PNG
>
>
> From the code, the hbase namespace meta table should not be allowed to be split.
> {quote}
> public byte[] checkSplit() {
> // Can't split META
> if (this.getRegionInfo().isMetaTable() ||
> 
> TableName.NAMESPACE_TABLE_NAME.equals(this.getRegionInfo().getTable())) {
>   if (shouldForceSplit()) {
> LOG.warn("Cannot split meta region in HBase 0.20 and above");
>   }
>   return null;
> }
> {quote}
> But recently, I saw two namespace regions in our production deployment. It 
> may be caused by restarting clusters.





[jira] [Updated] (HBASE-17295) The namespace table has two regions

2016-12-12 Thread Chang chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang chen updated HBASE-17295:
---
Attachment: bug.PNG

See the attached picture; you will find that there are 2 regions with the same 
start and end key for the hbase:namespace table.

> The namespace table has two regions
> ---
>
> Key: HBASE-17295
> URL: https://issues.apache.org/jira/browse/HBASE-17295
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Chang chen
> Attachments: bug.PNG
>
>






[jira] [Updated] (HBASE-17295) The namespace table has two regions

2016-12-12 Thread Chang chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang chen updated HBASE-17295:
---
Attachment: (was: Capture.PNG)

> The namespace table has two regions
> ---
>
> Key: HBASE-17295
> URL: https://issues.apache.org/jira/browse/HBASE-17295
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Chang chen
>






[jira] [Updated] (HBASE-17295) The namespace table has two regions

2016-12-12 Thread Chang chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang chen updated HBASE-17295:
---
Attachment: Capture.PNG

> The namespace table has two regions
> ---
>
> Key: HBASE-17295
> URL: https://issues.apache.org/jira/browse/HBASE-17295
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Chang chen
> Attachments: Capture.PNG
>
>






[jira] [Created] (HBASE-17295) The namespace table has two regions

2016-12-12 Thread Chang chen (JIRA)
Chang chen created HBASE-17295:
--

 Summary: The namespace table has two regions
 Key: HBASE-17295
 URL: https://issues.apache.org/jira/browse/HBASE-17295
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Chang chen








[jira] [Commented] (HBASE-17081) Flush the entire CompactingMemStore content to disk

2016-12-12 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15741574#comment-15741574
 ] 

Anastasia Braginsky commented on HBASE-17081:
-

Hi All!

There has been some delay here due to traveling to SFO and giving a short talk 
there about what we are doing here with in-memory flushes and compaction. I am 
attaching the presentation; you might be interested in the read performance 
graphs at the end. Hereby (and on RB), I attach the last (really last) patch! 
:-)

I have addressed all the comments on RB. Since I know you are not getting 
updates on my answers there, I encourage you to take a look at them. The 
important difference in the last patch is that the composite snapshot is now 
always enabled (both for IC and DC). This is because we have seen a great 
improvement in read latencies after combining DC with the composite snapshot as 
well.
Any other changes can go in a different JIRA; please commit this one!

Thanks,
Anastasia

> Flush the entire CompactingMemStore content to disk
> ---
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17081-V01.patch, HBASE-17081-V02.patch, 
> HBASE-17081-V03.patch, HBASE-17081-V04.patch, HBASE-17081-V05.patch, 
> HBASE-17081-V06.patch, HBaseMeetupDecember2016-V02.pptx, 
> Pipelinememstore_fortrunk_3.patch
>
>
> Part of CompactingMemStore's memory is held by an active segment, and another 
> part is divided between immutable segments in the compacting pipeline. Upon 
> flush-to-disk request we want to flush all of it to disk, in contrast to 
> flushing only tail of the compacting pipeline.





[jira] [Updated] (HBASE-17081) Flush the entire CompactingMemStore content to disk

2016-12-12 Thread Anastasia Braginsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anastasia Braginsky updated HBASE-17081:

Attachment: HBaseMeetupDecember2016-V02.pptx

> Flush the entire CompactingMemStore content to disk
> ---
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17081-V01.patch, HBASE-17081-V02.patch, 
> HBASE-17081-V03.patch, HBASE-17081-V04.patch, HBASE-17081-V05.patch, 
> HBASE-17081-V06.patch, HBaseMeetupDecember2016-V02.pptx, 
> Pipelinememstore_fortrunk_3.patch
>
>
> Part of CompactingMemStore's memory is held by an active segment, and another 
> part is divided between immutable segments in the compacting pipeline. Upon 
> flush-to-disk request we want to flush all of it to disk, in contrast to 
> flushing only tail of the compacting pipeline.





[jira] [Updated] (HBASE-17081) Flush the entire CompactingMemStore content to disk

2016-12-12 Thread Anastasia Braginsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anastasia Braginsky updated HBASE-17081:

Attachment: HBASE-17081-V06.patch

> Flush the entire CompactingMemStore content to disk
> ---
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17081-V01.patch, HBASE-17081-V02.patch, 
> HBASE-17081-V03.patch, HBASE-17081-V04.patch, HBASE-17081-V05.patch, 
> HBASE-17081-V06.patch, Pipelinememstore_fortrunk_3.patch
>
>
> Part of CompactingMemStore's memory is held by an active segment, and another 
> part is divided between immutable segments in the compacting pipeline. Upon 
> flush-to-disk request we want to flush all of it to disk, in contrast to 
> flushing only tail of the compacting pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17257) Add column-aliasing capability to hbase-client

2016-12-12 Thread Daniel Vimont (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15741486#comment-15741486
 ] 

Daniel Vimont commented on HBASE-17257:
---

In reviewing my own code, I am noting the following:

As currently coded, read/write access to an alias-enabled Table (i.e., a table 
with one or more alias-enabled column families) requires that the Master server 
be accessible, because the #getTableDescriptor method must be invoked. I'm 
fairly confident things could be altered to remove this requirement, but for 
now the constraint stands: the Master server must be accessible.

I have already received one person's (off-line) assurance that there is no need 
to guarantee access to a table while the Master server is down. However, many 
of the standard write-ups about HBase tout that basic read/write access to 
tables continues even when the Master is down. So I wanted to get consensus on 
whether or not alias-enabled tables need to remain accessible (for read/write 
access) when the Master server is unreachable.
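One way the requirement described above could plausibly be relaxed is by caching per-table alias metadata on the client after the first lookup, so later reads/writes need not repeat the getTableDescriptor round-trip. This is a hypothetical sketch only; the class and method names below are invented for illustration and are not part of the patch.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical client-side cache of each table's alias-size metadata.
// 'fetchFromMaster' stands in for the getTableDescriptor call that
// requires a reachable Master; it runs only on a cache miss.
class AliasMetadataCache {
    private final Map<String, Integer> aliasSizeByTable = new ConcurrentHashMap<>();

    int aliasSize(String table, Supplier<Integer> fetchFromMaster) {
        return aliasSizeByTable.computeIfAbsent(table, t -> fetchFromMaster.get());
    }
}
```

With such a cache, a client that has already touched a table could keep reading and writing it while the Master is briefly down, at the cost of possibly stale metadata.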

> Add column-aliasing capability to hbase-client
> --
>
> Key: HBASE-17257
> URL: https://issues.apache.org/jira/browse/HBASE-17257
> Project: HBase
>  Issue Type: New Feature
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Daniel Vimont
>Assignee: Daniel Vimont
>  Labels: features
> Attachments: HBASE-17257-v2.patch, HBASE-17257-v3.patch, 
> HBASE-17257.patch
>
>
> Review Board link: https://reviews.apache.org/r/54635/
> Column aliasing will provide the option for a 1, 2, or 4 byte alias value to 
> be stored in each cell of an "alias enabled" column-family, in place of the 
> full-length column-qualifier. Aliasing is intended to operate completely 
> invisibly to the end-user developer, with absolutely no "awareness" of 
> aliasing required to be coded into a front-end application. No new public 
> hbase-client interfaces are to be introduced, and only a few new public 
> methods should need to be added to existing interfaces, primarily to allow an 
> administrator to designate that a new column-family is to be alias-enabled by 
> setting its aliasSize attribute to 1, 2, or 4.
> To facilitate such functionality, new subclasses of HTable, 
> BufferedMutatorImpl, and HTableMultiplexer are to be provided. The overriding 
> methods of these new subclasses will invoke methods of the new AliasManager 
> class to facilitate qualifier-to-alias conversions (for user-submitted Gets, 
> Scans, and Mutations) and alias-to-qualifier conversions (for Results 
> returned from HBase) for any Table that has one or more alias-enabled column 
> families. All conversion logic will be encapsulated in the new AliasManager 
> class, and all qualifier-to-alias mappings will be persisted in a new 
> aliasMappingTable in a new, reserved namespace.
> An informal polling of HBase users at HBaseCon East and at the 
> Strata/Hadoop-World conference in Sept. 2016 showed that Column Aliasing 
> could be a popular enhancement to standard HBase functionality, due to the 
> fact that full column-qualifiers are stored in each cell, and reducing this 
> qualifier storage requirement down to 1, 2, or 4 bytes per cell could prove 
> beneficial in terms of reduced storage and bandwidth needs. Aliasing is 
> intended chiefly for column-families which are of the "narrow and tall" 
> variety (i.e., that are designed to use relatively few distinct 
> column-qualifiers throughout a large number of rows, throughout the lifespan 
> of the column-family). A column-family that is set up with an alias-size of 1 
> byte can contain up to 255 unique column-qualifiers; a 2 byte alias-size 
> allows for up to 65,535 unique column-qualifiers; and a 4 byte alias-size 
> allows for up to 4,294,967,295 unique column-qualifiers.
> Fuller specifications will be entered into the comments section below. Note 
> that it may well not be viable to add aliasing support in the new "async" 
> classes that appear to be currently under development.
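The core conversion idea described above can be sketched in a few lines of plain Java (this is illustrative only, not the AliasManager class proposed in the patch): map full column-qualifiers to fixed-width aliases and back, here with a 2-byte alias size allowing up to 65,535 unique qualifiers.

```java
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of qualifier<->alias mapping with a 2-byte alias.
class ToyAliasManager {
    private final Map<String, Short> toAlias = new HashMap<>();
    private final Map<Short, String> toQualifier = new HashMap<>();
    private short next = 1; // 0 reserved; 1..65535 usable aliases

    // Qualifier-to-alias conversion, as would apply to Gets/Scans/Mutations.
    byte[] alias(String qualifier) {
        Short a = toAlias.get(qualifier);
        if (a == null) {
            a = next++;
            toAlias.put(qualifier, a);
            toQualifier.put(a, qualifier);
        }
        return ByteBuffer.allocate(2).putShort(a).array();
    }

    // Alias-to-qualifier conversion, as would apply to returned Results.
    String qualifier(byte[] alias) {
        return toQualifier.get(ByteBuffer.wrap(alias).getShort());
    }
}
```

Each cell then stores 2 bytes in place of the full qualifier, which is where the storage and bandwidth savings for "narrow and tall" column-families come from.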



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17292) Add observer notification before bulk loaded hfile is moved to region directory

2016-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15741429#comment-15741429
 ] 

Hadoop QA commented on HBASE-17292:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s {color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 0s {color} | {color:blue} The patch file was not named according to hbase's naming conventions. Please see https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 2s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 25m 32s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 89m 33s {color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s {color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 127m 9s {color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12842761/17292.v1.txt |
| JIRA Issue | HBASE-17292 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 2fecb9d06598 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 1615f45 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/4878/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/4878/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Add observer notification before bulk loaded hfile is moved to region 
> directory
> 
