[jira] [Updated] (HBASE-18129) truncate_preserve fails when the truncate method doesn't exist on the master

2017-05-26 Thread Guangxu Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-18129:
--
Attachment: HBASE-18129-branch-1.patch

> truncate_preserve fails when the truncate method doesn't exist on the master
> -
>
> Key: HBASE-18129
> URL: https://issues.apache.org/jira/browse/HBASE-18129
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.0.0, 1.2.5
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Attachments: HBASE-18129-branch-1.patch
>
>
> Recently, I ran a rolling upgrade from HBase 0.98.x to HBase 1.2.5. While the 
> master had not been upgraded yet, I tried to truncate a table with the 1.2.5 
> truncate_preserve command, but it failed.
> {code}
> hbase(main):001:0> truncate_preserve 'cf_logs'
> Truncating 'cf_logs' table (it may take a while):
>  - Disabling table...
>  - Truncating table...
>  - Dropping table...
>  - Creating table with region boundaries...
> ERROR: no method 'createTable' for arguments 
> (org.apache.hadoop.hbase.HTableDescriptor,org.jruby.java.proxies.ArrayJavaProxy)
>  on Java::OrgApacheHadoopHbaseClient::HBaseAdmin
> {code}
> After checking the code and commit history, I found that HBASE-12833 introduced 
> this bug, so we should fix it.
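The fallback flow the shell attempts can be sketched as follows. This is a hypothetical, self-contained simulation of the logic being discussed, not the real shell code: the `Admin` interface below is a stand-in for `org.apache.hadoop.hbase.client.Admin`, and the actual bug is in how createTable is invoked from JRuby. When the (older) master does not support the truncate RPC, the shell must fall back to the pre-truncate sequence of disable/delete/create with the preserved split keys.

```java
import java.util.List;

public class TruncatePreserveFallback {
  // Stand-in interface; NOT the real org.apache.hadoop.hbase.client.Admin.
  public interface Admin {
    void truncateTable(String table, boolean preserveSplits);
    List<byte[]> getSplitKeys(String table);
    void disableTable(String table);
    void deleteTable(String table);
    void createTable(String table, List<byte[]> splitKeys);
  }

  /** Returns "truncate" or "fallback" depending on which path was taken. */
  public static String truncatePreserve(Admin admin, String table) {
    try {
      // Newer masters support truncate directly.
      admin.truncateTable(table, true);
      return "truncate";
    } catch (UnsupportedOperationException e) {
      // Old master: recreate the table with its original region boundaries.
      List<byte[]> splits = admin.getSplitKeys(table);
      admin.disableTable(table);
      admin.deleteTable(table);
      admin.createTable(table, splits);
      return "fallback";
    }
  }
}
```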



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18122) Scanner id should include ServerName of region server

2017-05-26 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-18122:
--
Attachment: HBASE-18122.v03.patch

Fixed the failed tests. The generator is now constructed in RSRpcServices.start(), 
because in the RSRpcServices constructor the ServerName of the HRegionServer is 
still null.
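The ordering problem described above can be illustrated with a minimal sketch (all names here are hypothetical, not the real RSRpcServices fields): a field that is set only after construction is still null inside the constructor, so anything derived from it must be built in start().

```java
public class RpcServices {
  private String serverName;   // set by the server after construction
  private ScannerIdGen gen;    // must NOT be built in the constructor

  public static class ScannerIdGen {
    final String name;
    ScannerIdGen(String name) {
      if (name == null) throw new IllegalArgumentException("serverName not set yet");
      this.name = name;
    }
  }

  public RpcServices() {
    // this.gen = new ScannerIdGen(serverName);  // would throw: serverName is null here
  }

  public void setServerName(String n) { this.serverName = n; }

  public void start() {
    // Safe: serverName has been assigned by the time start() runs.
    this.gen = new ScannerIdGen(serverName);
  }

  public ScannerIdGen gen() { return gen; }
}
```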

> Scanner id should include ServerName of region server
> -
>
> Key: HBASE-18122
> URL: https://issues.apache.org/jira/browse/HBASE-18122
> Project: HBase
>  Issue Type: Bug
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-18122.v01.patch, HBASE-18122.v02.patch, 
> HBASE-18122.v03.patch
>
>
> Now the scanner id is a long number from 1 to max in a region server. Each 
> new scanner gets a scanner id.
> If a client holds a scanner whose id is x, and the RS restarts and its scanner 
> id counter climbs back to x or a little beyond, there will be a scanner id 
> collision.
> So the scanner id should not be the same across RS restarts. We can 
> add the start timestamp as the highest several bits of the scanner id uint64.
> And because HBASE-18121 is not easy to fix and there are many clients with 
> old versions, we can also encode the server host:port into the scanner id.
> So we can use ServerName.
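The proposal above can be sketched as follows. The bit widths and the way the server identity is mixed into the prefix are illustrative assumptions, not necessarily what the patch does: the high bits carry the server incarnation (ServerName hash combined with start timestamp), the low bits carry a per-server sequence, so ids from different incarnations of a region server cannot collide.

```java
public class ScannerIdGenerator {
  private static final int SEQ_BITS = 32;  // low bits: per-server sequence
  private final long prefix;               // high bits: server incarnation
  private long nextSeq = 0;

  public ScannerIdGenerator(String serverName, long startTimestamp) {
    // Mix a host,port,startcode-style identity with the server start time.
    long id = serverName.hashCode() ^ startTimestamp;
    this.prefix = id << SEQ_BITS;
  }

  /** New ids never repeat within one server incarnation (until seq wraps). */
  public synchronized long newScannerId() {
    return prefix | (nextSeq++ & ((1L << SEQ_BITS) - 1));
  }

  /** High bits identify the server incarnation that issued the id. */
  public static long prefixOf(long scannerId) {
    return scannerId >>> SEQ_BITS;
  }
}
```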





[jira] [Created] (HBASE-18129) truncate_preserve fails when the truncate method doesn't exist on the master

2017-05-26 Thread Guangxu Cheng (JIRA)
Guangxu Cheng created HBASE-18129:
-

 Summary: truncate_preserve fails when the truncate method doesn't 
exist on the master
 Key: HBASE-18129
 URL: https://issues.apache.org/jira/browse/HBASE-18129
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 1.2.5, 2.0.0
Reporter: Guangxu Cheng
Assignee: Guangxu Cheng


Recently, I ran a rolling upgrade from HBase 0.98.x to HBase 1.2.5. While the 
master had not been upgraded yet, I tried to truncate a table with the 1.2.5 
truncate_preserve command, but it failed.
{code}
hbase(main):001:0> truncate_preserve 'cf_logs'
Truncating 'cf_logs' table (it may take a while):
 - Disabling table...
 - Truncating table...
 - Dropping table...
 - Creating table with region boundaries...

ERROR: no method 'createTable' for arguments 
(org.apache.hadoop.hbase.HTableDescriptor,org.jruby.java.proxies.ArrayJavaProxy)
 on Java::OrgApacheHadoopHbaseClient::HBaseAdmin
{code}
After checking the code and commit history, I found that HBASE-12833 introduced 
this bug, so we should fix it.





[jira] [Commented] (HBASE-18122) Scanner id should include ServerName of region server

2017-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027208#comment-16027208
 ] 

Hadoop QA commented on HBASE-18122:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 33m 43s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
31s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m 38s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 3s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 28s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.balancer.TestRegionLocationFinder |
|   | hadoop.hbase.ipc.TestNettyRpcServer |
|   | hadoop.hbase.procedure.TestProcedureManager |
|   | hadoop.hbase.master.locking.TestLockManager |
|   | hadoop.hbase.master.locking.TestLockProcedure |
|   | hadoop.hbase.regionserver.TestHRegionFileSystem |
|   | hadoop.hbase.backup.TestBackupHFileCleaner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870184/HBASE-18122.v02.patch 
|
| JIRA Issue | HBASE-18122 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 638f28e64a5f 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / efc7edc |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6977/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/6977/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 

[jira] [Commented] (HBASE-18027) Replication should respect RPC size limits when batching edits

2017-05-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027200#comment-16027200
 ] 

Lars Hofhansl commented on HBASE-18027:
---

Looks good. +1

> Replication should respect RPC size limits when batching edits
> --
>
> Key: HBASE-18027
> URL: https://issues.apache.org/jira/browse/HBASE-18027
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-18027-branch-1.patch, HBASE-18027-branch-1.patch, 
> HBASE-18027-branch-1.patch, HBASE-18027.patch, HBASE-18027.patch, 
> HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, 
> HBASE-18027.patch
>
>
> In HBaseInterClusterReplicationEndpoint#replicate we try to replicate in 
> batches. We create N lists. N is the minimum of configured replicator 
> threads, number of 100-waledit batches, or number of current sinks. Every 
> pending entry in the replication context is then placed in order by hash of 
> encoded region name into one of these N lists. Each of the N lists is then 
> sent all at once in one replication RPC. We do not test if the sum of data in 
> each N list will exceed RPC size limits. This code presumes each individual 
> edit is reasonably small. Not checking for aggregate size while assembling 
> the lists into RPCs is an oversight and can lead to replication failure when 
> that assumption is violated.
> We can fix this by generating as many replication RPC calls as we need to 
> drain a list, keeping each RPC under limit, instead of assuming the whole 
> list will fit in one.
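The proposed fix can be sketched with plain sizes standing in for serialized WAL edits. The names below are illustrative, not the actual HBaseInterClusterReplicationEndpoint code: instead of shipping a whole list in one RPC, consecutive entries are grouped into chunks whose aggregate size stays under the RPC limit.

```java
import java.util.ArrayList;
import java.util.List;

public class RpcBatcher {
  /**
   * Splits entry sizes into consecutive batches whose sums stay <= maxRpcSize.
   * A single entry larger than the limit still gets a batch of its own.
   */
  public static List<List<Long>> batchBySize(List<Long> entrySizes, long maxRpcSize) {
    List<List<Long>> batches = new ArrayList<>();
    List<Long> current = new ArrayList<>();
    long currentSize = 0;
    for (long size : entrySizes) {
      // Flush the current batch before this entry would push it over the limit.
      if (!current.isEmpty() && currentSize + size > maxRpcSize) {
        batches.add(current);
        current = new ArrayList<>();
        currentSize = 0;
      }
      current.add(size);
      currentSize += size;
    }
    if (!current.isEmpty()) {
      batches.add(current);
    }
    return batches;
  }
}
```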





[jira] [Commented] (HBASE-18122) Scanner id should include ServerName of region server

2017-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027195#comment-16027195
 ] 

Hadoop QA commented on HBASE-18122:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
32s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
26s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
11s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
57m 42s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 41s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 113m 27s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.backup.TestBackupHFileCleaner |
|   | hadoop.hbase.regionserver.TestHRegionFileSystem |
|   | hadoop.hbase.master.locking.TestLockProcedure |
|   | hadoop.hbase.master.locking.TestLockManager |
|   | hadoop.hbase.ipc.TestNettyRpcServer |
|   | hadoop.hbase.procedure.TestProcedureManager |
|   | hadoop.hbase.master.balancer.TestRegionLocationFinder |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870045/HBASE-18122.v01.patch 
|
| JIRA Issue | HBASE-18122 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux d82ac962b84d 4.8.3-std-1 #1 SMP Fri Oct 21 11:15:43 UTC 2016 
x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 564c193 |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6976/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/6976/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 

[jira] [Commented] (HBASE-15903) Delete Object

2017-05-26 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027186#comment-16027186
 ] 

Ted Yu commented on HBASE-15903:


Regarding the SetTimeStamp(timestamp) call, I referenced the following method of 
Delete.java:
{code}
  public Delete setTimestamp(long timestamp) {
{code}
Can you elaborate on the correct call?

> Delete Object
> -
>
> Key: HBASE-15903
> URL: https://issues.apache.org/jira/browse/HBASE-15903
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
>Assignee: Ted Yu
> Attachments: 15903.v2.txt, 15903.v4.txt, 
> HBASE-15903.HBASE-14850.v1.patch
>
>
> Patch for creating Delete objects. These Delete objects are used by the Table 
> implementation to delete a rowkey from a table.





[jira] [Updated] (HBASE-17678) ColumnPaginationFilter in a FilterList gives different results when using MUST_PASS_ONE vs MUST_PASS_ALL and a cell has multiple values for a given timestamp

2017-05-26 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-17678:
-
Attachment: TestColumnPaginationFilterDemo.java

> ColumnPaginationFilter in a FilterList gives different results when using 
> MUST_PASS_ONE vs MUST_PASS_ALL and a cell has multiple values for a given 
> timestamp
> -
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element filterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected since there is 
> only a single filter in the list, and I believe that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the joined filter and not 
> the behavior of any one of the individual filters. If this is not a bug then 
> it would be nice if the documentation is updated to explain this nuanced 
> behavior.
> I know that there was a decision made in an earlier Hbase version to keep 
> multiple cells with the same timestamp. This is generally fine but presents 
> an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 'ns:tbl','row','family:name','Jane',1
> {code}
> Then, use a Scala client as:
> {code:none}
> import org.apache.hadoop.hbase.filter._
> import org.apache.hadoop.hbase.util.Bytes
> import org.apache.hadoop.hbase.client._
> import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
> import scala.collection.mutable._
> val config = HBaseConfiguration.create()
> config.set("hbase.zookeeper.quorum", "localhost")
> config.set("hbase.zookeeper.property.clientPort", "2181")
> val connection = ConnectionFactory.createConnection(config)
> val logicalOp = FilterList.Operator.MUST_PASS_ONE
> val limit = 1
> var resultsList = ListBuffer[String]()
> for (offset <- 0 to 20 by limit) {
>   val table = connection.getTable(TableName.valueOf("ns:tbl"))
>   val paginationFilter = new ColumnPaginationFilter(limit,offset)
>   val filterList: FilterList = new FilterList(logicalOp,paginationFilter)
>   println("@ filterList = "+filterList)
>   val results = table.get(new 
> Get(Bytes.toBytes("row")).setFilter(filterList))
>   val cells = results.rawCells()
>   if (cells != null) {
>   for (cell <- cells) {
> val value = new String(CellUtil.cloneValue(cell))
> val qualifier = new String(CellUtil.cloneQualifier(cell))
> val family = new String(CellUtil.cloneFamily(cell))
> val result = "OFFSET = "+offset+":"+family + "," + qualifier 
> + "," + value + "," + cell.getTimestamp()
> resultsList.append(result)
>   }
>   }
> }
> resultsList.foreach(println)
> {code}
> Here are the results for different limit and logicalOp settings:
> {code:none}
> Limit = 1 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 1 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 1:family,name,Gil,1
> OFFSET = 2:family,name,Jane,1
> OFFSET = 3:family,name,John,1
> Limit = 2 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 2 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 2:family,name,Jane,1
> {code}
> So, it seems that MUST_PASS_ALL gives the expected behavior, but 
> MUST_PASS_ONE does not. Furthermore, MUST_PASS_ONE seems to give only a 
> single (non-duplicated) cell within a page, but not across pages.





[jira] [Updated] (HBASE-17678) ColumnPaginationFilter in a FilterList gives different results when using MUST_PASS_ONE vs MUST_PASS_ALL and a cell has multiple values for a given timestamp

2017-05-26 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-17678:
-
Attachment: (was: TestColumnPaginationFilterDemo.java)

> ColumnPaginationFilter in a FilterList gives different results when using 
> MUST_PASS_ONE vs MUST_PASS_ALL and a cell has multiple values for a given 
> timestamp
> -
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element filterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected since there is 
> only a single filter in the list, and I believe that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the joined filter and not 
> the behavior of any one of the individual filters. If this is not a bug then 
> it would be nice if the documentation is updated to explain this nuanced 
> behavior.
> I know that there was a decision made in an earlier Hbase version to keep 
> multiple cells with the same timestamp. This is generally fine but presents 
> an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 'ns:tbl','row','family:name','Jane',1
> {code}
> Then, use a Scala client as:
> {code:none}
> import org.apache.hadoop.hbase.filter._
> import org.apache.hadoop.hbase.util.Bytes
> import org.apache.hadoop.hbase.client._
> import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
> import scala.collection.mutable._
> val config = HBaseConfiguration.create()
> config.set("hbase.zookeeper.quorum", "localhost")
> config.set("hbase.zookeeper.property.clientPort", "2181")
> val connection = ConnectionFactory.createConnection(config)
> val logicalOp = FilterList.Operator.MUST_PASS_ONE
> val limit = 1
> var resultsList = ListBuffer[String]()
> for (offset <- 0 to 20 by limit) {
>   val table = connection.getTable(TableName.valueOf("ns:tbl"))
>   val paginationFilter = new ColumnPaginationFilter(limit,offset)
>   val filterList: FilterList = new FilterList(logicalOp,paginationFilter)
>   println("@ filterList = "+filterList)
>   val results = table.get(new 
> Get(Bytes.toBytes("row")).setFilter(filterList))
>   val cells = results.rawCells()
>   if (cells != null) {
>   for (cell <- cells) {
> val value = new String(CellUtil.cloneValue(cell))
> val qualifier = new String(CellUtil.cloneQualifier(cell))
> val family = new String(CellUtil.cloneFamily(cell))
> val result = "OFFSET = "+offset+":"+family + "," + qualifier 
> + "," + value + "," + cell.getTimestamp()
> resultsList.append(result)
>   }
>   }
> }
> resultsList.foreach(println)
> {code}
> Here are the results for different limit and logicalOp settings:
> {code:none}
> Limit = 1 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 1 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 1:family,name,Gil,1
> OFFSET = 2:family,name,Jane,1
> OFFSET = 3:family,name,John,1
> Limit = 2 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 2 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 2:family,name,Jane,1
> {code}
> So, it seems that MUST_PASS_ALL gives the expected behavior, but 
> MUST_PASS_ONE does not. Furthermore, MUST_PASS_ONE seems to give only a 
> single (non-duplicated) cell within a page, but not across pages.





[jira] [Comment Edited] (HBASE-17678) ColumnPaginationFilter in a FilterList gives different results when using MUST_PASS_ONE vs MUST_PASS_ALL and a cell has multiple values for a given timestamp

2017-05-26 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027181#comment-16027181
 ] 

Zheng Hu edited comment on HBASE-17678 at 5/27/17 3:54 AM:
---

Assume that we have the following 4 cells in a table:

{code}
create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
put 'ns:tbl','row','family:name','value-0',1
put 'ns:tbl','row','family:name','value-1',1
put 'ns:tbl','row','family:name','value-2',1
put 'ns:tbl','row','family:name','value-3',1
{code}

and we try to do a Get with a filter:

{code}
get = new Get("row").setFilter(new FilterList(Operator.MUST_PASS_ONE, 
ColumnPaginationFilter(1,1)));
{code}


Let's trace the problem cell by cell through ScanQueryMatcher.match:

step.1 ScanQueryMatcher encounters value-0

For ColumnPaginationFilter, count = 0, offset = 1, limit = 1, so the filter 
returns NEXT_COL.
For the FilterList, the operator is Operator.MUST_PASS_ONE, and since the return 
code of ColumnPaginationFilter is NEXT_COL, the FilterList returns SKIP.

step.2 ScanQueryMatcher encounters value-1

For ColumnPaginationFilter, count = 1, offset = 1, limit = 1, so the filter 
returns INCLUDE_AND_NEXT_COL.
For the FilterList, the operator is Operator.MUST_PASS_ONE, and since the return 
code of ColumnPaginationFilter is INCLUDE_AND_NEXT_COL, the FilterList returns 
INCLUDE_AND_NEXT_COL.

Here we find that ScanQueryMatcher would return value-1 to the user, but in fact 
it shouldn't, because value-0 and value-1 belong to the same column, whose column 
offset is 0 (not the requested offset 1).



was (Author: openinx):
Assume that we have the following 4 cells in a table:

{code}
create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
put 'ns:tbl','row','family:name','value-0',1
put 'ns:tbl','row','family:name','value-1',1
put 'ns:tbl','row','family:name','value-2',1
put 'ns:tbl','row','family:name','value-3',1
{code}

and we try to do a Get with a filter:

{code}
get = new Get("row").setFilter(new FilterList(Operator.MUST_PASS_ONE, 
ColumnPaginationFilter(1,1)));
{code}


Let's trace the problem cell by cell through ScanQueryMatcher.match:

# step.1 ScanQueryMatcher encounters value-0

For ColumnPaginationFilter, count = 0, offset = 1, limit = 1, so the filter 
returns NEXT_COL.
For the FilterList, the operator is Operator.MUST_PASS_ONE, and since the return 
code of ColumnPaginationFilter is NEXT_COL, the FilterList returns SKIP.

# step.2 ScanQueryMatcher encounters value-1

For ColumnPaginationFilter, count = 1, offset = 1, limit = 1, so the filter 
returns INCLUDE_AND_NEXT_COL.
For the FilterList, the operator is Operator.MUST_PASS_ONE, and since the return 
code of ColumnPaginationFilter is INCLUDE_AND_NEXT_COL, the FilterList returns 
INCLUDE_AND_NEXT_COL.

Here we find that ScanQueryMatcher would return value-1 to the user, but in fact 
it shouldn't, because value-0 and value-1 belong to the same column, whose column 
offset is 0 (not the requested offset 1).


> ColumnPaginationFilter in a FilterList gives different results when using 
> MUST_PASS_ONE vs MUST_PASS_ALL and a cell has multiple values for a given 
> timestamp
> -
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element filterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected since there is 
> only a single filter in the list, and I believe that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the joined filter and not 
> the behavior of any one of the individual filters. If this is not a bug then 
> it would be nice if the documentation is updated to explain this nuanced 
> behavior.
> I know that there was a decision made in an earlier Hbase version to keep 
> multiple cells with the same timestamp. This is generally fine but presents 
> an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 

[jira] [Commented] (HBASE-17678) ColumnPaginationFilter in a FilterList gives different results when using MUST_PASS_ONE vs MUST_PASS_ALL and a cell has multiple values for a given timestamp

2017-05-26 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027181#comment-16027181
 ] 

Zheng Hu commented on HBASE-17678:
--

Assume that we have following 4 cells in a table:

{code}
create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
put 'ns:tbl','row','family:name','value-0',1
put 'ns:tbl','row','family:name','value-1',1
put 'ns:tbl','row','family:name','value-2',1
put 'ns:tbl','row','family:name','value-3',1
{code}

and, we try to do a Get by a filter: 

{code}
get = new Get("row").setFilter(new FilterList(Operator.MUST_PASS_ONE, 
ColumnPaginationFilter(1,1)));
{code}


Let's track the problem cell by cell for ScanQueryMatcher.match:

# step.1 ScanQueryMatcher encounters value-0

For ColumnPaginationFilter, count = 0, offset = 1, limit = 1, so the filter 
returns NEXT_COL.
For FilterList, the operator is Operator.MUST_PASS_ONE and the return code of 
ColumnPaginationFilter is NEXT_COL, so the FilterList returns SKIP.

# step.2 ScanQueryMatcher encounters value-1

For ColumnPaginationFilter, count = 1, offset = 1, limit = 1, so the filter 
returns INCLUDE_AND_NEXT_COL.
For FilterList, the operator is Operator.MUST_PASS_ONE and the return code of 
ColumnPaginationFilter is INCLUDE_AND_NEXT_COL, so the FilterList returns 
INCLUDE_AND_NEXT_COL.

Here ScanQueryMatcher returns value-1 to the user, but it should not: value-0 
and value-1 belong to the same column, whose column offset is 0, not the 
requested offset 1.
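This return-code weakening can be reproduced without a cluster by simulating 
the merging logic. The sketch below uses illustrative stand-ins (FilterListSim, 
PaginationFilter, a toy matcher loop), not the real HBase 
FilterList/ColumnPaginationFilter/ScanQueryMatcher classes:

```java
import java.util.ArrayList;
import java.util.List;

public class FilterListSim {
    enum ReturnCode { INCLUDE_AND_NEXT_COL, NEXT_COL, SKIP }
    enum Operator { MUST_PASS_ALL, MUST_PASS_ONE }

    // Mirrors ColumnPaginationFilter(limit = 1, offset = 1): skip the first
    // column, include the second.
    static class PaginationFilter {
        int count = 0;
        final int offset = 1, limit = 1;
        ReturnCode filterCell() {
            if (count >= offset + limit) return ReturnCode.NEXT_COL;
            ReturnCode rc = count < offset ? ReturnCode.NEXT_COL
                                           : ReturnCode.INCLUDE_AND_NEXT_COL;
            count++;
            return rc;
        }
    }

    // Mirrors the FilterList behavior described above: under MUST_PASS_ONE a
    // child's NEXT_COL is weakened to SKIP, so the matcher stays on the column.
    static ReturnCode listFilter(Operator op, PaginationFilter f) {
        ReturnCode rc = f.filterCell();
        if (op == Operator.MUST_PASS_ONE && rc == ReturnCode.NEXT_COL) {
            return ReturnCode.SKIP;
        }
        return rc;
    }

    // A toy ScanQueryMatcher over 4 versions of the single column "name".
    static List<String> scan(Operator op) {
        String[] versions = { "value-0", "value-1", "value-2", "value-3" };
        PaginationFilter f = new PaginationFilter();
        List<String> out = new ArrayList<>();
        for (String v : versions) {
            ReturnCode rc = listFilter(op, f);
            if (rc == ReturnCode.INCLUDE_AND_NEXT_COL) { out.add(v); break; }
            if (rc == ReturnCode.NEXT_COL) break; // only one column: done
            // SKIP: fall through to the next version of the same column
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println("MUST_PASS_ALL: " + scan(Operator.MUST_PASS_ALL));
        System.out.println("MUST_PASS_ONE: " + scan(Operator.MUST_PASS_ONE));
    }
}
```

Under MUST_PASS_ALL the child's NEXT_COL is forwarded and the single column is 
skipped as the offset requests; under MUST_PASS_ONE it is weakened to SKIP, so 
the filter's counter advances once per version and value-1 is wrongly included.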


> ColumnPaginationFilter in a FilterList gives different results when using 
> MUST_PASS_ONE vs MUST_PASS_ALL and a cell has multiple values for a given 
> timestamp
> -
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element filterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected since there is 
> only a single filter in the list, and I would believe that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the joined filter and not 
> the behavior of any one of the individual filters. If this is not a bug then 
> it would be nice if the documentation is updated to explain this nuanced 
> behavior.
> I know that there was a decision made in an earlier Hbase version to keep 
> multiple cells with the same timestamp. This is generally fine but presents 
> an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 'ns:tbl','row','family:name','Jane',1
> {code}
> Then, use a Scala client as:
> {code:none}
> import org.apache.hadoop.hbase.filter._
> import org.apache.hadoop.hbase.util.Bytes
> import org.apache.hadoop.hbase.client._
> import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
> import scala.collection.mutable._
> val config = HBaseConfiguration.create()
> config.set("hbase.zookeeper.quorum", "localhost")
> config.set("hbase.zookeeper.property.clientPort", "2181")
> val connection = ConnectionFactory.createConnection(config)
> val logicalOp = FilterList.Operator.MUST_PASS_ONE
> val limit = 1
> var resultsList = ListBuffer[String]()
> for (offset <- 0 to 20 by limit) {
>   val table = connection.getTable(TableName.valueOf("ns:tbl"))
>   val paginationFilter = new ColumnPaginationFilter(limit,offset)
>   val filterList: FilterList = new FilterList(logicalOp,paginationFilter)
>   println("@ filterList = "+filterList)
>   val results = table.get(new 
> Get(Bytes.toBytes("row")).setFilter(filterList))
>   val cells = results.rawCells()
>   if (cells != null) {
>   for (cell <- cells) {
> val value = new String(CellUtil.cloneValue(cell))
> val qualifier = new String(CellUtil.cloneQualifier(cell))
> val family = new String(CellUtil.cloneFamily(cell))
> val result = "OFFSET = "+offset+":"+family + "," + qualifier 
> + "," + value + "," + cell.getTimestamp()
> 

[jira] [Commented] (HBASE-18118) Default storage policy if not configured cannot be "NONE"

2017-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027179#comment-16027179
 ] 

Hudson commented on HBASE-18118:


FAILURE: Integrated in Jenkins build HBase-HBASE-14614 #250 (See 
[https://builds.apache.org/job/HBase-HBASE-14614/250/])
HBASE-18118 Default storage policy if not configured cannot be "NONE" 
(apurtell: rev 564c193d61cd1f92688a08a3af6d55ce4c4636d8)
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java


> Default storage policy if not configured cannot be "NONE"
> -
>
> Key: HBASE-18118
> URL: https://issues.apache.org/jira/browse/HBASE-18118
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-18118.patch
>
>
> HBase can't use 'NONE' as default storage policy if not configured because 
> HDFS supports no such policy. This policy name was probably available in a 
> precommit or early version of the HDFS side support for heterogeneous 
> storage. Now the best default is 'HOT'. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-14614) Procedure v2: Core Assignment Manager

2017-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027180#comment-16027180
 ] 

Hudson commented on HBASE-14614:


FAILURE: Integrated in Jenkins build HBase-HBASE-14614 #250 (See 
[https://builds.apache.org/job/HBase-HBASE-14614/250/])
HBASE-14614 Procedure v2 - Core Assignment Manager (Matteo Bertozzi) (stack: 
rev 9cd5f2d574cb52ff60494df057de5a5d80b89e34)
* (delete) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestSplitTableRegionProcedure.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/TestIOFencing.java
* (delete) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestWarmupRegion.java
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/ProcedureStore.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableNamespaceManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/TruncateTableProcedure.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/SimpleMasterProcedureManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/MasterProcedureEnv.java
* (edit) hbase-protocol-shaded/src/main/protobuf/Master.proto
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/StateMachineProcedure.java
* (delete) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentListener.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcConnection.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestAddColumnFamilyProcedure.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ResponseConverter.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/DeadServer.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/EnableTableProcedure.java
* (edit) 
hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/QuotaProtos.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestTableDDLProcedureBase.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* (edit) hbase-protocol-shaded/src/main/protobuf/RegionServerStatus.proto
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MetricsAssignmentManager.java
* (edit) 
hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALFile.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/MasterProcedureConstants.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildOverlap.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/RSProcedureDispatcher.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/CloneSnapshotProcedure.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServices.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/MoveRegionProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/AbstractStateMachineNamespaceProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildBase.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java
* (edit) 
hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/AssignmentManagerStatusTmpl.jamon
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestServerBusyException.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotCloneIndependence.java
* (add) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/MergeTableRegionsProcedure.java
* (add) 

[jira] [Commented] (HBASE-17777) TestMemstoreLAB#testLABThreading runs too long for a small test

2017-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027178#comment-16027178
 ] 

Hudson commented on HBASE-1:


FAILURE: Integrated in Jenkins build HBase-HBASE-14614 #250 (See 
[https://builds.apache.org/job/HBase-HBASE-14614/250/])
HBASE-1 TestMemstoreLAB#testLABThreading runs too long for a small 
(ramkrishna: rev 8b5c161cbf0cab4eb250827e20a12acee00b400d)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreLAB.java


> TestMemstoreLAB#testLABThreading runs too long for a small test
> ---
>
> Key: HBASE-1
> URL: https://issues.apache.org/jira/browse/HBASE-1
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-1_1.patch, HBASE-1_2.patch
>
>
> While working on ChunkCreator/ChunkMap I found that the test 
> TestMSLAB#testLABThreading() runs for almost 5 mins, and the whole test is 
> under the smallTest category.
> The reason is that we create 35*2MB chunks from the MSLAB and try writing 
> data to these chunks until they are 50MB in size.
> And while verifying, in order to check that the chunks are not 
> overwritten/overlapped, we verify the content of the buffers.
> So we actually keep comparing a 50MB buffer n number of times. I suggest we 
> change this so that at most we create chunks whose total size is 1MB, or 
> even less, and write smaller cells. By doing this we can drastically reduce 
> the run time of this test, maybe to less than 1 min.





[jira] [Commented] (HBASE-18120) Fix TestAsyncRegionAdminApi

2017-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027177#comment-16027177
 ] 

Hudson commented on HBASE-18120:


FAILURE: Integrated in Jenkins build HBase-HBASE-14614 #250 (See 
[https://builds.apache.org/job/HBase-HBASE-14614/250/])
HBASE-18120 (addendum) Fix TestAsyncRegionAdminApi (zghao: rev 
b076b8e794d19c9d552cff6c14100b1fda0cf520)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionAdminApi.java


> Fix TestAsyncRegionAdminApi
> ---
>
> Key: HBASE-18120
> URL: https://issues.apache.org/jira/browse/HBASE-18120
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
> Attachments: HBASE-18003.v2.patch, HBASE-18120.addendum.patch
>
>
> This test fails for me locally. The patch from HBASE-18003 by [~openinx] 
> fixes my failing test so stealing it from that issue and committing here 
> (stealing it because the boys are working on TestAsyncTableAdminApi too over 
> in HBASE-18003). Thanks for the patch [~openinx]





[jira] [Created] (HBASE-18128) compaction marker could be skipped

2017-05-26 Thread Jingyun Tian (JIRA)
Jingyun Tian created HBASE-18128:


 Summary: compaction marker could be skipped 
 Key: HBASE-18128
 URL: https://issues.apache.org/jira/browse/HBASE-18128
 Project: HBase
  Issue Type: Improvement
  Components: Compaction, regionserver
Reporter: Jingyun Tian


The sequence for a compaction is as follows:
1. Compaction writes new files under the region/.tmp directory (compaction output).
2. Compaction atomically moves the temporary file under the region directory.
3. Compaction appends a WAL edit containing the compaction input and output 
files, then forces a sync on the WAL.
4. Compaction deletes the input files from the region directory.

But if a flush happens between steps 3 and 4 and the regionserver then 
crashes, the compaction marker will be skipped when splitting the log, because 
the sequence id of the compaction marker is smaller than lastFlushedSequenceId:
{code}
if (lastFlushedSequenceId >= entry.getKey().getLogSeqNum()) {
  editsSkipped++;
  continue;
}
{code}
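The failure mode can be illustrated with a self-contained sketch of the skip 
condition above (class and field names are illustrative, not the real HBase 
types):

```java
import java.util.ArrayList;
import java.util.List;

public class CompactionMarkerSkipDemo {
    static class Edit {
        final long seqNum;
        final String type;
        Edit(long seqNum, String type) { this.seqNum = seqNum; this.type = type; }
    }

    // Mirrors the quoted split-log loop: drop every WAL edit whose sequence id
    // is not newer than the region's lastFlushedSequenceId.
    static List<String> replay(List<Edit> walEdits, long lastFlushedSequenceId) {
        List<String> replayed = new ArrayList<>();
        for (Edit e : walEdits) {
            if (lastFlushedSequenceId >= e.seqNum) {
                continue; // editsSkipped++ in the real code
            }
            replayed.add(e.type);
        }
        return replayed;
    }

    public static void main(String[] args) {
        // seq 10 is the compaction marker; the flush between steps 3 and 4
        // advanced lastFlushedSequenceId to 12, past the marker.
        List<Edit> wal = List.of(new Edit(10, "COMPACTION_MARKER"),
                                 new Edit(13, "PUT"));
        System.out.println(replay(wal, 12)); // prints [PUT]: the marker is lost
    }
}
```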





[jira] [Updated] (HBASE-18115) Move SaslServer creation to HBaseSaslRpcServer

2017-05-26 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18115:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master.

Thanks [~stack] for reviewing.

> Move SaslServer creation to HBaseSaslRpcServer
> --
>
> Key: HBASE-18115
> URL: https://issues.apache.org/jira/browse/HBASE-18115
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-18115.patch
>
>






[jira] [Commented] (HBASE-18122) Scanner id should include ServerName of region server

2017-05-26 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027174#comment-16027174
 ] 

Phil Yang commented on HBASE-18122:
---

There are two scenarios:

A client opens a new scanner and the id generated by the RS is 1 (the first 
scanner). If this RS crashes and restarts and another client then opens a new 
scanner, its id is also 1. If the first client sends a next() to this RS, it 
will get the wrong results.

As described in HBASE-18121, the scanner callable will retry when there is an 
exception that is not a DoNotRetryIOE, but if the region moved to another RS, 
the retry will not open a new scanner on the new RS; it will send next() with 
the wrong scanner id.
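The idea in the issue description, tying ids to the server instance by 
reserving the high bits of the 64-bit id, can be sketched as follows. 
ScannerIdGenerator and its fields are hypothetical names, not the actual 
patch, which discussed murmurhash_32 rather than String.hashCode():

```java
import java.util.concurrent.atomic.AtomicLong;

public class ScannerIdGenerator {
    private final long serverHighBits;              // hash of ServerName, shifted
    private final AtomicLong counter = new AtomicLong(0);

    public ScannerIdGenerator(String serverName) {
        // A 32-bit hash of "host,port,startTimestamp" goes in the upper half
        // of the id; hashCode() stands in for murmurhash_32 here.
        this.serverHighBits = ((long) serverName.hashCode()) << 32;
    }

    public long newScannerId() {
        // Low 32 bits increment per scanner; high 32 bits change whenever the
        // server restarts (the start timestamp is part of ServerName), so ids
        // from different server instances do not collide.
        return serverHighBits | (counter.incrementAndGet() & 0xFFFFFFFFL);
    }
}
```

Because the start timestamp is part of ServerName, a restarted RS hashes to 
different high bits, and a stale client id from the previous instance can be 
rejected instead of silently matching a new scanner.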

> Scanner id should include ServerName of region server
> -
>
> Key: HBASE-18122
> URL: https://issues.apache.org/jira/browse/HBASE-18122
> Project: HBase
>  Issue Type: Bug
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-18122.v01.patch, HBASE-18122.v02.patch
>
>
> Now the scanner id is a long number incremented from 1 in a region server. 
> Each new scanner gets its own scanner id.
> If a client holds a scanner whose id is x, and the RS restarts and its 
> scanner id counter reaches x (or a little larger) again, there will be a 
> scanner id collision.
> So the scanner id should not repeat across RS restarts. We can put the RS 
> start timestamp in the highest several bits of the scanner id uint64.
> And because HBASE-18121 is not easy to fix and there are many clients with 
> old versions, we can also encode the server host:port into the scanner id.
> So we can use ServerName.





[jira] [Updated] (HBASE-18122) Scanner id should include ServerName of region server

2017-05-26 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-18122:
--
Attachment: HBASE-18122.v02.patch

Change to use murmurhash_32

> Scanner id should include ServerName of region server
> -
>
> Key: HBASE-18122
> URL: https://issues.apache.org/jira/browse/HBASE-18122
> Project: HBase
>  Issue Type: Bug
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-18122.v01.patch, HBASE-18122.v02.patch
>
>
> Now the scanner id is a long number incremented from 1 in a region server. 
> Each new scanner gets its own scanner id.
> If a client holds a scanner whose id is x, and the RS restarts and its 
> scanner id counter reaches x (or a little larger) again, there will be a 
> scanner id collision.
> So the scanner id should not repeat across RS restarts. We can put the RS 
> start timestamp in the highest several bits of the scanner id uint64.
> And because HBASE-18121 is not easy to fix and there are many clients with 
> old versions, we can also encode the server host:port into the scanner id.
> So we can use ServerName.





[jira] [Updated] (HBASE-18122) Scanner id should include ServerName of region server

2017-05-26 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-18122:
--
Attachment: (was: HBASE-18122.v02.patch)

> Scanner id should include ServerName of region server
> -
>
> Key: HBASE-18122
> URL: https://issues.apache.org/jira/browse/HBASE-18122
> Project: HBase
>  Issue Type: Bug
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-18122.v01.patch
>
>
> Now the scanner id is a long number incremented from 1 in a region server. 
> Each new scanner gets its own scanner id.
> If a client holds a scanner whose id is x, and the RS restarts and its 
> scanner id counter reaches x (or a little larger) again, there will be a 
> scanner id collision.
> So the scanner id should not repeat across RS restarts. We can put the RS 
> start timestamp in the highest several bits of the scanner id uint64.
> And because HBASE-18121 is not easy to fix and there are many clients with 
> old versions, we can also encode the server host:port into the scanner id.
> So we can use ServerName.





[jira] [Updated] (HBASE-18122) Scanner id should include ServerName of region server

2017-05-26 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-18122:
--
Attachment: HBASE-18122.v02.patch

> Scanner id should include ServerName of region server
> -
>
> Key: HBASE-18122
> URL: https://issues.apache.org/jira/browse/HBASE-18122
> Project: HBase
>  Issue Type: Bug
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-18122.v01.patch
>
>
> Now the scanner id is a long number incremented from 1 in a region server. 
> Each new scanner gets its own scanner id.
> If a client holds a scanner whose id is x, and the RS restarts and its 
> scanner id counter reaches x (or a little larger) again, there will be a 
> scanner id collision.
> So the scanner id should not repeat across RS restarts. We can put the RS 
> start timestamp in the highest several bits of the scanner id uint64.
> And because HBASE-18121 is not easy to fix and there are many clients with 
> old versions, we can also encode the server host:port into the scanner id.
> So we can use ServerName.





[jira] [Commented] (HBASE-18003) Fix flaky test TestAsyncTableAdminApi

2017-05-26 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027169#comment-16027169
 ] 

Guanghao Zhang commented on HBASE-18003:


I thought this had been fixed by HBASE-18114, so I will resolve this as a 
duplicate. If it is still flaky, reopen it. Thanks.

> Fix flaky test TestAsyncTableAdminApi
> -
>
> Key: HBASE-18003
> URL: https://issues.apache.org/jira/browse/HBASE-18003
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
> Attachments: HBASE-18003.v1.patch, HBASE-18003.v2.patch, 
> HBASE-18003.v2.patch
>
>
> See 
> https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html





[jira] [Updated] (HBASE-18003) Fix flaky test TestAsyncTableAdminApi

2017-05-26 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18003:
---
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> Fix flaky test TestAsyncTableAdminApi
> -
>
> Key: HBASE-18003
> URL: https://issues.apache.org/jira/browse/HBASE-18003
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
> Attachments: HBASE-18003.v1.patch, HBASE-18003.v2.patch, 
> HBASE-18003.v2.patch
>
>
> See 
> https://builds.apache.org/job/HBASE-Find-Flaky-Tests/lastSuccessfulBuild/artifact/dashboard.html





[jira] [Updated] (HBASE-18114) Update the config of TestAsync*AdminApi to make test stable

2017-05-26 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18114:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master and thanks for reviewing.

> Update the config of TestAsync*AdminApi to make test stable
> ---
>
> Key: HBASE-18114
> URL: https://issues.apache.org/jira/browse/HBASE-18114
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-18114-v1.patch, HBASE-18114-v1.patch, 
> HBASE-18114-v1.patch, HBASE-18114-v2.patch, HBASE-18114-v2.patch, 
> HBASE-18114-v2.patch, HBASE-18114-v2.patch, HBASE-18114-v2.patch, 
> HBASE-18114-v2.patch
>
>
> {code}
> 2017-05-25 17:56:34,967 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=50801] 
> master.HMaster$11(2297): Client=hao//127.0.0.1 disable testModifyColumnFamily
> 2017-05-25 17:56:37,974 INFO  [RpcClient-timer-pool1-t1] 
> client.AsyncHBaseAdmin$TableProcedureBiConsumer(2219): Operation: DISABLE, 
> Table Name: default:testModifyColumnFamily failed with Failed after 
> attempts=3, exceptions: 
> Thu May 25 17:56:35 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=294, waitTime=1008, 
> rpcTimeout=1000
> Thu May 25 17:56:37 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=295, waitTime=1299, 
> rpcTimeout=1000
> Thu May 25 17:56:37 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=296, waitTime=668, 
> rpcTimeout=660
> 2017-05-25 17:56:38,936 DEBUG 
> [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=50801] 
> procedure2.ProcedureExecutor(788): Stored procId=15, owner=hao, 
> state=RUNNABLE:DISABLE_TABLE_PREPARE, DisableTableProcedure 
> table=testModifyColumnFamily
> {code}
> For this disable table procedure, the master returns the procedure id when 
> it submits the procedure to the ProcedureExecutor, and the submission above 
> took 4 seconds. So the disable table call failed because the rpc timeout is 
> 1 second and the retry number is 3.
> For admin operations, I thought we don't need to change the default timeout 
> config in unit tests, and the retries are not needed either (or we can set 
> retries > 1 to test the nonce mechanism). Meanwhile, the default timeout is 
> 60 seconds, so the test type may need to change to LargeTests.
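The arithmetic in the description can be checked with a back-of-the-envelope 
model (plain Java, not HBase code) that assumes each retry starts as soon as 
the previous attempt times out:

```java
public class RetryBudgetDemo {
    // Attempt i (1-based) starts after (i - 1) * rpcTimeoutMs of waiting and
    // can only succeed if the server finishes within that attempt's window,
    // i.e. serverLatencyMs <= i * rpcTimeoutMs.
    static boolean callSucceeds(long serverLatencyMs, long rpcTimeoutMs,
                                int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (serverLatencyMs <= attempt * rpcTimeoutMs) {
                return true;
            }
        }
        return false; // every attempt hits CallTimeoutException
    }

    public static void main(String[] args) {
        // The submit took ~4s; 3 attempts with a 1s rpc timeout cannot cover it.
        System.out.println(callSucceeds(4000, 1000, 3));  // false
        // With the 60s default timeout the same call would go through.
        System.out.println(callSucceeds(4000, 60000, 3)); // true
    }
}
```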





[jira] [Commented] (HBASE-18122) Scanner id should include ServerName of region server

2017-05-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027165#comment-16027165
 ] 

stack commented on HBASE-18122:
---

Patch looks good [~yangzhe1991] Perhaps use MurmurHash instead of md5? Good 
distribution, cheaper than md5, and it yields an int32.

This bit I don't follow sir: "If a client has a scanner whose id is x, when the 
RS restart and the scanner id is also incremented to x or a little larger, 
there will be a scanner id collision.
So the scanner id should now be same during each time the RS restart. We can 
add the start timestamp as the highest several bits in scanner id uint64."

Why a collision? The scannerid is an increment on the rs starttime?

Thanks.

> Scanner id should include ServerName of region server
> -
>
> Key: HBASE-18122
> URL: https://issues.apache.org/jira/browse/HBASE-18122
> Project: HBase
>  Issue Type: Bug
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-18122.v01.patch
>
>
> Now the scanner id is a long number incremented from 1 in a region server. 
> Each new scanner gets its own scanner id.
> If a client holds a scanner whose id is x, and the RS restarts and its 
> scanner id counter reaches x (or a little larger) again, there will be a 
> scanner id collision.
> So the scanner id should not repeat across RS restarts. We can put the RS 
> start timestamp in the highest several bits of the scanner id uint64.
> And because HBASE-18121 is not easy to fix and there are many clients with 
> old versions, we can also encode the server host:port into the scanner id.
> So we can use ServerName.





[jira] [Commented] (HBASE-18115) Move SaslServer creation to HBaseSaslRpcServer

2017-05-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027163#comment-16027163
 ] 

stack commented on HBASE-18115:
---

+1 then.

> Move SaslServer creation to HBaseSaslRpcServer
> --
>
> Key: HBASE-18115
> URL: https://issues.apache.org/jira/browse/HBASE-18115
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-18115.patch
>
>






[jira] [Commented] (HBASE-18059) The scanner order for memstore scanners are wrong

2017-05-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027162#comment-16027162
 ] 

stack commented on HBASE-18059:
---

[~tianjingyun] [~appy] said he'd be by here. I'd value his opinion above mine.  
It might be a while because he is in exotic locations currently (He is back 
Weds..).

> The scanner order for memstore scanners are wrong
> -
>
> Key: HBASE-18059
> URL: https://issues.apache.org/jira/browse/HBASE-18059
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, scan, Scanners
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Jingyun Tian
>Priority: Critical
> Fix For: 2.0.0
>
>
> This is comments for KeyValueScanner.getScannerOrder
> {code:title=KeyValueScanner.java}
>   /**
>* Get the order of this KeyValueScanner. This is only relevant for 
> StoreFileScanners and
>* MemStoreScanners (other scanners simply return 0). This is required for 
> comparing multiple
>* files to find out which one has the latest data. StoreFileScanners are 
> ordered from 0
>* (oldest) to newest in increasing order. MemStoreScanner gets LONG.max 
> since it always
>* contains freshest data.
>*/
>   long getScannerOrder();
> {code}
> As we may now have multiple memstore scanners, I think the right way to 
> assign scanner order for memstore scanners is to order them from 
> Long.MAX_VALUE in decreasing order.
> But in CompactingMemStore and DefaultMemStore, the scanner order for 
> memstore scanners also starts from 0, which will be mixed up with the 
> StoreFileScanners.





[jira] [Updated] (HBASE-18122) Scanner id should include ServerName of region server

2017-05-26 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18122:
--
Status: Patch Available  (was: Open)

> Scanner id should include ServerName of region server
> -
>
> Key: HBASE-18122
> URL: https://issues.apache.org/jira/browse/HBASE-18122
> Project: HBase
>  Issue Type: Bug
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-18122.v01.patch
>
>
> Now the scanner id is a long number incremented from 1 in a region server. 
> Each new scanner gets its own scanner id.
> If a client holds a scanner whose id is x, and the RS restarts and its 
> scanner id counter reaches x (or a little larger) again, there will be a 
> scanner id collision.
> So the scanner id should not repeat across RS restarts. We can put the RS 
> start timestamp in the highest several bits of the scanner id uint64.
> And because HBASE-18121 is not easy to fix and there are many clients with 
> old versions, we can also encode the server host:port into the scanner id.
> So we can use ServerName.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18119) Improve HFile readability and modify ChecksumUtil log level

2017-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027159#comment-16027159
 ] 

Hadoop QA commented on HBASE-18119:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 40s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
30s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
46s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 38s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 107m 46s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 148m 32s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870169/HBASE-18119-v1.patch |
| JIRA Issue | HBASE-18119 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux d95a55bf7a93 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 564c193 |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6975/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6975/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Improve HFile readability and modify ChecksumUtil log level
> ---
>
> Key: HBASE-18119
> URL: https://issues.apache.org/jira/browse/HBASE-18119
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Qilin Cao
> 

[jira] [Commented] (HBASE-15903) Delete Object

2017-05-26 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027154#comment-16027154
 ] 

Enis Soztutar commented on HBASE-15903:
---

Looks good overall. 
bq. In patch v2, I enhanced ClientTest.PutGet with deleting the row just 
written and verifying that subsequent Get returns nothing.
Can you please separate that into a different test method specifically for 
deletes. 

- Also, can you please add a delete-test.cc similar to put-test.cc, 
get-test.cc, etc (to test Delete object, not end to end). 

- You can name this method toMutateRequest(). If not, name it 
DeleteToMutateRequest. 
{code}
+  static std::unique_ptr DelToMutateRequest(const Delete &del, const 
std::string &table_name);
{code}

- On the Java side, we do not clear the family map inside the 
addFamilyVersion() method: 
{code}
+Delete& Delete::AddFamilyVersion(const std::string& family, int64_t timestamp) 
{
+const auto &it = family_map_.find(family);
+if (family_map_.end() != it) {
+it->second.clear();
{code}

- I know this comes from put.cc, but I'm not sure whether this is safe: 
{code}
+  family_map_[cell->Family()].push_back(std::move(cell));
{code} 
If the family is not initialized beforehand, will it segfault? Can you please check. 

- These calls are not right: 
{code}
+SetTimeStamp(timestamp);
{code}

- Can you also test all forms of delete (delete row, delete column, etc). 
- Can you please copy-paste the javadocs from the java side for the Delete 
object's methods. 
- There is Table::Delete(std::vector) to be done after this and the 
multi-put patch. Let's create a jira so that we don't forget about it. 

> Delete Object
> -
>
> Key: HBASE-15903
> URL: https://issues.apache.org/jira/browse/HBASE-15903
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
>Assignee: Ted Yu
> Attachments: 15903.v2.txt, 15903.v4.txt, 
> HBASE-15903.HBASE-14850.v1.patch
>
>
> Patch for creating Delete objects. These Delete objects are used by the Table 
> implementation to delete rowkey from a table.





[jira] [Commented] (HBASE-18042) Client Compatibility breaks between versions 1.2 and 1.3

2017-05-26 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027151#comment-16027151
 ] 

Duo Zhang commented on HBASE-18042:
---

Oh I do not have time either... Will be out for a week...

Let me finish this issue first.

For branch-1.3 all UTs passed, and for branch-1, the failed UT is 
TestReplicasClient.testCancelOfMultiGet. Let me check.

> Client Compatibility breaks between versions 1.2 and 1.3
> 
>
> Key: HBASE-18042
> URL: https://issues.apache.org/jira/browse/HBASE-18042
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, scan
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Karan Mehta
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-18042-branch-1.3.patch, 
> HBASE-18042-branch-1.3-v1.patch, HBASE-18042-branch-1.patch, 
> HBASE-18042-branch-1.patch, HBASE-18042-branch-1-v1.patch, 
> HBASE-18042-branch-1-v1.patch, HBASE-18042.patch, HBASE-18042-v1.patch, 
> HBASE-18042-v2.patch
>
>
> OpenTSDB uses AsyncHBase as its client rather than the traditional HBase 
> client. From version 1.2 to 1.3, the {{ClientProtos}} changed: newer fields 
> were added to the {{ScanResponse}} proto.
> A typical Scan request in 1.2 requires the caller to make an OpenScanner 
> request, GetNextRows requests, and a CloseScanner request, based on the 
> {{more_rows}} boolean field in the {{ScanResponse}} proto.
> However, in 1.3 a new parameter {{more_results_in_region}} was added, which 
> limits the results per region, so the client now has to manage sending all 
> the requests for each region. Furthermore, if the results from a particular 
> region are exhausted, the {{ScanResponse}} will set 
> {{more_results_in_region}} to false, but {{more_results}} can still be true. 
> Whenever the former is set to false, the {{RegionScanner}} will also be 
> closed.
> OpenTSDB makes an OpenScanner request and receives all its results in the 
> first {{ScanResponse}} itself, creating the condition described in the 
> paragraph above. Since {{more_rows}} is true, it will proceed to send the 
> next request, at which point {{RSRpcServices}} will throw 
> {{UnknownScannerException}}. The protobuf client compatibility is 
> maintained, but the expected behavior is modified.
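The scan contract described above can be made concrete with a small self-contained simulation. ScanResponse here, the region list, and all method names are toy stand-ins invented for the sketch, not the real HBase or AsyncHBase classes.

```java
import java.util.*;

// Self-contained simulation of the 1.3 scan contract: a client must stop
// using a scanner id once more_results_in_region is false, and only end
// the whole scan when more_results is also false.
public class ScanContractDemo {
  static class ScanResponse {
    final List<String> rows;
    final boolean moreResultsInRegion;  // false => RegionScanner is closed
    final boolean moreResults;          // false => whole scan is done
    ScanResponse(List<String> rows, boolean inRegion, boolean more) {
      this.rows = rows; this.moreResultsInRegion = inRegion; this.moreResults = more;
    }
  }

  // Two fake "regions", served one row per response.
  static final List<List<String>> regions = List.of(List.of("a", "b"), List.of("c"));
  static int region = 0, offset = 0;

  static ScanResponse next() {
    List<String> r = regions.get(region);
    String row = r.get(offset++);
    boolean inRegion = offset < r.size();
    boolean more = inRegion || region < regions.size() - 1;
    return new ScanResponse(List.of(row), inRegion, more);
  }

  public static List<String> scanAll() {
    region = 0; offset = 0;
    List<String> out = new ArrayList<>();
    while (true) {
      ScanResponse resp = next();
      out.addAll(resp.rows);
      if (!resp.moreResultsInRegion) {
        if (!resp.moreResults) break;  // whole scan exhausted
        region++; offset = 0;          // "open a new scanner" on the next region
      }
    }
    return out;
  }

  public static void main(String[] args) {
    System.out.println(scanAll()); // prints [a, b, c]
  }
}
```

A pre-1.3-style client that keeps issuing next() calls for a region after `moreResultsInRegion` went false is exactly the failure mode described in the report.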





[jira] [Commented] (HBASE-18059) The scanner order for memstore scanners are wrong

2017-05-26 Thread Jingyun Tian (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027141#comment-16027141
 ] 

Jingyun Tian commented on HBASE-18059:
--

[~stack] sir, do you think the modification is necessary? I think the only 
situation that will cause a problem is when an hfile from a bulk load has the 
same sequence ID as cells in the memstore.

> The scanner order for memstore scanners are wrong
> -
>
> Key: HBASE-18059
> URL: https://issues.apache.org/jira/browse/HBASE-18059
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, scan, Scanners
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Jingyun Tian
>Priority: Critical
> Fix For: 2.0.0
>
>
> This is comments for KeyValueScanner.getScannerOrder
> {code:title=KeyValueScanner.java}
>   /**
>* Get the order of this KeyValueScanner. This is only relevant for 
> StoreFileScanners and
>* MemStoreScanners (other scanners simply return 0). This is required for 
> comparing multiple
>* files to find out which one has the latest data. StoreFileScanners are 
> ordered from 0
>* (oldest) to newest in increasing order. MemStoreScanner gets LONG.max 
> since it always
>* contains freshest data.
>*/
>   long getScannerOrder();
> {code}
> As we may now have multiple memstore scanners, I think the right way to 
> select the scanner order for memstore scanners is to order them from 
> Long.MAX_VALUE in decreasing order.
> But in CompactingMemStore and DefaultMemStore, the scanner order for 
> memstore scanners also starts from 0, which will clash with the 
> StoreFileScanners.
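A toy illustration of the ordering scheme argued for here, with an invented helper method: store file scanners keep 0..n-1 (oldest to newest), while memstore scanners count down from Long.MAX_VALUE so the freshest segment always compares as newest.

```java
// assignOrders is a made-up helper for illustration, not HBase code.
// It returns the scanner orders for numStoreFiles store file scanners
// followed by numMemstoreScanners memstore scanners.
public class ScannerOrderDemo {
  static long[] assignOrders(int numStoreFiles, int numMemstoreScanners) {
    long[] orders = new long[numStoreFiles + numMemstoreScanners];
    for (int i = 0; i < numStoreFiles; i++) {
      orders[i] = i;                                   // oldest file gets 0
    }
    for (int j = 0; j < numMemstoreScanners; j++) {
      orders[numStoreFiles + j] = Long.MAX_VALUE - j;  // freshest segment first
    }
    return orders;
  }
}
```

Counting down from Long.MAX_VALUE keeps the two ranges disjoint, so a memstore scanner can never share an order with a store file scanner, which is the bug being reported.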





[jira] [Commented] (HBASE-18114) Update the config of TestAsync*AdminApi to make test stable

2017-05-26 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027135#comment-16027135
 ] 

Duo Zhang commented on HBASE-18114:
---

+1.

> Update the config of TestAsync*AdminApi to make test stable
> ---
>
> Key: HBASE-18114
> URL: https://issues.apache.org/jira/browse/HBASE-18114
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-18114-v1.patch, HBASE-18114-v1.patch, 
> HBASE-18114-v1.patch, HBASE-18114-v2.patch, HBASE-18114-v2.patch, 
> HBASE-18114-v2.patch, HBASE-18114-v2.patch, HBASE-18114-v2.patch, 
> HBASE-18114-v2.patch
>
>
> {code}
> 2017-05-25 17:56:34,967 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=50801] 
> master.HMaster$11(2297): Client=hao//127.0.0.1 disable testModifyColumnFamily
> 2017-05-25 17:56:37,974 INFO  [RpcClient-timer-pool1-t1] 
> client.AsyncHBaseAdmin$TableProcedureBiConsumer(2219): Operation: DISABLE, 
> Table Name: default:testModifyColumnFamily failed with Failed after 
> attempts=3, exceptions: 
> Thu May 25 17:56:35 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=294, waitTime=1008, 
> rpcTimeout=1000
> Thu May 25 17:56:37 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=295, waitTime=1299, 
> rpcTimeout=1000
> Thu May 25 17:56:37 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=296, waitTime=668, 
> rpcTimeout=660
> 2017-05-25 17:56:38,936 DEBUG 
> [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=50801] 
> procedure2.ProcedureExecutor(788): Stored procId=15, owner=hao, 
> state=RUNNABLE:DISABLE_TABLE_PREPARE, DisableTableProcedure 
> table=testModifyColumnFamily
> {code}
> For this disable table procedure, the master returns the procedure id when 
> it submits the procedure to the ProcedureExecutor, and the submission above 
> took 4 seconds. So the disable table call failed, because the rpc timeout is 
> 1 second and the retry number is 3.
> For admin operations, I think we don't need to change the default timeout 
> config in unit tests, and retries are not needed either (or we can set 
> retries > 1 to test the nonce mechanism). Meanwhile, the default timeout is 
> 60 seconds, so the test type may need to change to LargeTests.





[jira] [Commented] (HBASE-18114) Update the config of TestAsync*AdminApi to make test stable

2017-05-26 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027108#comment-16027108
 ] 

Guanghao Zhang commented on HBASE-18114:


Triggered Hadoop QA 8 times, with no TestAsync*AdminApi failures. [~Apache9] 
Any more concerns?

> Update the config of TestAsync*AdminApi to make test stable
> ---
>
> Key: HBASE-18114
> URL: https://issues.apache.org/jira/browse/HBASE-18114
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-18114-v1.patch, HBASE-18114-v1.patch, 
> HBASE-18114-v1.patch, HBASE-18114-v2.patch, HBASE-18114-v2.patch, 
> HBASE-18114-v2.patch, HBASE-18114-v2.patch, HBASE-18114-v2.patch, 
> HBASE-18114-v2.patch
>
>
> {code}
> 2017-05-25 17:56:34,967 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=50801] 
> master.HMaster$11(2297): Client=hao//127.0.0.1 disable testModifyColumnFamily
> 2017-05-25 17:56:37,974 INFO  [RpcClient-timer-pool1-t1] 
> client.AsyncHBaseAdmin$TableProcedureBiConsumer(2219): Operation: DISABLE, 
> Table Name: default:testModifyColumnFamily failed with Failed after 
> attempts=3, exceptions: 
> Thu May 25 17:56:35 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=294, waitTime=1008, 
> rpcTimeout=1000
> Thu May 25 17:56:37 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=295, waitTime=1299, 
> rpcTimeout=1000
> Thu May 25 17:56:37 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=296, waitTime=668, 
> rpcTimeout=660
> 2017-05-25 17:56:38,936 DEBUG 
> [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=50801] 
> procedure2.ProcedureExecutor(788): Stored procId=15, owner=hao, 
> state=RUNNABLE:DISABLE_TABLE_PREPARE, DisableTableProcedure 
> table=testModifyColumnFamily
> {code}
> For this disable table procedure, the master returns the procedure id when 
> it submits the procedure to the ProcedureExecutor, and the submission above 
> took 4 seconds. So the disable table call failed, because the rpc timeout is 
> 1 second and the retry number is 3.
> For admin operations, I think we don't need to change the default timeout 
> config in unit tests, and retries are not needed either (or we can set 
> retries > 1 to test the nonce mechanism). Meanwhile, the default timeout is 
> 60 seconds, so the test type may need to change to LargeTests.





[jira] [Commented] (HBASE-18079) [C++] Optimize ClientBootstrap in ConnectionPool

2017-05-26 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027104#comment-16027104
 ] 

Enis Soztutar commented on HBASE-18079:
---

[~xiaobingo] tests are failing for me with this patch, especially: 
{code}
FAILURE TestConnectionPool TestOnlyCreateMultipleDispose: 
connection/connection-pool-test.cc:90
Actual function call count doesn't match EXPECT_CALL((*mock_cf), 
MakeBootstrap())...
 Expected: to be called twice
   Actual: called once - unsatisfied and active
{code}


> [C++] Optimize ClientBootstrap in ConnectionPool
> 
>
> Key: HBASE-18079
> URL: https://issues.apache.org/jira/browse/HBASE-18079
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HBASE-18079-HBASE-14850.000.patch
>
>
> ConnectionPool creates an instance of wangle::ClientBootstrap for every new 
> connection (i.e. Wangle pipeline) and caches it thereafter, which is 
> unnecessary. Instead, ConnectionPool can maintain a single ClientBootstrap 
> instance as a member.





[jira] [Commented] (HBASE-18115) Move SaslServer creation to HBaseSaslRpcServer

2017-05-26 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027075#comment-16027075
 ] 

Duo Zhang commented on HBASE-18115:
---

{quote}
We have SASL tests?
{quote}

Yes. See TestSecureIPC.

> Move SaslServer creation to HBaseSaslRpcServer
> --
>
> Key: HBASE-18115
> URL: https://issues.apache.org/jira/browse/HBASE-18115
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-18115.patch
>
>






[jira] [Updated] (HBASE-18119) Improve HFile readability and modify ChecksumUtil log level

2017-05-26 Thread Qilin Cao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qilin Cao updated HBASE-18119:
--
Attachment: (was: HBASE-18119-v1.patch)

> Improve HFile readability and modify ChecksumUtil log level
> ---
>
> Key: HBASE-18119
> URL: https://issues.apache.org/jira/browse/HBASE-18119
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Qilin Cao
>Assignee: Qilin Cao
>Priority: Minor
> Attachments: HBASE-18119-v1.patch
>
>
> The HFile.checkHFileVersion method was confusing to read, so I changed the 
> source code. At the same time, I changed the ChecksumUtil log level from 
> info to trace.





[jira] [Updated] (HBASE-18119) Improve HFile readability and modify ChecksumUtil log level

2017-05-26 Thread Qilin Cao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qilin Cao updated HBASE-18119:
--
Attachment: HBASE-18119-v1.patch

> Improve HFile readability and modify ChecksumUtil log level
> ---
>
> Key: HBASE-18119
> URL: https://issues.apache.org/jira/browse/HBASE-18119
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Qilin Cao
>Assignee: Qilin Cao
>Priority: Minor
> Attachments: HBASE-18119-v1.patch
>
>
> The HFile.checkHFileVersion method was confusing to read, so I changed the 
> source code. At the same time, I changed the ChecksumUtil log level from 
> info to trace.





[jira] [Commented] (HBASE-15570) renewable delegation tokens for long-lived spark applications

2017-05-26 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027001#comment-16027001
 ] 

Mike Drob commented on HBASE-15570:
---

[~busbey] - I think SPARK-12523 is the one that fixes this, not SPARK-14743, 
available in Spark 2.0

I'm not sure that a fix to the Spark 1.6.z line will be coming.

> renewable delegation tokens for long-lived spark applications
> -
>
> Key: HBASE-15570
> URL: https://issues.apache.org/jira/browse/HBASE-15570
> Project: HBase
>  Issue Type: Improvement
>  Components: spark
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>
> Right now our spark integration works on secure clusters by getting 
> delegation tokens and sending them to the executors. Unfortunately, 
> applications that need to run for longer than the delegation token lifetime 
> (by default 7 days) will fail.
> In particular, this is an issue for Spark Streaming applications. Since they 
> expect to run indefinitely, we should have a means for renewing the 
> delegation tokens.





[jira] [Commented] (HBASE-18118) Default storage policy if not configured cannot be "NONE"

2017-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026950#comment-16026950
 ] 

Hudson commented on HBASE-18118:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3082 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3082/])
HBASE-18118 Default storage policy if not configured cannot be "NONE" 
(apurtell: rev 564c193d61cd1f92688a08a3af6d55ce4c4636d8)
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java


> Default storage policy if not configured cannot be "NONE"
> -
>
> Key: HBASE-18118
> URL: https://issues.apache.org/jira/browse/HBASE-18118
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-18118.patch
>
>
> HBase can't use 'NONE' as default storage policy if not configured because 
> HDFS supports no such policy. This policy name was probably available in a 
> precommit or early version of the HDFS side support for heterogeneous 
> storage. Now the best default is 'HOT'. 





[jira] [Updated] (HBASE-18027) Replication should respect RPC size limits when batching edits

2017-05-26 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-18027:
---
Attachment: HBASE-18027-branch-1.patch
HBASE-18027.patch

> Replication should respect RPC size limits when batching edits
> --
>
> Key: HBASE-18027
> URL: https://issues.apache.org/jira/browse/HBASE-18027
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-18027-branch-1.patch, HBASE-18027-branch-1.patch, 
> HBASE-18027-branch-1.patch, HBASE-18027.patch, HBASE-18027.patch, 
> HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, 
> HBASE-18027.patch
>
>
> In HBaseInterClusterReplicationEndpoint#replicate we try to replicate in 
> batches. We create N lists, where N is the minimum of the configured 
> replicator threads, the number of 100-WALEdit batches, or the number of 
> current sinks. Every pending entry in the replication context is then placed 
> in order, by hash of encoded region name, into one of these N lists. Each of 
> the N lists is then sent all at once in one replication RPC. We do not check 
> whether the sum of the data in each list exceeds the RPC size limit; this 
> code presumes each individual edit is reasonably small. Not checking the 
> aggregate size while assembling the lists into RPCs is an oversight and can 
> lead to replication failure when that assumption is violated.
> We can fix this by generating as many replication RPC calls as we need to 
> drain a list, keeping each RPC under the limit, instead of assuming the 
> whole list will fit in one.
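The proposed fix amounts to size-aware chunking. Below is a minimal generic sketch; chunkBySize, the type parameter, and the size callback are invented for illustration and are not the names used by the actual patch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.ToLongFunction;

// Sketch of size-aware batching: drain a list of edits into as many
// batches as needed, each kept under a byte cap, instead of assuming
// the whole list fits in a single RPC.
public class SizeAwareBatching {
  static <T> List<List<T>> chunkBySize(List<T> edits,
                                       ToLongFunction<T> sizeOf,
                                       long maxBytes) {
    List<List<T>> batches = new ArrayList<>();
    List<T> current = new ArrayList<>();
    long currentBytes = 0;
    for (T e : edits) {
      long s = sizeOf.applyAsLong(e);
      // Flush the current batch if adding this edit would exceed the cap.
      // A single oversized edit still goes out alone in its own batch.
      if (!current.isEmpty() && currentBytes + s > maxBytes) {
        batches.add(current);
        current = new ArrayList<>();
        currentBytes = 0;
      }
      current.add(e);
      currentBytes += s;
    }
    if (!current.isEmpty()) {
      batches.add(current);
    }
    return batches;
  }
}
```

Each resulting batch can then be shipped as one replication RPC, which is the behavior the description asks for.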





[jira] [Commented] (HBASE-18127) Allow regionobserver to optionally skip postPut/postDelete when postBatchMutate was called

2017-05-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026889#comment-16026889
 ] 

Lars Hofhansl commented on HBASE-18127:
---

OK... Moved to an HBase issue.

> Allow regionobserver to optionally skip postPut/postDelete when 
> postBatchMutate was called
> --
>
> Key: HBASE-18127
> URL: https://issues.apache.org/jira/browse/HBASE-18127
> Project: HBase
>  Issue Type: New Feature
>Reporter: Lars Hofhansl
>
> Right now a RegionObserver can only statically implement one or the other. In 
> scenarios where we need to work sometimes on the single postPut and 
> postDelete hooks and sometimes on the batchMutate hooks, there is currently 
> no place to convey this information to the single hooks. I.e. the work has 
> been done in the batch, skip the single hooks.
> There are various solutions:
> 1. Allow some state to be passed _per operation_.
> 2. Remove the single hooks and always only call batch hooks (with a default 
> wrapper for the single hooks).
> 3. more?
> [~apurtell], what we had discussed a few days back.
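Solution 1 above (per-operation state) can be illustrated with a toy flag carried on each operation. The Op class, the attribute name, and the hook signatures are all made up for this sketch and are far simpler than the real RegionObserver API.

```java
import java.util.*;

// Toy illustration of solution 1: the batch hook stamps each operation,
// and the single-op hook skips work when it sees the stamp.
public class BatchFlagDemo {
  static final String DONE_IN_BATCH = "_done_in_batch_";

  // Stand-in for a Mutation carrying per-operation attributes.
  static class Op { final Map<String, byte[]> attrs = new HashMap<>(); }

  static int singleHookRuns = 0;

  static void postBatchMutate(List<Op> batch) {
    for (Op op : batch) {
      op.attrs.put(DONE_IN_BATCH, new byte[] {1});  // mark: handled in batch
    }
  }

  static void postPut(Op op) {
    if (op.attrs.containsKey(DONE_IN_BATCH)) {
      return;  // work already done in the batch hook; skip the single hook
    }
    singleHookRuns++;
  }

  public static void main(String[] args) {
    List<Op> batch = List.of(new Op(), new Op());
    postBatchMutate(batch);
    batch.forEach(BatchFlagDemo::postPut);
    System.out.println(singleHookRuns);
  }
}
```

The key property is that the flag lives on the operation itself, so no extra coordination between the two hooks is needed.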





[jira] [Moved] (HBASE-18127) Allow regionobserver to optionally skip postPut/postDelete when postBatchMutate was called

2017-05-26 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl moved PHOENIX-3892 to HBASE-18127:


Issue Type: New Feature  (was: Bug)
   Key: HBASE-18127  (was: PHOENIX-3892)
   Project: HBase  (was: Phoenix)

> Allow regionobserver to optionally skip postPut/postDelete when 
> postBatchMutate was called
> --
>
> Key: HBASE-18127
> URL: https://issues.apache.org/jira/browse/HBASE-18127
> Project: HBase
>  Issue Type: New Feature
>Reporter: Lars Hofhansl
>
> Right now a RegionObserver can only statically implement one or the other. In 
> scenarios where we need to work sometimes on the single postPut and 
> postDelete hooks and sometimes on the batchMutate hooks, there is currently 
> no place to convey this information to the single hooks. I.e. the work has 
> been done in the batch, skip the single hooks.
> There are various solutions:
> 1. Allow some state to be passed _per operation_.
> 2. Remove the single hooks and always only call batch hooks (with a default 
> wrapper for the single hooks).
> 3. more?
> [~apurtell], what we had discussed a few days back.





[jira] [Updated] (HBASE-17893) Allow HBase to build against Hadoop 2.8.0

2017-05-26 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-17893:
--
Attachment: 17893-1.3-backport.txt

This is against branch 1.3. Can be a first step. Maybe we just leave 1.2 and 
earlier.

> Allow HBase to build against Hadoop 2.8.0
> -
>
> Key: HBASE-17893
> URL: https://issues.apache.org/jira/browse/HBASE-17893
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.5
>Reporter: Lars Hofhansl
> Attachments: 17883-1.2-BROKEN.txt, 17893-1.3-backport.txt
>
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) 
> on project hbase-assembly: Error rendering velocity resource. Error invoking 
> method 'get(java.lang.Integer)' in java.util.ArrayList at 
> META-INF/LICENSE.vm[line 1671, column 8]: InvocationTargetException: Index: 
> 0, Size: 0 -> [Help 1]
> {code}
> Then in the generated LICENSE:
> {code}
> This product includes Nimbus JOSE+JWT licensed under the The Apache Software 
> License, Version 2.0.
> ${dep.licenses[0].comments}
> Please check  this License for acceptability here:
> https://www.apache.org/legal/resolved
> If it is okay, then update the list named 'non_aggregate_fine' in the 
> LICENSE.vm file.
> If it isn't okay, then revert the change that added the dependency.
> More info on the dependency:
> com.nimbusds
> nimbus-jose-jwt
> 3.9
> maven central search
> g:com.nimbusds AND a:nimbus-jose-jwt AND v:3.9
> project website
> https://bitbucket.org/connect2id/nimbus-jose-jwt
> project source
> https://bitbucket.org/connect2id/nimbus-jose-jwt
> {code}
> Maybe the problem is just that it says: Apache _Software_ License





[jira] [Commented] (HBASE-18117) Increase resiliency by allowing more parameters for online config change

2017-05-26 Thread Karan Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026864#comment-16026864
 ] 

Karan Mehta commented on HBASE-18117:
-

Another potential issue is ensuring that future users of online-configurable 
parameters implement {{ConfigurationObserver}}. I couldn't find any such 
enforcement in the current framework. Could you please confirm?
[~gaurav.menghani]

> Increase resiliency by allowing more parameters for online config change
> 
>
> Key: HBASE-18117
> URL: https://issues.apache.org/jira/browse/HBASE-18117
> Project: HBase
>  Issue Type: Improvement
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>
> HBASE-8544 adds the feature to change config online without having a server 
> restart. This JIRA is to work on new parameters for the utilizing that 
> feature.
> As [~apurtell] suggested, following are the useful and frequently changing 
> parameters in production.
> - RPC limits, timeouts, and other performance relevant settings
> - Replication limits and batch sizes
> - Region carrying limit
> - WAL retention and cleaning parameters
> I will try to make the RPC timeout parameter online as a part of this JIRA. 
> If it seems suitable then we can extend it to other params.





[jira] [Commented] (HBASE-17893) Allow HBase to build against Hadoop 2.8.0

2017-05-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026863#comment-16026863
 ] 

Lars Hofhansl commented on HBASE-17893:
---

Now if I just backport HBASE-16712, I can actually get it to compile.

> Allow HBase to build against Hadoop 2.8.0
> -
>
> Key: HBASE-17893
> URL: https://issues.apache.org/jira/browse/HBASE-17893
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.5
>Reporter: Lars Hofhansl
> Attachments: 17883-1.2-BROKEN.txt
>
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) 
> on project hbase-assembly: Error rendering velocity resource. Error invoking 
> method 'get(java.lang.Integer)' in java.util.ArrayList at 
> META-INF/LICENSE.vm[line 1671, column 8]: InvocationTargetException: Index: 
> 0, Size: 0 -> [Help 1]
> {code}
> Then in the generated LICENSE:
> {code}
> This product includes Nimbus JOSE+JWT licensed under the The Apache Software 
> License, Version 2.0.
> ${dep.licenses[0].comments}
> Please check  this License for acceptability here:
> https://www.apache.org/legal/resolved
> If it is okay, then update the list named 'non_aggregate_fine' in the 
> LICENSE.vm file.
> If it isn't okay, then revert the change that added the dependency.
> More info on the dependency:
> com.nimbusds
> nimbus-jose-jwt
> 3.9
> maven central search
> g:com.nimbusds AND a:nimbus-jose-jwt AND v:3.9
> project website
> https://bitbucket.org/connect2id/nimbus-jose-jwt
> project source
> https://bitbucket.org/connect2id/nimbus-jose-jwt
> {code}
> Maybe the problem is just that it says: Apache _Software_ License





[jira] [Assigned] (HBASE-18126) Increment class

2017-05-26 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-18126:
--

Assignee: Ted Yu

> Increment class
> ---
>
> Key: HBASE-18126
> URL: https://issues.apache.org/jira/browse/HBASE-18126
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
>
> These Increment objects are used by the Table implementation to perform the 
> increment operation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17893) Allow HBase to build against Hadoop 2.8.0

2017-05-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026853#comment-16026853
 ] 

Lars Hofhansl commented on HBASE-17893:
---

Confirmed that LICENSE seems fine now. Failing on the NOTICE file now (exactly 
where you say it does [~busbey])

> Allow HBase to build against Hadoop 2.8.0
> -
>
> Key: HBASE-17893
> URL: https://issues.apache.org/jira/browse/HBASE-17893
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.5
>Reporter: Lars Hofhansl
> Attachments: 17883-1.2-BROKEN.txt
>
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) 
> on project hbase-assembly: Error rendering velocity resource. Error invoking 
> method 'get(java.lang.Integer)' in java.util.ArrayList at 
> META-INF/LICENSE.vm[line 1671, column 8]: InvocationTargetException: Index: 
> 0, Size: 0 -> [Help 1]
> {code}
> Then, in the generated LICENSE:
> {code}
> This product includes Nimbus JOSE+JWT licensed under the The Apache Software 
> License, Version 2.0.
> ${dep.licenses[0].comments}
> Please check  this License for acceptability here:
> https://www.apache.org/legal/resolved
> If it is okay, then update the list named 'non_aggregate_fine' in the 
> LICENSE.vm file.
> If it isn't okay, then revert the change that added the dependency.
> More info on the dependency:
> com.nimbusds
> nimbus-jose-jwt
> 3.9
> maven central search
> g:com.nimbusds AND a:nimbus-jose-jwt AND v:3.9
> project website
> https://bitbucket.org/connect2id/nimbus-jose-jwt
> project source
> https://bitbucket.org/connect2id/nimbus-jose-jwt
> {code}
> Maybe the problem is just that it says: Apache _Software_ License



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-18126) Increment class

2017-05-26 Thread Ted Yu (JIRA)
Ted Yu created HBASE-18126:
--

 Summary: Increment class
 Key: HBASE-18126
 URL: https://issues.apache.org/jira/browse/HBASE-18126
 Project: HBase
  Issue Type: Sub-task
Reporter: Ted Yu


These Increment objects are used by the Table implementation to perform the 
increment operation.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18090) Improve TableSnapshotInputFormat to allow multiple mappers per region

2017-05-26 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026777#comment-16026777
 ] 

Yi Liang commented on HBASE-18090:
--

BTW, I cannot apply the attached patch to branch-1.3 on my machine, but it 
looks good on this JIRA; see the Hadoop QA results.


> Improve TableSnapshotInputFormat to allow multiple mappers per region
> --
>
> Key: HBASE-18090
> URL: https://issues.apache.org/jira/browse/HBASE-18090
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 1.4.0
>Reporter: Mikhail Antonov
> Attachments: HBASE-18090-branch-1.3-v1.patch
>
>
> TableSnapshotInputFormat runs one map task per region in the table snapshot. 
> This places an unnecessary restriction: the region layout of the original 
> table needs to take the processing resources available to the MR job into 
> consideration. Allowing multiple mappers per region (assuming a reasonably 
> even key distribution) would be useful.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18090) Improve TableSnapshotInputFormat to allow multiple mappers per region

2017-05-26 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026772#comment-16026772
 ] 

Yi Liang commented on HBASE-18090:
--

Hi Mikhail,
  I think the patch is good overall; Ted's comments may still need to be 
addressed. Just one question: what if a user has implemented their own split 
algorithm and wants to use it? Your code does not seem to handle that case.
{code}
+RegionSplitter.SplitAlgorithm splitAlgo = null;
+if 
(RegionSplitter.UniformSplit.class.getSimpleName().equals(conf.get(SPLIT_ALGO)))
 {
+  splitAlgo = new RegionSplitter.UniformSplit();
+} else if 
(RegionSplitter.HexStringSplit.class.getSimpleName().equals(conf.get(SPLIT_ALGO)))
 {
+  splitAlgo = new RegionSplitter.HexStringSplit();
+}
{code}
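The concern about user-supplied split algorithms could be addressed by loading the configured class reflectively instead of matching only the two built-in simple names. A minimal sketch, assuming the configured implementation has a no-arg constructor (the class and method names here are illustrative, not HBase's actual API):

```java
// Hypothetical sketch: resolve a split algorithm by fully-qualified class name
// via reflection, so user-supplied RegionSplitter.SplitAlgorithm
// implementations also work, not just UniformSplit and HexStringSplit.
public class SplitAlgoResolver {
  public static Object resolve(String className) throws Exception {
    // Instantiate whatever class the user configured, via its no-arg constructor.
    return Class.forName(className).getDeclaredConstructor().newInstance();
  }

  public static void main(String[] args) throws Exception {
    // Any class with a public no-arg constructor works; ArrayList stands in
    // for a user-supplied split algorithm here.
    Object algo = resolve("java.util.ArrayList");
    System.out.println(algo.getClass().getName()); // java.util.ArrayList
  }
}
```

The resolved object would then be cast to the expected interface, with a clear error if the configured class does not implement it.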

> Improve TableSnapshotInputFormat to allow multiple mappers per region
> --
>
> Key: HBASE-18090
> URL: https://issues.apache.org/jira/browse/HBASE-18090
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 1.4.0
>Reporter: Mikhail Antonov
> Attachments: HBASE-18090-branch-1.3-v1.patch
>
>
> TableSnapshotInputFormat runs one map task per region in the table snapshot. 
> This places an unnecessary restriction: the region layout of the original 
> table needs to take the processing resources available to the MR job into 
> consideration. Allowing multiple mappers per region (assuming a reasonably 
> even key distribution) would be useful.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-18125) HBase shell disregards spaces at the end of a split key in a split file

2017-05-26 Thread Ashu Pachauri (JIRA)
Ashu Pachauri created HBASE-18125:
-

 Summary: HBase shell disregards spaces at the end of a split key 
in a split file
 Key: HBASE-18125
 URL: https://issues.apache.org/jira/browse/HBASE-18125
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 1.3.1, 2.0.0
Reporter: Ashu Pachauri


When converting row keys to a printable string representation, the Bytes class 
considers SPACE a printable character and prints it as-is, so it's quite 
possible for a row key to end with a space.

When specifying split points in a file, the row keys are not quoted and the 
shell wrapper "admin.rb" strips any whitespace off the row keys:

{code}
 File.foreach(splits_file) do |line|
arg[SPLITS].push(line.strip())
  end
{code}
The correct approach is to use "chomp()" instead of "strip()", which removes 
only carriage returns and newlines. We should assume that the HBase user is 
either using split points printed out by HBase itself (which will not have 
tabs) or is diligent enough not to put tabs at the end of a split point.
What's worse is that it goes undetected and will result in undesirable split 
points.
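The difference between the two trimming behaviors can be shown with a small runnable sketch (in Java here, though the shell wrapper itself is Ruby): chomp-style trimming removes only the line terminator, while strip/trim also removes a trailing space that may be a legitimate part of the split key.

```java
public class ChompVsStrip {
  // Ruby's chomp: remove only a trailing CR/LF, keep any other trailing whitespace.
  static String chomp(String line) {
    return line.replaceAll("\\r?\\n$", "");
  }

  public static void main(String[] args) {
    String line = "splitkey \n";  // split point with a meaningful trailing space
    System.out.println("[" + chomp(line) + "]");  // [splitkey ] - space preserved
    System.out.println("[" + line.trim() + "]");  // [splitkey]  - space silently dropped
  }
}
```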



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18114) Update the config of TestAsync*AdminApi to make test stable

2017-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026746#comment-16026746
 ] 

Hadoop QA commented on HBASE-18114:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
1s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
30s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
29s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
59m 14s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 209m 52s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 295m 38s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestPerColumnFamilyFlush |
| Timed out junit tests | 
org.apache.hadoop.hbase.replication.regionserver.TestWALEntryStream |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870080/HBASE-18114-v2.patch |
| JIRA Issue | HBASE-18114 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 6997ad5663be 4.8.3-std-1 #1 SMP Fri Oct 21 11:15:43 UTC 2016 
x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 8b5c161 |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6973/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/6973/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6973/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6973/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Update the config of TestAsync*AdminApi to make test 

[jira] [Updated] (HBASE-18078) [C++] Harden RPC by handling various communication abnormalities

2017-05-26 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HBASE-18078:
--
Attachment: HBASE-18078.001.patch

v1 resolves #1.

> [C++] Harden RPC by handling various communication abnormalities
> 
>
> Key: HBASE-18078
> URL: https://issues.apache.org/jira/browse/HBASE-18078
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HBASE-18078.000.patch, HBASE-18078.001.patch
>
>
> The RPC layer should handle various communication abnormalities (e.g. 
> connection timeout, server-aborted connection, and so on). Ideally, the 
> corresponding exceptions should be raised and propagated through the handlers 
> of the client pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18059) The scanner order for memstore scanners are wrong

2017-05-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18059:
--
Priority: Critical  (was: Major)

> The scanner order for memstore scanners are wrong
> -
>
> Key: HBASE-18059
> URL: https://issues.apache.org/jira/browse/HBASE-18059
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, scan, Scanners
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Jingyun Tian
>Priority: Critical
> Fix For: 2.0.0
>
>
> These are the comments for KeyValueScanner.getScannerOrder:
> {code:title=KeyValueScanner.java}
>   /**
>* Get the order of this KeyValueScanner. This is only relevant for 
> StoreFileScanners and
>* MemStoreScanners (other scanners simply return 0). This is required for 
> comparing multiple
>* files to find out which one has the latest data. StoreFileScanners are 
> ordered from 0
>* (oldest) to newest in increasing order. MemStoreScanner gets LONG.max 
> since it always
>* contains freshest data.
>*/
>   long getScannerOrder();
> {code}
> As we may now have multiple memstore scanners, I think the right way to 
> assign scanner orders to memstore scanners is to start from Long.MAX_VALUE 
> and decrease.
> But in CompactingMemStore and DefaultMemStore, the scanner order for memstore 
> scanners also starts from 0, which gets mixed up with the StoreFileScanners.
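The proposed numbering can be sketched as follows (a hypothetical illustration of the ordering scheme, not HBase's actual code):

```java
import java.util.ArrayList;
import java.util.List;

public class ScannerOrders {
  // Sketch of the ordering described above: store file scanners count up from
  // 0 (oldest first), while memstore scanners count down from Long.MAX_VALUE
  // (freshest first), so the two ranges can never collide.
  public static List<Long> memstoreOrders(int numMemstoreScanners) {
    List<Long> orders = new ArrayList<>();
    for (int i = 0; i < numMemstoreScanners; i++) {
      orders.add(Long.MAX_VALUE - i);
    }
    return orders;
  }

  public static void main(String[] args) {
    // The freshest memstore segment gets Long.MAX_VALUE, older ones decrease.
    System.out.println(memstoreOrders(3));
  }
}
```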



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18109) Assign system tables first (priority)

2017-05-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18109:
--
Priority: Critical  (was: Major)

> Assign system tables first (priority)
> -
>
> Key: HBASE-18109
> URL: https://issues.apache.org/jira/browse/HBASE-18109
> Project: HBase
>  Issue Type: Sub-task
>  Components: Region Assignment
>Affects Versions: 2.0.0
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.0
>
>
> Need this for stuff like the RSGroup table, etc. Assign these ahead of 
> user-space regions.
> From 'Handle sys table assignment first (e.g. acl, namespace, rsgroup); 
> currently only hbase:meta is first.' of 
> https://docs.google.com/document/d/1eVKa7FHdeoJ1-9o8yZcOTAQbv0u0bblBlCCzVSIn69g/edit#heading=h.oefcyphs0v0x



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18103) [AMv2] If Master gives OPEN to another, if original eventually succeeds, Master will kill it

2017-05-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18103:
--
Priority: Critical  (was: Major)

> [AMv2] If Master gives OPEN to another, if original eventually succeeds, 
> Master will kill it
> 
>
> Key: HBASE-18103
> URL: https://issues.apache.org/jira/browse/HBASE-18103
> Project: HBase
>  Issue Type: Sub-task
>  Components: master, proc-v2
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.0
>
>
> If a RS is slow to open a Region, the Master will give the Region to another 
> RS to open (in this case, there was a massive set of edits to process and a 
> load of StoreFiles to open...). Should the original RS eventually succeed 
> with its open, on reporting the successful open to the Master, the Master 
> currently kills the RS because the region is supposed to be elsewhere.
> This is an easy fix.
> The RS does not fully open a Region until the Master gives it the go, so just 
> close the region if the Master rejects the open.
> See '6.1.1 If Master gives Region to another to Open, old RS will be kill 
> itself on reject by Master; easy fix!' in 
> https://docs.google.com/document/d/1eVKa7FHdeoJ1-9o8yZcOTAQbv0u0bblBlCCzVSIn69g/edit#heading=h.qtfojp9774h



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18110) [AMv2] Reenable tests temporarily disabled

2017-05-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18110:
--
Priority: Critical  (was: Major)

> [AMv2] Reenable tests temporarily disabled
> -
>
> Key: HBASE-18110
> URL: https://issues.apache.org/jira/browse/HBASE-18110
> Project: HBase
>  Issue Type: Sub-task
>  Components: Region Assignment
>Affects Versions: 2.0.0
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.0
>
>
> We disabled tests that didn't make sense or relied on behavior not supported 
> by AMv2. Revisit and reenable after AMv2 gets committed. Here is the set 
> (from 
> https://docs.google.com/document/d/1eVKa7FHdeoJ1-9o8yZcOTAQbv0u0bblBlCCzVSIn69g/edit#heading=h.rsj53tx4vlwj)
> testAllFavoredNodesDead and testAllFavoredNodesDeadMasterRestarted and 
> testMisplacedRegions in TestFavoredStochasticLoadBalancer … not sure what 
> this about.
> testRegionNormalizationMergeOnCluster in TestSimpleRegionNormalizerOnCluster 
> disabled for now till we fix up Merge.
> testMergeWithReplicas in TestRegionMergeTransactionOnCluster because don't 
> know how it is supposed to work.
> Admin#close does not update Master. Causes 
> testHBaseFsckWithFewerMetaReplicaZnodes in TestMetaWithReplicas to fail 
> (Master gets report about server closing when it didn’t run the close -- gets 
> freaked out).
> Disabled/Ignore TestRSGroupsOfflineMode#testOffline; need to dig in on what 
> offline is.
> Disabled/Ignore TestRSGroups.
> All tests that have to do w/ fsck:TestHBaseFsckTwoRS, 
> TestOfflineMetaRebuildBase TestHBaseFsckReplicas, 
> TestOfflineMetaRebuildOverlap, testChangingReplicaCount in 
> TestMetaWithReplicas (internally it is doing fscks which are killing RS)...
> FSCK test testHBaseFsckWithExcessMetaReplicas in TestMetaWithReplicas.
> So is testHBaseFsckWithFewerMetaReplicas in same class.
> TestHBaseFsckOneRS is fsck. Disabled.
> TestOfflineMetaRebuildHole is about rebuilding hole with fsck.
> Master carries meta:
> TestRegionRebalancing is disabled because doesn't consider the fact that 
> Master carries system tables only (fix of average in RegionStates brought out 
> the issue).
> Disabled testMetaAddressChange in TestMetaWithReplicas because presumes can 
> move meta... you can't
> TestAsyncTableGetMultiThreaded wants to move hbase:meta...Balancer does NPEs. 
> AMv2 won't let you move hbase:meta off Master.
> Disabled parts of...testCreateTableWithMultipleReplicas in 
> TestMasterOperationsForRegionReplicas There is an issue w/ assigning more 
> replicas if number of replicas is changed on us. See '/* DISABLED! FOR 
> NOW'.
> Disabled TestCorruptedRegionStoreFile. Depends on a half-implemented reopen 
> of a region when a store file goes missing; TODO.
> testRetainAssignmentOnRestart in TestRestartCluster does not work. AMv2 does 
> retain semantic differently. Fix. TODO.
> TestMasterFailover needs to be rewritten for AMv2. It uses tricks not 
> ordained under AMv2. The test is also hobbled by the fact that we 
> religiously enforce that only the master can carry meta, something we were 
> loose about in the old AM.
> Fix Ignores in TestServerCrashProcedure. Master is different now.
> Offlining is done differently now: Because of this disabled testOfflineRegion 
> in TestAsyncRegionAdminApi
> Skipping delete of table after test in TestAccessController3 because of 
> access issues w/ AMv2. AMv1 seems to crash servers on exit too for same lack 
> of auth perms but AMv2 gets hung up. TODO. See cleanUp method.
> TestHCM#testMulti and TestHCM
> Fix TestMasterMetrics. Stuff is different now around startup which messes up 
> this test. Disabled two of three tests.
> I tried to fix TestMasterBalanceThrottling but it looks like 
> SimpleLoadBalancer is borked whether AMv2 or not.
> Disabled testPickers in TestFavoredStochasticBalancerPickers. It hangs.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18115) Move SaslServer creation to HBaseSaslRpcServer

2017-05-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026718#comment-16026718
 ] 

stack commented on HBASE-18115:
---

Skimmed. LGTM. We have SASL tests?

> Move SaslServer creation to HBaseSaslRpcServer
> --
>
> Key: HBASE-18115
> URL: https://issues.apache.org/jira/browse/HBASE-18115
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-18115.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18027) Replication should respect RPC size limits when batching edits

2017-05-26 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-18027:
---
Attachment: HBASE-18027-branch-1.patch
HBASE-18027.patch

Attaching union of earlier patches and Ashu's suggestion for master and 
branch-1. I'll come back and set this Patch Available if local tests check out.

> Replication should respect RPC size limits when batching edits
> --
>
> Key: HBASE-18027
> URL: https://issues.apache.org/jira/browse/HBASE-18027
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-18027-branch-1.patch, HBASE-18027-branch-1.patch, 
> HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, 
> HBASE-18027.patch, HBASE-18027.patch
>
>
> In HBaseInterClusterReplicationEndpoint#replicate we try to replicate in 
> batches. We create N lists. N is the minimum of configured replicator 
> threads, number of 100-waledit batches, or number of current sinks. Every 
> pending entry in the replication context is then placed in order by hash of 
> encoded region name into one of these N lists. Each of the N lists is then 
> sent all at once in one replication RPC. We do not test if the sum of data in 
> each N list will exceed RPC size limits. This code presumes each individual 
> edit is reasonably small. Not checking for aggregate size while assembling 
> the lists into RPCs is an oversight and can lead to replication failure when 
> that assumption is violated.
> We can fix this by generating as many replication RPC calls as we need to 
> drain a list, keeping each RPC under limit, instead of assuming the whole 
> list will fit in one.
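The fix described above can be sketched as a size-bounded batching loop (a hypothetical illustration with integer sizes standing in for serialized WALEdit sizes, not the actual replication code):

```java
import java.util.ArrayList;
import java.util.List;

public class SizeBoundedBatcher {
  // Split a list of entries into sub-batches whose summed payload size stays
  // under the RPC limit, instead of sending the whole list in one RPC.
  public static List<List<Integer>> batch(List<Integer> entrySizes, int rpcLimit) {
    List<List<Integer>> batches = new ArrayList<>();
    List<Integer> current = new ArrayList<>();
    long currentBytes = 0;
    for (int size : entrySizes) {
      // Start a new batch when adding this entry would exceed the limit.
      // An oversized single entry still goes out alone in its own batch.
      if (!current.isEmpty() && currentBytes + size > rpcLimit) {
        batches.add(current);
        current = new ArrayList<>();
        currentBytes = 0;
      }
      current.add(size);
      currentBytes += size;
    }
    if (!current.isEmpty()) {
      batches.add(current);
    }
    return batches;
  }

  public static void main(String[] args) {
    // Entries of 40, 40, 50, 30 bytes with an 80-byte limit -> two RPCs:
    // [[40, 40], [50, 30]]
    System.out.println(batch(List.of(40, 40, 50, 30), 80));
  }
}
```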



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16011) TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows

2017-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026693#comment-16026693
 ] 

Hudson commented on HBASE-16011:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK8 #185 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/185/])
HBASE-16011 TableSnapshotScanner and TableSnapshotInputFormat can (tedyu: rev 
8cbb0411beee890ef9ad3631e91262b7824ba3ea)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/SnapshotTestingUtils.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTableSnapshotScanner.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/client/TableSnapshotScanner.java


> TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows
> 
>
> Key: HBASE-16011
> URL: https://issues.apache.org/jira/browse/HBASE-16011
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.0.0, 1.2.2
>Reporter: Youngjoon Kim
>Assignee: Zheng Hu
> Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.1.11
>
> Attachments: HBASE-16011.branch-1.1.v1.patch, 
> HBASE-16011.branch-1.2.v1.patch, HBASE-16011.branch-1.v1.patch, 
> HBASE-16011.v1.patch, HBASE-16011.v2.patch, HBASE-16011.v2.patch, 
> snapshot_bug_test.patch
>
>
> A snapshot of a (non-pre-)split table can include both a parent region and 
> its daughter regions. If TableSnapshotScanner or TableSnapshotInputFormat is 
> run on such a snapshot, duplicate rows are produced.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16011) TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows

2017-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026690#comment-16026690
 ] 

Hudson commented on HBASE-16011:


SUCCESS: Integrated in Jenkins build HBase-1.2-JDK7 #142 (See 
[https://builds.apache.org/job/HBase-1.2-JDK7/142/])
HBASE-16011 TableSnapshotScanner and TableSnapshotInputFormat can (tedyu: rev 
13efd41188fb722f8ff0bcb9af637c9a4f99c1d3)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/client/TableSnapshotScanner.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTableSnapshotScanner.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/SnapshotTestingUtils.java


> TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows
> 
>
> Key: HBASE-16011
> URL: https://issues.apache.org/jira/browse/HBASE-16011
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.0.0, 1.2.2
>Reporter: Youngjoon Kim
>Assignee: Zheng Hu
> Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.1.11
>
> Attachments: HBASE-16011.branch-1.1.v1.patch, 
> HBASE-16011.branch-1.2.v1.patch, HBASE-16011.branch-1.v1.patch, 
> HBASE-16011.v1.patch, HBASE-16011.v2.patch, HBASE-16011.v2.patch, 
> snapshot_bug_test.patch
>
>
> A snapshot of a (non-pre-)split table can include both a parent region and 
> its daughter regions. If TableSnapshotScanner or TableSnapshotInputFormat is 
> run on such a snapshot, duplicate rows are produced.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18123) Hbase indexer

2017-05-26 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026689#comment-16026689
 ] 

Jerry He commented on HBASE-18123:
--

'myIPaddress:2181/solr' is the ZooKeeper location holding your Solr instance's 
discovery and cluster information.
Please verify your Solr setup and consult your HBase-indexer documentation.
HBase JIRA is for reporting HBase problems; you can use the mailing list to 
ask questions.

> Hbase indexer
> -
>
> Key: HBASE-18123
> URL: https://issues.apache.org/jira/browse/HBASE-18123
> Project: HBase
>  Issue Type: Task
>Affects Versions: 1.2.3
> Environment: Debian / Hadoop 2.6 / Solr 6.4.2
>Reporter: Fred
>
> Hi all,
> I want to extract fields from PDF files and store them in HBase. Then I want 
> to link the database with a Solr collection. To do this, I installed 
> HBase-indexer.
> What is the best way to do it?
> Actually, I can write data into HBase but not into my Solr collection. When I 
> launch the HBase-indexer server, I get some errors:
> Cannot connect to cluster at myIPaddress:2181/solr: cluster not found/not 
> ready
> Can somebody help me? Thanks in advance.
> Fred



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (HBASE-18123) Hbase indexer

2017-05-26 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He resolved HBASE-18123.
--
Resolution: Invalid

> Hbase indexer
> -
>
> Key: HBASE-18123
> URL: https://issues.apache.org/jira/browse/HBASE-18123
> Project: HBase
>  Issue Type: Task
>Affects Versions: 1.2.3
> Environment: Debian / Hadoop 2.6 / Solr 6.4.2
>Reporter: Fred
>
> Hi all,
> I want to extract fields from PDF files and store them in HBase. Then I want 
> to link the database with a Solr collection. To do this, I installed 
> HBase-indexer.
> What is the best way to do it?
> Actually, I can write data into HBase but not into my Solr collection. When I 
> launch the HBase-indexer server, I get some errors:
> Cannot connect to cluster at myIPaddress:2181/solr: cluster not found/not 
> ready
> Can somebody help me? Thanks in advance.
> Fred



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16011) TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows

2017-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026674#comment-16026674
 ] 

Hudson commented on HBASE-16011:


SUCCESS: Integrated in Jenkins build HBase-1.2-JDK8 #137 (See 
[https://builds.apache.org/job/HBase-1.2-JDK8/137/])
HBASE-16011 TableSnapshotScanner and TableSnapshotInputFormat can (tedyu: rev 
13efd41188fb722f8ff0bcb9af637c9a4f99c1d3)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTableSnapshotScanner.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/client/TableSnapshotScanner.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/SnapshotTestingUtils.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java


> TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows
> 
>
> Key: HBASE-16011
> URL: https://issues.apache.org/jira/browse/HBASE-16011
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.0.0, 1.2.2
>Reporter: Youngjoon Kim
>Assignee: Zheng Hu
> Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.1.11
>
> Attachments: HBASE-16011.branch-1.1.v1.patch, 
> HBASE-16011.branch-1.2.v1.patch, HBASE-16011.branch-1.v1.patch, 
> HBASE-16011.v1.patch, HBASE-16011.v2.patch, HBASE-16011.v2.patch, 
> snapshot_bug_test.patch
>
>
> A snapshot of a (non-pre-)split table can include both a parent region and 
> its daughter regions. If TableSnapshotScanner or TableSnapshotInputFormat is 
> run on such a snapshot, duplicate rows are produced.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16261) MultiHFileOutputFormat Enhancement

2017-05-26 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-16261:
-
Description: 
Rename MultiHFileOutputFormat to MultiTableHFileOutputFormat, and continue 
work to enhance MultiTableHFileOutputFormat to make it more usable:

MultiTableHFileOutputFormat follows HFileOutputFormat2:
(1) HFileOutputFormat2 can read one table's region split keys and then output 
multiple HFiles per family, with each HFile mapping to one region. We can add 
a partitioner to MultiTableHFileOutputFormat to make it support this feature.

(2) HFileOutputFormat2 supports a customized compression algorithm and 
BloomFilter per column family, and also supports customized DataBlockEncoding 
for the output HFiles. We can make MultiTableHFileOutputFormat support these 
features as well.

  was:
MultiHFileOutputFormat follow HFileOutputFormat2
(1) HFileOutputFormat2 can read one table's region split keys. and then output 
multiple hfiles for one family, and each hfile map to one region. We can add 
partitioner in MultiHFileOutputFormat to make it support this feature.

(2) HFileOutputFormat2 support Customized Compression algorithm for column 
family and BloomFilter, also support customized DataBlockEncoding for the 
output hfiles. We can also make MultiHFileOutputFormat to support these 
features.
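
Point (1) above comes down to routing each row key to the region whose range contains it, which is a binary search over the sorted split keys. A minimal sketch, using plain strings in place of HBase's byte[] row keys (the class and method names are illustrative, not the actual partitioner API):

```java
import java.util.Arrays;

class RegionPartitioner {
    /**
     * Given the sorted region split keys (the start key of every region
     * after the first), return the index of the region that should receive
     * rowKey. Region 0 covers all keys below splitKeys[0].
     */
    static int regionIndex(String[] splitKeys, String rowKey) {
        int pos = Arrays.binarySearch(splitKeys, rowKey);
        // An exact match on a split key belongs to the region that starts
        // there; a miss yields (-(insertionPoint) - 1), which we decode.
        return pos >= 0 ? pos + 1 : -(pos + 1);
    }
}
```

In a MapReduce partitioner this index (combined with the table name) would select the reducer so that each reducer writes HFiles for exactly one region.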


>  MultiHFileOutputFormat Enhancement 
> 
>
> Key: HBASE-16261
> URL: https://issues.apache.org/jira/browse/HBASE-16261
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbase, mapreduce
>Affects Versions: 2.0.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16261-V1.patch, HBASE-16261-V2.patch, 
> HBASE-16261-V3.patch, HBASE-16261-V4.patch, HBASE-16261-V5.patch, 
> HBase-16261-V6.patch, HBase-16261-V7.patch, HBase-16261-V8.patch, 
> HBase-16261-V9.patch
>
>
> Rename MultiHFileOutputFormat to MultiTableHFileOutputFormat, and continue 
> work to enhance MultiTableHFileOutputFormat to make it more usable:
> MultiTableHFileOutputFormat follows HFileOutputFormat2:
> (1) HFileOutputFormat2 can read one table's region split keys and then 
> output multiple HFiles per family, with each HFile mapping to one region. We 
> can add a partitioner to MultiTableHFileOutputFormat to make it support this 
> feature.
> (2) HFileOutputFormat2 supports a customized compression algorithm and 
> BloomFilter per column family, and also supports customized DataBlockEncoding 
> for the output HFiles. We can make MultiTableHFileOutputFormat support these 
> features as well.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16261) MultiHFileOutputFormat Enhancement

2017-05-26 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16026665#comment-16026665
 ] 

Yi Liang commented on HBASE-16261:
--

The failed tests seem unrelated to this patch.

>  MultiHFileOutputFormat Enhancement 
> 
>
> Key: HBASE-16261
> URL: https://issues.apache.org/jira/browse/HBASE-16261
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbase, mapreduce
>Affects Versions: 2.0.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16261-V1.patch, HBASE-16261-V2.patch, 
> HBASE-16261-V3.patch, HBASE-16261-V4.patch, HBASE-16261-V5.patch, 
> HBase-16261-V6.patch, HBase-16261-V7.patch, HBase-16261-V8.patch, 
> HBase-16261-V9.patch
>
>
> MultiHFileOutputFormat follows HFileOutputFormat2:
> (1) HFileOutputFormat2 can read one table's region split keys and then 
> output multiple HFiles per family, with each HFile mapping to one region. We 
> can add a partitioner to MultiHFileOutputFormat to make it support this feature.
> (2) HFileOutputFormat2 supports a customized compression algorithm and 
> BloomFilter per column family, and also supports customized DataBlockEncoding 
> for the output HFiles. We can make MultiHFileOutputFormat support these 
> features as well.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16011) TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows

2017-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16026641#comment-16026641
 ] 

Hudson commented on HBASE-16011:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #171 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/171/])
HBASE-16011 TableSnapshotScanner and TableSnapshotInputFormat can (tedyu: rev 
8cbb0411beee890ef9ad3631e91262b7824ba3ea)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/client/TableSnapshotScanner.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTableSnapshotScanner.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/SnapshotTestingUtils.java


> TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows
> 
>
> Key: HBASE-16011
> URL: https://issues.apache.org/jira/browse/HBASE-16011
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.0.0, 1.2.2
>Reporter: Youngjoon Kim
>Assignee: Zheng Hu
> Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.1.11
>
> Attachments: HBASE-16011.branch-1.1.v1.patch, 
> HBASE-16011.branch-1.2.v1.patch, HBASE-16011.branch-1.v1.patch, 
> HBASE-16011.v1.patch, HBASE-16011.v2.patch, HBASE-16011.v2.patch, 
> snapshot_bug_test.patch
>
>
> A snapshot of a (non-pre-)split table can include both a parent region and 
> its daughter regions. If TableSnapshotScanner or TableSnapshotInputFormat is 
> run on such a snapshot, duplicate rows are produced.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16011) TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows

2017-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16026622#comment-16026622
 ] 

Hudson commented on HBASE-16011:


SUCCESS: Integrated in Jenkins build HBase-1.1-JDK7 #1875 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1875/])
HBASE-16011 TableSnapshotScanner and TableSnapshotInputFormat can (tedyu: rev 
5cec0faeca1064b0538e25d820cf42ba655b86dc)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/client/TableSnapshotScanner.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/SnapshotTestingUtils.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTableSnapshotScanner.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java


> TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows
> 
>
> Key: HBASE-16011
> URL: https://issues.apache.org/jira/browse/HBASE-16011
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.0.0, 1.2.2
>Reporter: Youngjoon Kim
>Assignee: Zheng Hu
> Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.1.11
>
> Attachments: HBASE-16011.branch-1.1.v1.patch, 
> HBASE-16011.branch-1.2.v1.patch, HBASE-16011.branch-1.v1.patch, 
> HBASE-16011.v1.patch, HBASE-16011.v2.patch, HBASE-16011.v2.patch, 
> snapshot_bug_test.patch
>
>
> A snapshot of a (non-pre-)split table can include both a parent region and 
> its daughter regions. If TableSnapshotScanner or TableSnapshotInputFormat is 
> run on such a snapshot, duplicate rows are produced.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16011) TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows

2017-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16026615#comment-16026615
 ] 

Hudson commented on HBASE-16011:


SUCCESS: Integrated in Jenkins build HBase-1.1-JDK8 #1958 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1958/])
HBASE-16011 TableSnapshotScanner and TableSnapshotInputFormat can (tedyu: rev 
5cec0faeca1064b0538e25d820cf42ba655b86dc)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/SnapshotTestingUtils.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/client/TableSnapshotScanner.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTableSnapshotScanner.java


> TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows
> 
>
> Key: HBASE-16011
> URL: https://issues.apache.org/jira/browse/HBASE-16011
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.0.0, 1.2.2
>Reporter: Youngjoon Kim
>Assignee: Zheng Hu
> Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.1.11
>
> Attachments: HBASE-16011.branch-1.1.v1.patch, 
> HBASE-16011.branch-1.2.v1.patch, HBASE-16011.branch-1.v1.patch, 
> HBASE-16011.v1.patch, HBASE-16011.v2.patch, HBASE-16011.v2.patch, 
> snapshot_bug_test.patch
>
>
> A snapshot of a (non-pre-)split table can include both a parent region and 
> its daughter regions. If TableSnapshotScanner or TableSnapshotInputFormat is 
> run on such a snapshot, duplicate rows are produced.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18118) Default storage policy if not configured cannot be "NONE"

2017-05-26 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-18118:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the reviews. Pushed to master.

> Default storage policy if not configured cannot be "NONE"
> -
>
> Key: HBASE-18118
> URL: https://issues.apache.org/jira/browse/HBASE-18118
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-18118.patch
>
>
> HBase can't use 'NONE' as the default storage policy when none is configured, 
> because HDFS supports no such policy. This policy name was probably available 
> in a precommit or early version of the HDFS-side support for heterogeneous 
> storage. Now the best default is 'HOT'. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18027) Replication should respect RPC size limits when batching edits

2017-05-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16026550#comment-16026550
 ] 

Andrew Purtell commented on HBASE-18027:


I like [~ashu210890]'s suggestion, which is similar to, but simpler than, my 
initial attempt. Let me work something up and come back here with it. 

> Replication should respect RPC size limits when batching edits
> --
>
> Key: HBASE-18027
> URL: https://issues.apache.org/jira/browse/HBASE-18027
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-18027-branch-1.patch, HBASE-18027.patch, 
> HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch
>
>
> In HBaseInterClusterReplicationEndpoint#replicate we try to replicate in 
> batches. We create N lists. N is the minimum of configured replicator 
> threads, number of 100-waledit batches, or number of current sinks. Every 
> pending entry in the replication context is then placed in order by hash of 
> encoded region name into one of these N lists. Each of the N lists is then 
> sent all at once in one replication RPC. We do not test if the sum of data in 
> each N list will exceed RPC size limits. This code presumes each individual 
> edit is reasonably small. Not checking for aggregate size while assembling 
> the lists into RPCs is an oversight and can lead to replication failure when 
> that assumption is violated.
> We can fix this by generating as many replication RPC calls as we need to 
> drain a list, keeping each RPC under limit, instead of assuming the whole 
> list will fit in one.
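
The fix described in the last paragraph can be illustrated without any HBase internals: walk the entry list once and cut a new batch whenever adding the next entry would push the running byte total over the RPC quota. The class and method names below are illustrative sketches, not HBase's actual replication API:

```java
import java.util.ArrayList;
import java.util.List;

class RpcBatcher {
    /**
     * Split entries into consecutive batches whose total payload stays under
     * maxBytes. A single entry larger than maxBytes still gets its own batch,
     * so every entry is eventually sent (one entry per RPC at minimum).
     */
    static List<List<byte[]>> batchBySize(List<byte[]> entries, long maxBytes) {
        List<List<byte[]>> batches = new ArrayList<>();
        List<byte[]> current = new ArrayList<>();
        long size = 0;
        for (byte[] e : entries) {
            // Flush the current batch before this entry would overflow it.
            if (!current.isEmpty() && size + e.length > maxBytes) {
                batches.add(current);
                current = new ArrayList<>();
                size = 0;
            }
            current.add(e);
            size += e.length;
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }
}
```

Each returned sublist then becomes one replication RPC, so no call exceeds the configured size limit regardless of how large the original list was.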



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18027) Replication should respect RPC size limits when batching edits

2017-05-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16026547#comment-16026547
 ] 

Lars Hofhansl commented on HBASE-18027:
---

[~apurtell], [~THEcreationist], let's go with the initial approach then. Sorry 
for sending you on this detour!

> Replication should respect RPC size limits when batching edits
> --
>
> Key: HBASE-18027
> URL: https://issues.apache.org/jira/browse/HBASE-18027
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-18027-branch-1.patch, HBASE-18027.patch, 
> HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch
>
>
> In HBaseInterClusterReplicationEndpoint#replicate we try to replicate in 
> batches. We create N lists. N is the minimum of configured replicator 
> threads, number of 100-waledit batches, or number of current sinks. Every 
> pending entry in the replication context is then placed in order by hash of 
> encoded region name into one of these N lists. Each of the N lists is then 
> sent all at once in one replication RPC. We do not test if the sum of data in 
> each N list will exceed RPC size limits. This code presumes each individual 
> edit is reasonably small. Not checking for aggregate size while assembling 
> the lists into RPCs is an oversight and can lead to replication failure when 
> that assumption is violated.
> We can fix this by generating as many replication RPC calls as we need to 
> drain a list, keeping each RPC under limit, instead of assuming the whole 
> list will fit in one.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16011) TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows

2017-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16026548#comment-16026548
 ] 

Hudson commented on HBASE-16011:


FAILURE: Integrated in Jenkins build HBase-1.4 #750 (See 
[https://builds.apache.org/job/HBase-1.4/750/])
HBASE-16011 TableSnapshotScanner and TableSnapshotInputFormat can (tedyu: rev 
d8c1e0e004e3aa838d599be7c7e30f3736b02ef8)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/SnapshotTestingUtils.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/client/TableSnapshotScanner.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTableSnapshotScanner.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java


> TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows
> 
>
> Key: HBASE-16011
> URL: https://issues.apache.org/jira/browse/HBASE-16011
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.0.0, 1.2.2
>Reporter: Youngjoon Kim
>Assignee: Zheng Hu
> Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.1.11
>
> Attachments: HBASE-16011.branch-1.1.v1.patch, 
> HBASE-16011.branch-1.2.v1.patch, HBASE-16011.branch-1.v1.patch, 
> HBASE-16011.v1.patch, HBASE-16011.v2.patch, HBASE-16011.v2.patch, 
> snapshot_bug_test.patch
>
>
> A snapshot of a (non-pre-)split table can include both a parent region and 
> its daughter regions. If TableSnapshotScanner or TableSnapshotInputFormat is 
> run on such a snapshot, duplicate rows are produced.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18117) Increase resiliency by allowing more parameters for online config change

2017-05-26 Thread Karan Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16026518#comment-16026518
 ] 

Karan Mehta commented on HBASE-18117:
-

{{ConfigurationManager}} manages all the observers and is meant to be a 
singleton class, which is initialized inside {{RSRpcServices}}. However, it is 
declared package-private, and hence it is difficult to make it useful for other 
parameters used by classes in different packages. A better approach is to move 
this framework from {{hbase-server}} to {{hbase-common}}. How does this 
approach sound? The framework can follow the singleton design pattern as well 
if required.
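
For illustration, the observer framework described above boils down to a registry that fans a reloaded configuration out to interested components. This is a minimal sketch under stated assumptions: `ConfigObserver`, `ConfigManager`, `RpcService`, and the "rpc.timeout.ms" key are hypothetical names, not the actual hbase-server types.

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

class OnlineConfig {
    /** Components implement this to react to a config reload. */
    interface ConfigObserver {
        void onConfigurationChange(Map<String, String> newConf);
    }

    /** Registry that notifies every registered observer on reload. */
    static class ConfigManager {
        private final Set<ConfigObserver> observers = new CopyOnWriteArraySet<>();

        void register(ConfigObserver o) { observers.add(o); }
        void deregister(ConfigObserver o) { observers.remove(o); }

        void notifyObservers(Map<String, String> newConf) {
            for (ConfigObserver o : observers) {
                o.onConfigurationChange(newConf);
            }
        }
    }

    /** Example component that keeps its RPC timeout current without a restart. */
    static class RpcService implements ConfigObserver {
        volatile int rpcTimeoutMs = 60000;  // default until a reload arrives

        public void onConfigurationChange(Map<String, String> newConf) {
            String v = newConf.get("rpc.timeout.ms");
            if (v != null) rpcTimeoutMs = Integer.parseInt(v);
        }
    }
}
```

Moving such a registry into a shared module would let classes in any package register themselves, which is the portability argument made in the comment above.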

> Increase resiliency by allowing more parameters for online config change
> 
>
> Key: HBASE-18117
> URL: https://issues.apache.org/jira/browse/HBASE-18117
> Project: HBase
>  Issue Type: Improvement
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>
> HBASE-8544 adds the feature to change config online without requiring a 
> server restart. This JIRA is to work on new parameters utilizing that feature.
> As [~apurtell] suggested, the following are useful and frequently changing 
> parameters in production:
> - RPC limits, timeouts, and other performance-relevant settings
> - Replication limits and batch sizes
> - Region carrying limit
> - WAL retention and cleaning parameters
> I will try to make the RPC timeout parameter changeable online as part of 
> this JIRA. If it seems suitable, we can extend this to other params.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HBASE-18027) Replication should respect RPC size limits when batching edits

2017-05-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16026474#comment-16026474
 ] 

Andrew Purtell edited comment on HBASE-18027 at 5/26/17 4:46 PM:
-

[~lhofhansl] as you can see from the current branch-1 patch, I think the 
initial approach was better. It can be applied to both master and branch-1 
code. I found that handling this higher up in the caller(s), as you suggested, 
can work in master, where the WAL reader has a stream abstraction, but not in 
earlier code. 


was (Author: apurtell):
[~lhofhansl] as you can see from the current branch-1 patch I think the initial 
approach was better. It would work for both master and branch-1. 

> Replication should respect RPC size limits when batching edits
> --
>
> Key: HBASE-18027
> URL: https://issues.apache.org/jira/browse/HBASE-18027
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-18027-branch-1.patch, HBASE-18027.patch, 
> HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch
>
>
> In HBaseInterClusterReplicationEndpoint#replicate we try to replicate in 
> batches. We create N lists. N is the minimum of configured replicator 
> threads, number of 100-waledit batches, or number of current sinks. Every 
> pending entry in the replication context is then placed in order by hash of 
> encoded region name into one of these N lists. Each of the N lists is then 
> sent all at once in one replication RPC. We do not test if the sum of data in 
> each N list will exceed RPC size limits. This code presumes each individual 
> edit is reasonably small. Not checking for aggregate size while assembling 
> the lists into RPCs is an oversight and can lead to replication failure when 
> that assumption is violated.
> We can fix this by generating as many replication RPC calls as we need to 
> drain a list, keeping each RPC under limit, instead of assuming the whole 
> list will fit in one.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16261) MultiHFileOutputFormat Enhancement

2017-05-26 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-16261:
-
Status: Patch Available  (was: Open)

>  MultiHFileOutputFormat Enhancement 
> 
>
> Key: HBASE-16261
> URL: https://issues.apache.org/jira/browse/HBASE-16261
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbase, mapreduce
>Affects Versions: 2.0.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16261-V1.patch, HBASE-16261-V2.patch, 
> HBASE-16261-V3.patch, HBASE-16261-V4.patch, HBASE-16261-V5.patch, 
> HBase-16261-V6.patch, HBase-16261-V7.patch, HBase-16261-V8.patch, 
> HBase-16261-V9.patch
>
>
> MultiHFileOutputFormat follow HFileOutputFormat2
> (1) HFileOutputFormat2 can read one table's region split keys. and then 
> output multiple hfiles for one family, and each hfile map to one region. We 
> can add partitioner in MultiHFileOutputFormat to make it support this feature.
> (2) HFileOutputFormat2 support Customized Compression algorithm for column 
> family and BloomFilter, also support customized DataBlockEncoding for the 
> output hfiles. We can also make MultiHFileOutputFormat to support these 
> features.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16261) MultiHFileOutputFormat Enhancement

2017-05-26 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-16261:
-
Status: Open  (was: Patch Available)

>  MultiHFileOutputFormat Enhancement 
> 
>
> Key: HBASE-16261
> URL: https://issues.apache.org/jira/browse/HBASE-16261
> Project: HBase
>  Issue Type: Sub-task
>  Components: hbase, mapreduce
>Affects Versions: 2.0.0
>Reporter: Yi Liang
>Assignee: Yi Liang
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16261-V1.patch, HBASE-16261-V2.patch, 
> HBASE-16261-V3.patch, HBASE-16261-V4.patch, HBASE-16261-V5.patch, 
> HBase-16261-V6.patch, HBase-16261-V7.patch, HBase-16261-V8.patch, 
> HBase-16261-V9.patch
>
>
> MultiHFileOutputFormat follow HFileOutputFormat2
> (1) HFileOutputFormat2 can read one table's region split keys. and then 
> output multiple hfiles for one family, and each hfile map to one region. We 
> can add partitioner in MultiHFileOutputFormat to make it support this feature.
> (2) HFileOutputFormat2 support Customized Compression algorithm for column 
> family and BloomFilter, also support customized DataBlockEncoding for the 
> output hfiles. We can also make MultiHFileOutputFormat to support these 
> features.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HBASE-18027) Replication should respect RPC size limits when batching edits

2017-05-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16026467#comment-16026467
 ] 

Andrew Purtell edited comment on HBASE-18027 at 5/26/17 4:43 PM:
-

[~ashu210890] If you look at earlier revisions of the patch, that's the 
initial approach I took as well: when actually building the replication RPCs 
in Replicator, we'd watch for overage and split the entry queue into multiple 
RPCs as needed. I switched approach in an attempt to address feedback from 
[~lhofhansl], who wanted to avoid complicating the endpoint. 


was (Author: apurtell):
[~ashu210890] If you look at earlier revisions of the patch that's the initial 
approach I took as well. Where actually building the replication RPCs we'd 
watch for overage and split the entry queue into multiple RPCs as needed.  I 
switched approach I attempt to address feedback from [~lhofhansl] who wanted to 
avoid complicating the endpoint. 

> Replication should respect RPC size limits when batching edits
> --
>
> Key: HBASE-18027
> URL: https://issues.apache.org/jira/browse/HBASE-18027
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-18027-branch-1.patch, HBASE-18027.patch, 
> HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch
>
>
> In HBaseInterClusterReplicationEndpoint#replicate we try to replicate in 
> batches. We create N lists. N is the minimum of configured replicator 
> threads, number of 100-waledit batches, or number of current sinks. Every 
> pending entry in the replication context is then placed in order by hash of 
> encoded region name into one of these N lists. Each of the N lists is then 
> sent all at once in one replication RPC. We do not test if the sum of data in 
> each N list will exceed RPC size limits. This code presumes each individual 
> edit is reasonably small. Not checking for aggregate size while assembling 
> the lists into RPCs is an oversight and can lead to replication failure when 
> that assumption is violated.
> We can fix this by generating as many replication RPC calls as we need to 
> drain a list, keeping each RPC under limit, instead of assuming the whole 
> list will fit in one.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18114) Update the config of TestAsync*AdminApi to make test stable

2017-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16026475#comment-16026475
 ] 

Hadoop QA commented on HBASE-18114:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 46s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
7s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m 5s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 109m 12s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 155m 47s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.13.1 Server=1.13.1 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870072/HBASE-18114-v2.patch |
| JIRA Issue | HBASE-18114 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 9682a80e1a2f 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 8b5c161 |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6972/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6972/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Update the config of TestAsync*AdminApi to make test stable
> ---
>
> Key: HBASE-18114
> URL: https://issues.apache.org/jira/browse/HBASE-18114
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-18114-v1.patch, 

[jira] [Commented] (HBASE-18027) Replication should respect RPC size limits when batching edits

2017-05-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16026474#comment-16026474
 ] 

Andrew Purtell commented on HBASE-18027:


[~lhofhansl] as you can see from the current branch-1 patch, I think the 
initial approach was better. It would work for both master and branch-1. 

> Replication should respect RPC size limits when batching edits
> --
>
> Key: HBASE-18027
> URL: https://issues.apache.org/jira/browse/HBASE-18027
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-18027-branch-1.patch, HBASE-18027.patch, 
> HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch
>
>
> In HBaseInterClusterReplicationEndpoint#replicate we try to replicate in 
> batches. We create N lists. N is the minimum of configured replicator 
> threads, number of 100-waledit batches, or number of current sinks. Every 
> pending entry in the replication context is then placed in order by hash of 
> encoded region name into one of these N lists. Each of the N lists is then 
> sent all at once in one replication RPC. We do not test if the sum of data in 
> each N list will exceed RPC size limits. This code presumes each individual 
> edit is reasonably small. Not checking for aggregate size while assembling 
> the lists into RPCs is an oversight and can lead to replication failure when 
> that assumption is violated.
> We can fix this by generating as many replication RPC calls as we need to 
> drain a list, keeping each RPC under limit, instead of assuming the whole 
> list will fit in one.
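The fix proposed in the last paragraph (drain a list with as many RPCs as needed, each kept under the limit) can be sketched as follows. This is an illustrative standalone helper, not the actual HBaseInterClusterReplicationEndpoint code: ReplicationBatcher and batchBySize are made-up names, and plain entry sizes stand in for serialized WALEdits.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of size-aware batching: greedily pack entry sizes
// into batches whose total stays under an RPC size limit, so one list can
// be drained with several RPCs instead of a single oversized one.
public class ReplicationBatcher {
    public static List<List<Long>> batchBySize(List<Long> entrySizes, long maxBatchBytes) {
        List<List<Long>> batches = new ArrayList<>();
        List<Long> current = new ArrayList<>();
        long currentBytes = 0;
        for (long size : entrySizes) {
            // Close the current batch when adding this entry would exceed the
            // limit; an entry larger than the limit still gets its own batch
            // so the queue always drains.
            if (!current.isEmpty() && currentBytes + size > maxBatchBytes) {
                batches.add(current);
                current = new ArrayList<>();
                currentBytes = 0;
            }
            current.add(size);
            currentBytes += size;
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }
}
```

Each inner list maps to one replication RPC, so the caller simply loops over the result instead of assuming the whole list fits in a single call.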



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18027) Replication should respect RPC size limits when batching edits

2017-05-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026467#comment-16026467
 ] 

Andrew Purtell commented on HBASE-18027:


[~ashu210890] If you look at earlier revisions of the patch, that's the initial 
approach I took as well: when actually building the replication RPCs we'd watch 
for overage and split the entry queue into multiple RPCs as needed. I switched 
approaches in an attempt to address feedback from [~lhofhansl], who wanted to 
avoid complicating the endpoint. 

> Replication should respect RPC size limits when batching edits
> --
>
> Key: HBASE-18027
> URL: https://issues.apache.org/jira/browse/HBASE-18027
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-18027-branch-1.patch, HBASE-18027.patch, 
> HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch, HBASE-18027.patch
>
>
> In HBaseInterClusterReplicationEndpoint#replicate we try to replicate in 
> batches. We create N lists. N is the minimum of configured replicator 
> threads, number of 100-waledit batches, or number of current sinks. Every 
> pending entry in the replication context is then placed in order by hash of 
> encoded region name into one of these N lists. Each of the N lists is then 
> sent all at once in one replication RPC. We do not test if the sum of data in 
> each N list will exceed RPC size limits. This code presumes each individual 
> edit is reasonably small. Not checking for aggregate size while assembling 
> the lists into RPCs is an oversight and can lead to replication failure when 
> that assumption is violated.
> We can fix this by generating as many replication RPC calls as we need to 
> drain a list, keeping each RPC under limit, instead of assuming the whole 
> list will fit in one.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15903) Delete Object

2017-05-26 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15903:
---
Attachment: 15903.v4.txt

Patch v4 fixes a bug in v2.
Also adds more AddXX methods.

> Delete Object
> -
>
> Key: HBASE-15903
> URL: https://issues.apache.org/jira/browse/HBASE-15903
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
>Assignee: Ted Yu
> Attachments: 15903.v2.txt, 15903.v4.txt, 
> HBASE-15903.HBASE-14850.v1.patch
>
>
> Patch for creating Delete objects. These Delete objects are used by the Table 
> implementation to delete a rowkey from a table.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18120) Fix TestAsyncRegionAdminApi

2017-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026458#comment-16026458
 ] 

Hudson commented on HBASE-18120:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #3080 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3080/])
HBASE-18120 (addendum) Fix TestAsyncRegionAdminApi (zghao: rev 
b076b8e794d19c9d552cff6c14100b1fda0cf520)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionAdminApi.java


> Fix TestAsyncRegionAdminApi
> ---
>
> Key: HBASE-18120
> URL: https://issues.apache.org/jira/browse/HBASE-18120
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
> Attachments: HBASE-18003.v2.patch, HBASE-18120.addendum.patch
>
>
> This test fails for me locally. The patch from HBASE-18003 by [~openinx] 
> fixes my failing test so stealing it from that issue and committing here 
> (stealing it because the boys are working on TestAsyncTableAdminApi too over 
> in HBASE-18003). Thanks for the patch [~openinx]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17777) TestMemstoreLAB#testLABThreading runs too long for a small test

2017-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026459#comment-16026459
 ] 

Hudson commented on HBASE-17777:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #3080 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3080/])
HBASE-17777 TestMemstoreLAB#testLABThreading runs too long for a small 
(ramkrishna: rev 8b5c161cbf0cab4eb250827e20a12acee00b400d)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreLAB.java


> TestMemstoreLAB#testLABThreading runs too long for a small test
> ---
>
> Key: HBASE-17777
> URL: https://issues.apache.org/jira/browse/HBASE-17777
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17777_1.patch, HBASE-17777_2.patch
>
>
> While working on ChunkCreator/ChunkMap found that the test in 
> TestMSLAB#testLABThreading() runs for almost 5 mins and the whole test is 
> under smallTest category.
> The reason is that we are creating 35*2MB chunks from MSLAB. We try writing 
> data to these chunks until they are 50MB in size.
> And while verifying in order to check if the chunks are not 
> overwritten/overlapped we verify the content of the buffers.
> So we actually keep comparing a 50MB buffer n number of times. I suggest we 
> change this so that at most we create chunks totalling 1MB, or maybe even 
> less, and write smaller cells. By doing this we can drastically reduce the 
> run time of this test, maybe to something less than 1 min.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18120) Fix TestAsyncRegionAdminApi

2017-05-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026450#comment-16026450
 ] 

stack commented on HBASE-18120:
---

Thanks for cleaning up my mess lads [~openinx] and [~zghaobac] (It worked for 
me over on my branch -- smile!)

> Fix TestAsyncRegionAdminApi
> ---
>
> Key: HBASE-18120
> URL: https://issues.apache.org/jira/browse/HBASE-18120
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
> Attachments: HBASE-18003.v2.patch, HBASE-18120.addendum.patch
>
>
> This test fails for me locally. The patch from HBASE-18003 by [~openinx] 
> fixes my failing test so stealing it from that issue and committing here 
> (stealing it because the boys are working on TestAsyncTableAdminApi too over 
> in HBASE-18003). Thanks for the patch [~openinx]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16011) TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows

2017-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026436#comment-16026436
 ] 

Hudson commented on HBASE-16011:


SUCCESS: Integrated in Jenkins build HBase-1.2-IT #875 (See 
[https://builds.apache.org/job/HBase-1.2-IT/875/])
HBASE-16011 TableSnapshotScanner and TableSnapshotInputFormat can (tedyu: rev 
13efd41188fb722f8ff0bcb9af637c9a4f99c1d3)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/SnapshotTestingUtils.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/client/TableSnapshotScanner.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTableSnapshotScanner.java


> TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows
> 
>
> Key: HBASE-16011
> URL: https://issues.apache.org/jira/browse/HBASE-16011
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.0.0, 1.2.2
>Reporter: Youngjoon Kim
>Assignee: Zheng Hu
> Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.1.11
>
> Attachments: HBASE-16011.branch-1.1.v1.patch, 
> HBASE-16011.branch-1.2.v1.patch, HBASE-16011.branch-1.v1.patch, 
> HBASE-16011.v1.patch, HBASE-16011.v2.patch, HBASE-16011.v2.patch, 
> snapshot_bug_test.patch
>
>
> A snapshot of a (non-pre-)split table can include both a parent region and 
> its daughter regions. If TableSnapshotScanner or TableSnapshotInputFormat is 
> run on such a snapshot, duplicate rows are produced.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14614) Procedure v2: Core Assignment Manager

2017-05-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14614:
--
Attachment: HBASE-14614.master.045.patch

Retry. Looks like all tests passed; two needed retries.

> Procedure v2: Core Assignment Manager
> -
>
> Key: HBASE-14614
> URL: https://issues.apache.org/jira/browse/HBASE-14614
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-14614.master.003.patch, 
> HBASE-14614.master.004.patch, HBASE-14614.master.005.patch, 
> HBASE-14614.master.006.patch, HBASE-14614.master.007.patch, 
> HBASE-14614.master.008.patch, HBASE-14614.master.009.patch, 
> HBASE-14614.master.010.patch, HBASE-14614.master.012.patch, 
> HBASE-14614.master.013.patch, HBASE-14614.master.014.patch, 
> HBASE-14614.master.015.patch, HBASE-14614.master.017.patch, 
> HBASE-14614.master.018.patch, HBASE-14614.master.019.patch, 
> HBASE-14614.master.020.patch, HBASE-14614.master.022.patch, 
> HBASE-14614.master.023.patch, HBASE-14614.master.024.patch, 
> HBASE-14614.master.025.patch, HBASE-14614.master.026.patch, 
> HBASE-14614.master.027.patch, HBASE-14614.master.028.patch, 
> HBASE-14614.master.029.patch, HBASE-14614.master.030.patch, 
> HBASE-14614.master.033.patch, HBASE-14614.master.038.patch, 
> HBASE-14614.master.039.patch, HBASE-14614.master.040.patch, 
> HBASE-14614.master.041.patch, HBASE-14614.master.042.patch, 
> HBASE-14614.master.043.patch, HBASE-14614.master.044.patch, 
> HBASE-14614.master.045.patch
>
>
> New AssignmentManager implemented using proc-v2.
>  - AssignProcedure handles the assign operation
>  - UnassignProcedure handles the unassign operation
>  - MoveRegionProcedure handles the move/balance operation
> Concurrent Assign operations are batched together and sent to the balancer.
> Concurrent Assign and Unassign operations that are ready to be sent to the 
> RS are batched together.
> This patch is an intermediate state where we add the new AM as 
> AssignmentManager2() to the master, to be reached by tests, but the new AM 
> will not be integrated with the rest of the system. Only the new AM 
> unit-tests will exercise the new assignment manager. The integration with 
> the master code is part of HBASE-14616.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-14614) Procedure v2: Core Assignment Manager

2017-05-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14614:
--
Attachment: (was: HBASE-14614.master.045.patch)

> Procedure v2: Core Assignment Manager
> -
>
> Key: HBASE-14614
> URL: https://issues.apache.org/jira/browse/HBASE-14614
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Matteo Bertozzi
> Fix For: 2.0.0
>
> Attachments: HBASE-14614.master.003.patch, 
> HBASE-14614.master.004.patch, HBASE-14614.master.005.patch, 
> HBASE-14614.master.006.patch, HBASE-14614.master.007.patch, 
> HBASE-14614.master.008.patch, HBASE-14614.master.009.patch, 
> HBASE-14614.master.010.patch, HBASE-14614.master.012.patch, 
> HBASE-14614.master.013.patch, HBASE-14614.master.014.patch, 
> HBASE-14614.master.015.patch, HBASE-14614.master.017.patch, 
> HBASE-14614.master.018.patch, HBASE-14614.master.019.patch, 
> HBASE-14614.master.020.patch, HBASE-14614.master.022.patch, 
> HBASE-14614.master.023.patch, HBASE-14614.master.024.patch, 
> HBASE-14614.master.025.patch, HBASE-14614.master.026.patch, 
> HBASE-14614.master.027.patch, HBASE-14614.master.028.patch, 
> HBASE-14614.master.029.patch, HBASE-14614.master.030.patch, 
> HBASE-14614.master.033.patch, HBASE-14614.master.038.patch, 
> HBASE-14614.master.039.patch, HBASE-14614.master.040.patch, 
> HBASE-14614.master.041.patch, HBASE-14614.master.042.patch, 
> HBASE-14614.master.043.patch, HBASE-14614.master.044.patch
>
>
> New AssignmentManager implemented using proc-v2.
>  - AssignProcedure handles the assign operation
>  - UnassignProcedure handles the unassign operation
>  - MoveRegionProcedure handles the move/balance operation
> Concurrent Assign operations are batched together and sent to the balancer.
> Concurrent Assign and Unassign operations that are ready to be sent to the 
> RS are batched together.
> This patch is an intermediate state where we add the new AM as 
> AssignmentManager2() to the master, to be reached by tests, but the new AM 
> will not be integrated with the rest of the system. Only the new AM 
> unit-tests will exercise the new assignment manager. The integration with 
> the master code is part of HBASE-14616.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16011) TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows

2017-05-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026430#comment-16026430
 ] 

Hudson commented on HBASE-16011:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #52 (See 
[https://builds.apache.org/job/HBase-1.3-IT/52/])
HBASE-16011 TableSnapshotScanner and TableSnapshotInputFormat can (tedyu: rev 
8cbb0411beee890ef9ad3631e91262b7824ba3ea)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/client/TableSnapshotScanner.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTableSnapshotScanner.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSnapshotInputFormatImpl.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/SnapshotTestingUtils.java


> TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows
> 
>
> Key: HBASE-16011
> URL: https://issues.apache.org/jira/browse/HBASE-16011
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.0.0, 1.2.2
>Reporter: Youngjoon Kim
>Assignee: Zheng Hu
> Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.1.11
>
> Attachments: HBASE-16011.branch-1.1.v1.patch, 
> HBASE-16011.branch-1.2.v1.patch, HBASE-16011.branch-1.v1.patch, 
> HBASE-16011.v1.patch, HBASE-16011.v2.patch, HBASE-16011.v2.patch, 
> snapshot_bug_test.patch
>
>
> A snapshot of a (non-pre-)split table can include both a parent region and 
> its daughter regions. If TableSnapshotScanner or TableSnapshotInputFormat is 
> run on such a snapshot, duplicate rows are produced.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-18114) Update the config of TestAsync*AdminApi to make test stable

2017-05-26 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18114:
---
Attachment: HBASE-18114-v2.patch

> Update the config of TestAsync*AdminApi to make test stable
> ---
>
> Key: HBASE-18114
> URL: https://issues.apache.org/jira/browse/HBASE-18114
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-18114-v1.patch, HBASE-18114-v1.patch, 
> HBASE-18114-v1.patch, HBASE-18114-v2.patch, HBASE-18114-v2.patch, 
> HBASE-18114-v2.patch, HBASE-18114-v2.patch, HBASE-18114-v2.patch, 
> HBASE-18114-v2.patch
>
>
> {code}
> 2017-05-25 17:56:34,967 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=50801] 
> master.HMaster$11(2297): Client=hao//127.0.0.1 disable testModifyColumnFamily
> 2017-05-25 17:56:37,974 INFO  [RpcClient-timer-pool1-t1] 
> client.AsyncHBaseAdmin$TableProcedureBiConsumer(2219): Operation: DISABLE, 
> Table Name: default:testModifyColumnFamily failed with Failed after 
> attempts=3, exceptions: 
> Thu May 25 17:56:35 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=294, waitTime=1008, 
> rpcTimeout=1000
> Thu May 25 17:56:37 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=295, waitTime=1299, 
> rpcTimeout=1000
> Thu May 25 17:56:37 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=296, waitTime=668, 
> rpcTimeout=660
> 017-05-25 17:56:38,936 DEBUG 
> [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=50801] 
> procedure2.ProcedureExecutor(788): Stored procId=15, owner=hao, 
> state=RUNNABLE:DISABLE_TABLE_PREPARE, DisableTableProcedure 
> table=testModifyColumnFamily
> {code}
> For this disable table procedure, the master returns the procedure id when 
> it submits the procedure to the ProcedureExecutor, and the procedure above 
> took 4 seconds to submit. So the disable table call failed because the rpc 
> timeout is 1 second and the retry number is 3.
> For admin operations, I thought we don't need to change the default timeout 
> config in unit tests, and retries are not needed either (or we can set 
> retries > 1 to test the nonce handling). Meanwhile, the default timeout is 
> 60 seconds, so the test type may need to change to LargeTests.
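To make the arithmetic above concrete: with rpcTimeout=1000 ms, every attempt times out before the 4-second procedure submission completes no matter how many retries are allowed, while the 60-second default easily covers it. A toy model of that reasoning (AdminCallModel is a made-up name, not HBase API):

```java
// Toy model of the failure described above: each retry attempt is an
// independent RPC that must complete within rpcTimeoutMs. If procedure
// submission takes longer than one timeout window, every attempt fails in
// this simplified view, regardless of the retry count.
public class AdminCallModel {
    public static boolean callSucceeds(long submitLatencyMs, long rpcTimeoutMs) {
        return submitLatencyMs <= rpcTimeoutMs;
    }
}
```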



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16011) TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows

2017-05-26 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16011:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.1.11
   1.3.2
   1.2.6
   1.4.0
   2.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Zheng.

> TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows
> 
>
> Key: HBASE-16011
> URL: https://issues.apache.org/jira/browse/HBASE-16011
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Affects Versions: 2.0.0, 1.2.2
>Reporter: Youngjoon Kim
>Assignee: Zheng Hu
> Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.1.11
>
> Attachments: HBASE-16011.branch-1.1.v1.patch, 
> HBASE-16011.branch-1.2.v1.patch, HBASE-16011.branch-1.v1.patch, 
> HBASE-16011.v1.patch, HBASE-16011.v2.patch, HBASE-16011.v2.patch, 
> snapshot_bug_test.patch
>
>
> A snapshot of a (non-pre-)split table can include both a parent region and 
> its daughter regions. If TableSnapshotScanner or TableSnapshotInputFormat is 
> run on such a snapshot, duplicate rows are produced.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18066) Get with closest_row_before on "hbase:meta" can return empty Cell during region merge/split

2017-05-26 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026324#comment-16026324
 ] 

Sean Busbey commented on HBASE-18066:
-

I probably won't have time to chase down docker problems until next week at the 
earliest. Probably best to send a note to dev@hbase looking for help. 
[~Apache9] ran into the same problem on another jira.

> Get with closest_row_before on "hbase:meta" can return empty Cell during 
> region merge/split
> ---
>
> Key: HBASE-18066
> URL: https://issues.apache.org/jira/browse/HBASE-18066
> Project: HBase
>  Issue Type: Bug
>  Components: hbase, regionserver
>Affects Versions: 1.3.1
> Environment: Linux (16.04.2), MacOS 10.11.6.
> Standalone and distributed HBase setup.
>Reporter: Andrey Elenskiy
>Assignee: Zheng Hu
> Attachments: HBASE-18066.branch-1.1.v1.patch, 
> HBASE-18066.branch-1.1.v1.patch, HBASE-18066.branch-1.1.v1.patch, 
> HBASE-18066.branch-1.3.v1.patch, HBASE-18066.branch-1.3.v1.patch, 
> HBASE-18066.branch-1.v1.patch, HBASE-18066.branch-1.v2.patch, 
> HBASE-18066.branch-1.v3.patch, TestGetWithClosestRowBeforeWhenSplit.java
>
>
> During region split/merge there's a brief period of time where doing a "Get" 
> with "closest_row_before=true" on "hbase:meta" may return an empty 
> "GetResponse.result.cell" field even though the parent, splitA and splitB 
> regions are all in "hbase:meta". Both gohbase 
> (https://github.com/tsuna/gohbase) and AsyncHBase 
> (https://github.com/OpenTSDB/asynchbase) interpret this as 
> "TableDoesNotExist", which is returned to the client.
> Here's a gist that reproduces this problem: 
> https://gist.github.com/Timoha/c7a236b768be9220e85e53e1ca53bf96. Note that 
> you have to use an older HTable client (I used 1.2.4), as current versions 
> ignore `Get.setClosestRowBefore(bool)`.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18042) Client Compatibility breaks between versions 1.2 and 1.3

2017-05-26 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026322#comment-16026322
 ] 

Sean Busbey commented on HBASE-18042:
-

I probably won't have time to chase down docker problems until next week at the 
earliest. Probably best to send a note to dev@hbase looking for help.

> Client Compatibility breaks between versions 1.2 and 1.3
> 
>
> Key: HBASE-18042
> URL: https://issues.apache.org/jira/browse/HBASE-18042
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, scan
>Affects Versions: 2.0.0, 1.4.0, 1.3.1
>Reporter: Karan Mehta
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.2
>
> Attachments: HBASE-18042-branch-1.3.patch, 
> HBASE-18042-branch-1.3-v1.patch, HBASE-18042-branch-1.patch, 
> HBASE-18042-branch-1.patch, HBASE-18042-branch-1-v1.patch, 
> HBASE-18042-branch-1-v1.patch, HBASE-18042.patch, HBASE-18042-v1.patch, 
> HBASE-18042-v2.patch
>
>
> OpenTSDB uses AsyncHBase as its client rather than the traditional HBase 
> client. Between versions 1.2 and 1.3 the {{ClientProtos}} changed: new 
> fields were added to the {{ScanResponse}} proto.
> A typical Scan request in 1.2 requires the caller to make an OpenScanner 
> request, GetNextRows requests and a CloseScanner request, based on the 
> {{more_rows}} boolean field in the {{ScanResponse}} proto.
> However, in 1.3 a new field, {{more_results_in_region}}, was added, which 
> limits the results per region. Therefore the client now has to manage 
> sending all the requests for each region. Furthermore, if the results from a 
> particular region are exhausted, the {{ScanResponse}} will set 
> {{more_results_in_region}} to false, but {{more_results}} can still be true. 
> Whenever the former is set to false, the {{RegionScanner}} will also be 
> closed. 
> OpenTSDB makes an OpenScanner request and receives all its results in the 
> first {{ScanResponse}} itself, creating the condition described in the 
> paragraph above. Since {{more_rows}} is true, it proceeds to send the next 
> request, at which point {{RSRpcServices}} throws 
> {{UnknownScannerException}}. Protobuf client compatibility is maintained but 
> the expected behavior is modified.
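The per-region protocol described above implies a client-side loop like the following sketch. All names here are illustrative stand-ins (Response mimics the two {{ScanResponse}} flags); this is not AsyncHBase or HBase client code:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ScanLoop {
    // Minimal stand-in for a ScanResponse: rows plus the two flags.
    public static final class Response {
        final List<String> rows;
        final boolean moreResults;          // more_results: scan not done overall
        final boolean moreResultsInRegion;  // more_results_in_region: region not drained
        public Response(List<String> rows, boolean moreResults, boolean moreResultsInRegion) {
            this.rows = rows;
            this.moreResults = moreResults;
            this.moreResultsInRegion = moreResultsInRegion;
        }
    }

    // Drain responses for one region: stop at the first response with
    // more_results_in_region == false, since the server closes the
    // RegionScanner at that point and a further request would raise
    // UnknownScannerException.
    public static List<String> drainRegion(Iterator<Response> responses) {
        List<String> rows = new ArrayList<>();
        while (responses.hasNext()) {
            Response r = responses.next();
            rows.addAll(r.rows);
            if (!r.moreResultsInRegion) {
                break;  // region exhausted; do NOT send another request here
            }
        }
        return rows;
    }
}
```

A client that keys only off more_results (as the pre-1.3 more_rows behavior allowed) would keep issuing requests past the break above, which is exactly the UnknownScannerException path the description walks through.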



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18118) Default storage policy if not configured cannot be "NONE"

2017-05-26 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026312#comment-16026312
 ] 

Sean Busbey commented on HBASE-18118:
-

Doesn't this mean our test in {{TestFSUtils.testSetStoragePolicyDefault}} isn't 
doing its job? Is there some way we can update it to show this failure?

> Default storage policy if not configured cannot be "NONE"
> -
>
> Key: HBASE-18118
> URL: https://issues.apache.org/jira/browse/HBASE-18118
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-18118.patch
>
>
> HBase can't use 'NONE' as default storage policy if not configured because 
> HDFS supports no such policy. This policy name was probably available in a 
> precommit or early version of the HDFS side support for heterogeneous 
> storage. Now the best default is 'HOT'. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18114) Update the config of TestAsync*AdminApi to make test stable

2017-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16026298#comment-16026298
 ] 

Hadoop QA commented on HBASE-18114:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 30s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
39s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 15s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 114m 43s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 163m 17s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestHRegion |
| Timed out junit tests | 
org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer2 |
|   | org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer |
|   | org.apache.hadoop.hbase.master.snapshot.TestSnapshotFileCache |
|   | org.apache.hadoop.hbase.regionserver.TestHRegion |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870043/HBASE-18114-v2.patch |
| JIRA Issue | HBASE-18114 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux a2eb444b41b9 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / b076b8e |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6971/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/6971/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6971/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 

[jira] [Updated] (HBASE-18114) Update the config of TestAsync*AdminApi to make test stable

2017-05-26 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18114:
---
Attachment: HBASE-18114-v2.patch

> Update the config of TestAsync*AdminApi to make test stable
> ---
>
> Key: HBASE-18114
> URL: https://issues.apache.org/jira/browse/HBASE-18114
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-18114-v1.patch, HBASE-18114-v1.patch, 
> HBASE-18114-v1.patch, HBASE-18114-v2.patch, HBASE-18114-v2.patch, 
> HBASE-18114-v2.patch, HBASE-18114-v2.patch, HBASE-18114-v2.patch
>
>
> {code}
> 2017-05-25 17:56:34,967 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=50801] 
> master.HMaster$11(2297): Client=hao//127.0.0.1 disable testModifyColumnFamily
> 2017-05-25 17:56:37,974 INFO  [RpcClient-timer-pool1-t1] 
> client.AsyncHBaseAdmin$TableProcedureBiConsumer(2219): Operation: DISABLE, 
> Table Name: default:testModifyColumnFamily failed with Failed after 
> attempts=3, exceptions: 
> Thu May 25 17:56:35 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=294, waitTime=1008, 
> rpcTimeout=1000
> Thu May 25 17:56:37 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=295, waitTime=1299, 
> rpcTimeout=1000
> Thu May 25 17:56:37 CST 2017, , java.io.IOException: Call to 
> localhost/127.0.0.1:50801 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=296, waitTime=668, 
> rpcTimeout=660
> 2017-05-25 17:56:38,936 DEBUG 
> [RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=50801] 
> procedure2.ProcedureExecutor(788): Stored procId=15, owner=hao, 
> state=RUNNABLE:DISABLE_TABLE_PREPARE, DisableTableProcedure 
> table=testModifyColumnFamily
> {code}
> For this disable-table procedure, the master returns the procedure id once it 
> has submitted the procedure to the ProcedureExecutor, and the submission above 
> took about 4 seconds. So the disable-table call failed, because the rpc timeout 
> is 1 second and the retry number is 3.
> For admin operations, I don't think we need to change the default timeout 
> config in unit tests, and the retries are not needed either. (Or we could set 
> retries > 1 to exercise the nonce logic.) Meanwhile, the default timeout is 60 
> seconds, so the tests may need to be recategorized as LargeTests.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-18115) Move SaslServer creation to HBaseSaslRpcServer

2017-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16026285#comment-16026285
 ] 

Hadoop QA commented on HBASE-18115:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 58s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
7s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 5s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
33m 52s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 33s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 119m 31s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
41s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 177m 11s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.namespace.TestNamespaceAuditor |
|   | org.apache.hadoop.hbase.mapred.TestMultiTableSnapshotInputFormat |
|   | org.apache.hadoop.hbase.replication.TestMasterReplication |
|   | org.apache.hadoop.hbase.mapred.TestTableSnapshotInputFormat |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870039/HBASE-18115.patch |
| JIRA Issue | HBASE-18115 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux d8f3fb2f244b 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / b076b8e |
| 

[jira] [Commented] (HBASE-18114) Update the config of TestAsync*AdminApi to make test stable

2017-05-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16026275#comment-16026275
 ] 

Hadoop QA commented on HBASE-18114:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 45s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
27s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
30m 48s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 118m 19s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
30s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 180m 34s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.hbase.quotas.TestRegionSizeUse |
|   | org.apache.hadoop.hbase.quotas.TestQuotaObserverChoreWithMiniCluster |
|   | org.apache.hadoop.hbase.snapshot.TestExportSnapshot |
|   | org.apache.hadoop.hbase.quotas.TestQuotaObserverChoreRegionReports |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12870036/HBASE-18114-v2.patch |
| JIRA Issue | HBASE-18114 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 2c08123e9a8f 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / b076b8e |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6969/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/6969/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6969/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6969/console |
| Powered by | Apache Yetus 

[jira] [Commented] (HBASE-18124) Add Property name Of Strcut ServerName To Locate HMaster Or HRegionServer

2017-05-26 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16026233#comment-16026233
 ] 

Ted Yu commented on HBASE-18124:


Have you looked at HBASE-12954 ?
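
For context, HBASE-12954 already introduced a knob that lets a region server report a chosen hostname instead of the one it resolves itself, which overlaps with the goal of this issue. A minimal hbase-site.xml sketch of that existing setting (the value shown is illustrative):

{code}
<property>
  <name>hbase.regionserver.hostname</name>
  <value>10.0.1.1</value>
</property>
{code}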

> Add Property name Of Strcut ServerName To Locate HMaster Or HRegionServer
> -
>
> Key: HBASE-18124
> URL: https://issues.apache.org/jira/browse/HBASE-18124
> Project: HBase
>  Issue Type: New Feature
>  Components: Client, hbase, master
>Reporter: liubangchen
>Assignee: liubangchen
> Attachments: 1.jpg, HBASE-18124.patch, HBASE-18124.pdf
>
>
> HBase has only one way to locate the HMaster or a HRegionServer, unlike HDFS, 
> which has two ways to locate a DataNode: by name or by hostname.
> I'm an engineer at Tencent Cloud, in charge of offering HBase as a cloud 
> service. To do that, we need HBase to support another way to locate the 
> HMaster and HRegionServers.
> The Tencent HBase cloud service architecture is shown in 1.jpg:
> 1. VM
> The user's HBase client runs in a VM and uses a virtual IP address to access 
> the HBase cluster.
> 2. NAT
> Network Address Translation maps a vip (virtual network address) to a pip 
> (physical network address).
> 3. HBase cluster service
> The HBase cluster service runs on the physical network.
> Problem
> 1. View from the VM
> On the VM side, the VM uses the vip to communicate, but HBase has only one 
> way to address servers: the ServerName struct. When the HMaster starts up, it 
> stores the master address and the meta region server address in ZooKeeper, 
> and those addresses are pips (physical network addresses) because the HBase 
> cluster runs on the physical network. When the VM reads such an address from 
> ZooKeeper it cannot use it, since the VM communicates via vips. One 
> workaround is to give the physical machine the vip (e.g. 192.168.0.1) as its 
> host address, but that is not a good solution.
> 2. View from the physical machine
> Physical machines use pips to communicate.
> Solution
> 1. Protocol extension: change the proto message as below:
> {code}
> message ServerName {
>   required string host_name = 1;
>   optional uint32 port = 2;
>   optional uint64 start_code = 3;
>   optional string name = 4;
> }
> {code}
> That is, add a field named name, analogous to HDFS's data block location.
> 2. Meta table extension: add a column named info:namelocation to hbase:meta.
> 3. hbase-server: add the parameter hbase.regionserver.servername to set the 
> region server's name location:
> {code}
> <property>
>   <name>hbase.regionserver.servername</name>
>   <value>10.0.1.1</value>
> </property>
> {code}
> and the parameter hbase.master.servername to set the master's name location:
> {code}
> <property>
>   <name>hbase.master.servername</name>
>   <value>10.0.1.2</value>
> </property>
> {code}
> 4. hbase-client: add the parameter hbase.client.use.hostname to choose which 
> address the client uses:
> {code}
> <property>
>   <name>hbase.client.use.hostname</name>
>   <value>true</value>
> </property>
> {code}
> This patch is based on HBase 1.3.0.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

