[jira] [Commented] (HBASE-18379) SnapshotManager#checkSnapshotSupport() should better handle malfunctioning hdfs snapshot

2017-07-13 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16086906#comment-16086906
 ] 

Jingcheng Du commented on HBASE-18379:
--

I am okay with it. So which branch do we want to target in this JIRA? Just the 
master branch?

> SnapshotManager#checkSnapshotSupport() should better handle malfunctioning 
> hdfs snapshot
> 
>
> Key: HBASE-18379
> URL: https://issues.apache.org/jira/browse/HBASE-18379
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> The following was observed by a customer which prevented master from coming 
> up:
> {code}
> 2017-07-13 13:25:07,898 FATAL [xyz:16000.activeMasterManager] master.HMaster: 
> Failed to become active master
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: Daily_Snapshot_Apps_2017-xx
> at org.apache.hadoop.fs.Path.initialize(Path.java:205)
> at org.apache.hadoop.fs.Path.<init>(Path.java:171)
> at org.apache.hadoop.fs.Path.<init>(Path.java:93)
> at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:230)
> at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:263)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:911)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:113)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:966)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:962)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:962)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1534)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1574)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.getCompletedSnapshots(SnapshotManager.java:206)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.checkSnapshotSupport(SnapshotManager.java:1011)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.initialize(SnapshotManager.java:1070)
> at 
> org.apache.hadoop.hbase.procedure.MasterProcedureManagerHost.initialize(MasterProcedureManagerHost.java:50)
> at 
> org.apache.hadoop.hbase.master.HMaster.initializeZKBasedSystemTrackers(HMaster.java:667)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:732)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:213)
> at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1863)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> Daily_Snapshot_Apps_2017-xx
> at java.net.URI.checkPath(URI.java:1823)
> at java.net.URI.<init>(URI.java:745)
> at org.apache.hadoop.fs.Path.initialize(Path.java:202)
> {code}
> Turns out the exception can be reproduced using hdfs command line accessing 
> .snapshot directory.
> SnapshotManager#checkSnapshotSupport() should better handle malfunctioning 
> hdfs snapshot so that master starts up.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18334) Remove sync client implementation and wrap async client under sync client interface

2017-07-13 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16086899#comment-16086899
 ] 

Chia-Ping Tsai commented on HBASE-18334:


+1000 for simplifying the code base while keeping the same performance.
Where can I see the performance test results?

> Remove sync client implementation and wrap async client under sync client 
> interface
> ---
>
> Key: HBASE-18334
> URL: https://issues.apache.org/jira/browse/HBASE-18334
> Project: HBase
>  Issue Type: Task
>Reporter: Phil Yang
> Fix For: 3.0.0
>
>
> Since 2.0 we have an async client, so now we have two client implementations. 
> We can implement a sync client (the Table interface) on top of the async 
> client by getting a CompletableFuture and waiting for it to complete 
> directly. This can reduce the maintenance work on the client side in the 
> future.
> The async client is almost done. We tested the performance and it showed the 
> same performance as the sync client. In branch-2 we can keep the old sync 
> client implementation and remove it in the master branch (since 3.0).
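> As a rough illustration, here is a minimal sketch (not the actual patch) of 
> wrapping an async get under a blocking Table-style call, assuming 
> AsyncTable#get(Get) returns CompletableFuture<Result>; the wrapper class 
> itself is made up:
> {code:java}
> import java.io.IOException;
> import java.io.InterruptedIOException;
> import java.util.concurrent.ExecutionException;
> import org.apache.hadoop.hbase.client.AsyncTable;
> import org.apache.hadoop.hbase.client.Get;
> import org.apache.hadoop.hbase.client.Result;
>
> // Sketch: a blocking get() that simply waits on the async client's future.
> class SyncTableWrapper {
>   private final AsyncTable asyncTable;
>
>   SyncTableWrapper(AsyncTable asyncTable) {
>     this.asyncTable = asyncTable;
>   }
>
>   Result get(Get get) throws IOException {
>     try {
>       // Block until the async operation completes.
>       return asyncTable.get(get).get();
>     } catch (InterruptedException e) {
>       Thread.currentThread().interrupt();
>       throw new InterruptedIOException("Interrupted while waiting for get");
>     } catch (ExecutionException e) {
>       throw new IOException(e.getCause());
>     }
>   }
> }
> {code}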



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18142) Deletion of a cell deletes the previous versions too

2017-07-13 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-18142:
---
Labels: beginner  (was: )

> Deletion of a cell deletes the previous versions too
> 
>
> Key: HBASE-18142
> URL: https://issues.apache.org/jira/browse/HBASE-18142
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Reporter: Karthick
>  Labels: beginner
>
> When I tried to delete a cell using its timestamp in the HBase shell, the 
> previous versions of the same cell also got deleted. But when I tried the 
> same using the Java API, the previous versions were not deleted and I could 
> retrieve the previous values.
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java
> See this file to fix the issue. This method (public Delete addColumns(final 
> byte [] family, final byte [] qualifier, final long timestamp)) only deletes 
> the current version of the cell. The previous versions are not deleted.
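> For reference, a short example contrasting the two Delete methods involved 
> (table, row and family names are made up); per the Delete javadoc, addColumn 
> deletes one exact version while addColumns deletes all versions at or below 
> the given timestamp:
> {code:java}
> import org.apache.hadoop.hbase.client.Delete;
> import org.apache.hadoop.hbase.util.Bytes;
>
> public class DeleteVersionsDemo {
>   public static void main(String[] args) {
>     byte[] family = Bytes.toBytes("family");
>     byte[] qualifier = Bytes.toBytes("name");
>     // Deletes only the cell version whose timestamp is exactly 1.
>     Delete one = new Delete(Bytes.toBytes("row")).addColumn(family, qualifier, 1L);
>     // Deletes every version with timestamp <= 1.
>     Delete all = new Delete(Bytes.toBytes("row")).addColumns(family, qualifier, 1L);
>     System.out.println(one + " / " + all);
>   }
> }
> {code}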



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18380) Implement async RSGroup admin client based on the async admin

2017-07-13 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18380:
---
Summary: Implement async RSGroup admin client based on the async admin  
(was: Implement async RSGroup admin based on the async admin)

> Implement async RSGroup admin client based on the async admin
> -
>
> Key: HBASE-18380
> URL: https://issues.apache.org/jira/browse/HBASE-18380
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>
> Now the RSGroup admin client gets a blocking stub based on the blocking 
> admin's coprocessor service. Since we added coprocessor service support to 
> the async admin, we can implement a new async RSGroup admin client based on 
> the new async admin.
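> A hedged sketch of what the new client could look like, assuming the 
> AsyncAdmin#coprocessorService(stubMaker, callable) API added in HBASE-18342 
> and the existing RSGroupAdminProtos generated classes (treat the exact class 
> and method names as assumptions):
> {code:java}
> import java.util.concurrent.CompletableFuture;
> import org.apache.hadoop.hbase.client.AsyncAdmin;
> import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.GetRSGroupInfoRequest;
> import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.GetRSGroupInfoResponse;
> import org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos.RSGroupAdminService;
>
> public class AsyncRSGroupAdminSketch {
>   // Non-blocking RSGroup lookup through the master coprocessor service.
>   CompletableFuture<GetRSGroupInfoResponse> getRSGroupInfo(AsyncAdmin admin, String group) {
>     GetRSGroupInfoRequest request =
>         GetRSGroupInfoRequest.newBuilder().setRSGroupName(group).build();
>     return admin.coprocessorService(RSGroupAdminService::newStub,
>         (stub, controller, done) -> stub.getRSGroupInfo(controller, request, done));
>   }
> }
> {code}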



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18380) Implement async RSGroup admin based on the async admin

2017-07-13 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-18380:
--

 Summary: Implement async RSGroup admin based on the async admin
 Key: HBASE-18380
 URL: https://issues.apache.org/jira/browse/HBASE-18380
 Project: HBase
  Issue Type: Sub-task
Reporter: Guanghao Zhang


Now the RSGroup admin client gets a blocking stub based on the blocking admin's 
coprocessor service. Since we added coprocessor service support to the async 
admin, we can implement a new async RSGroup admin client based on the new async 
admin.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17359) Implement async admin

2017-07-13 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16086883#comment-16086883
 ] 

Guanghao Zhang commented on HBASE-17359:


Now all methods in Admin are implemented in AsyncAdmin. So the only blocking 
item is the documentation. [~Apache9] Can you take a look at the new doc in 
HBASE-18052? Thanks.

> Implement async admin
> -
>
> Key: HBASE-17359
> URL: https://issues.apache.org/jira/browse/HBASE-17359
> Project: HBase
>  Issue Type: Umbrella
>  Components: Client
>Reporter: Duo Zhang
>Assignee: Guanghao Zhang
>  Labels: asynchronous
> Fix For: 2.0.0
>
>
> And as we will return a CompletableFuture, I think we can just remove the 
> XXXAsync methods and make all the methods blocking, which means we will only 
> complete the CompletableFuture when the operation is done. The user can 
> choose whether to wait on the returned CompletableFuture; a short usage 
> sketch follows the list below.
> Converted this to an umbrella task. There may be some sub-tasks.
> 1. Table admin operations.
> 2. Region admin operations.
> 3. Namespace admin operations.
> 4. Snapshot admin operations.
> 5. Replication admin operations.
> 6. Other operations, like quota, balance..
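> A minimal usage sketch of the CompletableFuture-returning style described 
> above, using disableTable as the example operation:
> {code:java}
> import java.util.concurrent.CompletableFuture;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.AsyncAdmin;
>
> public class AsyncAdminUsageSketch {
>   static void disableAndWait(AsyncAdmin admin, TableName tn) {
>     CompletableFuture<Void> f = admin.disableTable(tn);
>     // Non-blocking: react when the operation is done.
>     f.thenRun(() -> System.out.println(tn + " disabled"));
>     // Or blocking: the caller chooses to wait on the same future.
>     f.join();
>   }
> }
> {code}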



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17678) FilterList with MUST_PASS_ONE may lead to redundant cells returned

2017-07-13 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16086852#comment-16086852
 ] 

Chia-Ping Tsai commented on HBASE-17678:


Got it. Thanks for the explanations. [~busbey]

> FilterList with MUST_PASS_ONE may lead to redundant cells returned
> --
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.0.0, 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: HBASE-17678.addendum.patch, HBASE-17678.addendum.patch, 
> HBASE-17678.branch-1.1.v1.patch, HBASE-17678.branch-1.1.v2.patch, 
> HBASE-17678.branch-1.1.v2.patch, HBASE-17678.branch-1.1.v2.patch, 
> HBASE-17678.branch-1.v1.patch, HBASE-17678.branch-1.v1.patch, 
> HBASE-17678.branch-1.v2.patch, HBASE-17678.branch-1.v2.patch, 
> HBASE-17678.v1.patch, HBASE-17678.v1.rough.patch, HBASE-17678.v2.patch, 
> HBASE-17678.v3.patch, HBASE-17678.v4.patch, HBASE-17678.v4.patch, 
> HBASE-17678.v5.patch, HBASE-17678.v6.patch, HBASE-17678.v7.patch, 
> HBASE-17678.v7.patch, TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element FilterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected since there is 
> only a single filter in the list, and I believe that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the joined filter and not 
> the behavior of any of the individual filters. If this is not a bug, then 
> it would be nice if the documentation were updated to explain this nuanced 
> behavior.
> I know that there was a decision made in an earlier HBase version to keep 
> multiple cells with the same timestamp. This is generally fine but presents 
> an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 'ns:tbl','row','family:name','Jane',1
> {code}
> Then, use a Scala client as:
> {code:none}
> import org.apache.hadoop.hbase.filter._
> import org.apache.hadoop.hbase.util.Bytes
> import org.apache.hadoop.hbase.client._
> import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
> import scala.collection.mutable._
> val config = HBaseConfiguration.create()
> config.set("hbase.zookeeper.quorum", "localhost")
> config.set("hbase.zookeeper.property.clientPort", "2181")
> val connection = ConnectionFactory.createConnection(config)
> val logicalOp = FilterList.Operator.MUST_PASS_ONE
> val limit = 1
> var resultsList = ListBuffer[String]()
> for (offset <- 0 to 20 by limit) {
>   val table = connection.getTable(TableName.valueOf("ns:tbl"))
>   val paginationFilter = new ColumnPaginationFilter(limit,offset)
>   val filterList: FilterList = new FilterList(logicalOp,paginationFilter)
>   println("@ filterList = "+filterList)
>   val results = table.get(new 
> Get(Bytes.toBytes("row")).setFilter(filterList))
>   val cells = results.rawCells()
>   if (cells != null) {
>   for (cell <- cells) {
> val value = new String(CellUtil.cloneValue(cell))
> val qualifier = new String(CellUtil.cloneQualifier(cell))
> val family = new String(CellUtil.cloneFamily(cell))
> val result = "OFFSET = "+offset+":"+family + "," + qualifier +
> "," + value + "," + cell.getTimestamp()
> resultsList.append(result)
>   }
>   }
> }
> resultsList.foreach(println)
> {code}
> Here are the results for different limit and logicalOp settings:
> {code:none}
> Limit = 1 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 1 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 1:family,name,Gil,1
> OFFSET = 2:family,name,Jane,1
> OFFSET = 3:family,name,John,1
> Limit = 2 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 2 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 2:family,name,Jane,1
> {code}
> So, it seems that MUST_PASS_ALL gives the expected behavior, but 
> MUST_PASS_ONE does not. Furthermore, MUST_PASS_ONE seems to give only a 

[jira] [Updated] (HBASE-18061) [C++] Fix retry logic in multi-get calls

2017-07-13 Thread Sudeep Sunthankar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudeep Sunthankar updated HBASE-18061:
--
Status: Open  (was: Patch Available)

> [C++] Fix retry logic in multi-get calls
> 
>
> Key: HBASE-18061
> URL: https://issues.apache.org/jira/browse/HBASE-18061
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Sudeep Sunthankar
> Fix For: HBASE-14850
>
> Attachments: HBASE-18061.HBASE-14850.v1.patch, 
> HBASE-18061.HBASE-14850.v3.patch, HBASE-18061.HBASE-14850.v5.patch, 
> HBASE-18061.HBASE-14850.v6.patch, HBASE-18061.HBASE-14850.v7.patch, 
> hbase-18061-v8.patch
>
>
> HBASE-17576 adds multi-gets. There are a couple of todos to fix in the retry 
> logic, and some unit testing to be done for the multi-gets. We'll do these in 
> this issue. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18379) SnapshotManager#checkSnapshotSupport() should better handle malfunctioning hdfs snapshot

2017-07-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16086837#comment-16086837
 ] 

Ted Yu commented on HBASE-18379:


We can catch Exception in the try block surrounding the getCompletedSnapshots() 
call.
In the catch block, we log the exception and tell the admin to perform manual 
validation afterwards.

What do you think?
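
A minimal sketch of the proposed handling, assuming the existing ss and 
oldSnapshotDir variables in checkSnapshotSupport() (the overload and the log 
message wording are illustrative):
{code:java}
// Sketch: do not let a malfunctioning hdfs snapshot abort master startup.
List<SnapshotDescription> ss = null;
try {
  ss = getCompletedSnapshots(oldSnapshotDir);
} catch (Exception e) {
  // Log and continue; ask the admin to validate the old snapshot
  // directory manually afterwards.
  LOG.error("Failed to list snapshots under " + oldSnapshotDir
      + ", please validate the directory manually", e);
}
{code}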

> SnapshotManager#checkSnapshotSupport() should better handle malfunctioning 
> hdfs snapshot
> 
>
> Key: HBASE-18379
> URL: https://issues.apache.org/jira/browse/HBASE-18379
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> The following was observed by a customer which prevented master from coming 
> up:
> {code}
> 2017-07-13 13:25:07,898 FATAL [xyz:16000.activeMasterManager] master.HMaster: 
> Failed to become active master
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: Daily_Snapshot_Apps_2017-xx
> at org.apache.hadoop.fs.Path.initialize(Path.java:205)
> at org.apache.hadoop.fs.Path.<init>(Path.java:171)
> at org.apache.hadoop.fs.Path.<init>(Path.java:93)
> at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:230)
> at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:263)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:911)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:113)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:966)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:962)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:962)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1534)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1574)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.getCompletedSnapshots(SnapshotManager.java:206)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.checkSnapshotSupport(SnapshotManager.java:1011)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.initialize(SnapshotManager.java:1070)
> at 
> org.apache.hadoop.hbase.procedure.MasterProcedureManagerHost.initialize(MasterProcedureManagerHost.java:50)
> at 
> org.apache.hadoop.hbase.master.HMaster.initializeZKBasedSystemTrackers(HMaster.java:667)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:732)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:213)
> at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1863)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> Daily_Snapshot_Apps_2017-xx
> at java.net.URI.checkPath(URI.java:1823)
> at java.net.URI.<init>(URI.java:745)
> at org.apache.hadoop.fs.Path.initialize(Path.java:202)
> {code}
> Turns out the exception can be reproduced using hdfs command line accessing 
> .snapshot directory.
> SnapshotManager#checkSnapshotSupport() should better handle malfunctioning 
> hdfs snapshot so that master starts up.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18379) SnapshotManager#checkSnapshotSupport() should better handle malfunctioning hdfs snapshot

2017-07-13 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16086833#comment-16086833
 ] 

Jingcheng Du commented on HBASE-18379:
--

It is the place. But the thrown exception is IllegalArgumentException, which is 
a RuntimeException.
So do we need to catch IOE and this RuntimeException for now, and add another 
RuntimeException whenever we find one in the future? Or directly catch 
Exception here?
I am fine with catching exceptions here when checking the snapshots in the old 
snapshot directory, but I cannot confirm whether the change is necessary for 
the path issues.
Your ideas? Thanks.

> SnapshotManager#checkSnapshotSupport() should better handle malfunctioning 
> hdfs snapshot
> 
>
> Key: HBASE-18379
> URL: https://issues.apache.org/jira/browse/HBASE-18379
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> The following was observed by a customer which prevented master from coming 
> up:
> {code}
> 2017-07-13 13:25:07,898 FATAL [xyz:16000.activeMasterManager] master.HMaster: 
> Failed to become active master
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: Daily_Snapshot_Apps_2017-xx
> at org.apache.hadoop.fs.Path.initialize(Path.java:205)
> at org.apache.hadoop.fs.Path.<init>(Path.java:171)
> at org.apache.hadoop.fs.Path.<init>(Path.java:93)
> at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:230)
> at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:263)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:911)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:113)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:966)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:962)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:962)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1534)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1574)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.getCompletedSnapshots(SnapshotManager.java:206)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.checkSnapshotSupport(SnapshotManager.java:1011)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.initialize(SnapshotManager.java:1070)
> at 
> org.apache.hadoop.hbase.procedure.MasterProcedureManagerHost.initialize(MasterProcedureManagerHost.java:50)
> at 
> org.apache.hadoop.hbase.master.HMaster.initializeZKBasedSystemTrackers(HMaster.java:667)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:732)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:213)
> at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1863)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> Daily_Snapshot_Apps_2017-xx
> at java.net.URI.checkPath(URI.java:1823)
> at java.net.URI.<init>(URI.java:745)
> at org.apache.hadoop.fs.Path.initialize(Path.java:202)
> {code}
> Turns out the exception can be reproduced using hdfs command line accessing 
> .snapshot directory.
> SnapshotManager#checkSnapshotSupport() should better handle malfunctioning 
> hdfs snapshot so that master starts up.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18061) [C++] Fix retry logic in multi-get calls

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16086830#comment-16086830
 ] 

Hadoop QA commented on HBASE-18061:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HBASE-18061 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.4.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-18061 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877223/hbase-18061-v8.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7654/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> [C++] Fix retry logic in multi-get calls
> 
>
> Key: HBASE-18061
> URL: https://issues.apache.org/jira/browse/HBASE-18061
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Sudeep Sunthankar
> Fix For: HBASE-14850
>
> Attachments: HBASE-18061.HBASE-14850.v1.patch, 
> HBASE-18061.HBASE-14850.v3.patch, HBASE-18061.HBASE-14850.v5.patch, 
> HBASE-18061.HBASE-14850.v6.patch, HBASE-18061.HBASE-14850.v7.patch, 
> hbase-18061-v8.patch
>
>
> HBASE-17576 adds multi-gets. There are a couple of todos to fix in the retry 
> logic, and some unit testing to be done for the multi-gets. We'll do these in 
> this issue. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18061) [C++] Fix retry logic in multi-get calls

2017-07-13 Thread Sudeep Sunthankar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sudeep Sunthankar updated HBASE-18061:
--
Attachment: hbase-18061-v8.patch

Updated patch with resolution of rebasing and compilation issues.

Thanks

> [C++] Fix retry logic in multi-get calls
> 
>
> Key: HBASE-18061
> URL: https://issues.apache.org/jira/browse/HBASE-18061
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Sudeep Sunthankar
> Fix For: HBASE-14850
>
> Attachments: HBASE-18061.HBASE-14850.v1.patch, 
> HBASE-18061.HBASE-14850.v3.patch, HBASE-18061.HBASE-14850.v5.patch, 
> HBASE-18061.HBASE-14850.v6.patch, HBASE-18061.HBASE-14850.v7.patch, 
> hbase-18061-v8.patch
>
>
> HBASE-17576 adds multi-gets. There are a couple of todos to fix in the retry 
> logic, and some unit testing to be done for the multi-gets. We'll do these in 
> this issue. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18342) Add coprocessor service support for async admin

2017-07-13 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18342:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master and branch-2.

> Add coprocessor service support for async admin
> ---
>
> Key: HBASE-18342
> URL: https://issues.apache.org/jira/browse/HBASE-18342
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18342.master.001.patch, 
> HBASE-18342.master.002.patch, HBASE-18342.master.003.patch, 
> HBASE-18342.master.003.patch, HBASE-18342.master.004.patch, 
> HBASE-18342.master.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18379) SnapshotManager#checkSnapshotSupport() should better handle malfunctioning hdfs snapshot

2017-07-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16086805#comment-16086805
 ] 

Ted Yu commented on HBASE-18379:


When there is no exception, the following code would be executed:
{code}
if (ss != null && !ss.isEmpty()) {
  LOG.error("Snapshots from an earlier release were found under: " + 
oldSnapshotDir);
  LOG.error("Please rename the directory as " + 
HConstants.SNAPSHOT_DIR_NAME);
}
{code}
That is, there is no automatic renaming.
In case of a URISyntaxException (or another exception from hdfs), we can catch 
the exception and log it accordingly.

> SnapshotManager#checkSnapshotSupport() should better handle malfunctioning 
> hdfs snapshot
> 
>
> Key: HBASE-18379
> URL: https://issues.apache.org/jira/browse/HBASE-18379
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> The following was observed by a customer which prevented master from coming 
> up:
> {code}
> 2017-07-13 13:25:07,898 FATAL [xyz:16000.activeMasterManager] master.HMaster: 
> Failed to become active master
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: Daily_Snapshot_Apps_2017-xx
> at org.apache.hadoop.fs.Path.initialize(Path.java:205)
> at org.apache.hadoop.fs.Path.<init>(Path.java:171)
> at org.apache.hadoop.fs.Path.<init>(Path.java:93)
> at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:230)
> at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:263)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:911)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:113)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:966)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:962)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:962)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1534)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1574)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.getCompletedSnapshots(SnapshotManager.java:206)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.checkSnapshotSupport(SnapshotManager.java:1011)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.initialize(SnapshotManager.java:1070)
> at 
> org.apache.hadoop.hbase.procedure.MasterProcedureManagerHost.initialize(MasterProcedureManagerHost.java:50)
> at 
> org.apache.hadoop.hbase.master.HMaster.initializeZKBasedSystemTrackers(HMaster.java:667)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:732)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:213)
> at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1863)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> Daily_Snapshot_Apps_2017-xx
> at java.net.URI.checkPath(URI.java:1823)
> at java.net.URI.<init>(URI.java:745)
> at org.apache.hadoop.fs.Path.initialize(Path.java:202)
> {code}
> Turns out the exception can be reproduced using hdfs command line accessing 
> .snapshot directory.
> SnapshotManager#checkSnapshotSupport() should better handle malfunctioning 
> hdfs snapshot so that master starts up.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-17442) Move most of the replication related classes to hbase-server package

2017-07-13 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-17442:
---
Attachment: HBASE-17442.v3.patch

Attach a v3 patch. I kept ReplicationAdmin in the hbase-client package. There 
are two methods, peerAdded and listReplicationPeers, which need too many 
replication implementation details, so I changed them in ReplicationAdmin. Is 
it ok to modify public methods when we release a major version (2.0)? [~stack]

> Move most of the replication related classes to hbase-server package
> 
>
> Key: HBASE-17442
> URL: https://issues.apache.org/jira/browse/HBASE-17442
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, Replication
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 0001-hbase-replication-module.patch, 
> HBASE-17442.v1.patch, HBASE-17442.v2.patch, HBASE-17442.v2.patch, 
> HBASE-17442.v3.patch
>
>
> After the replication requests are routed through the master, replication 
> implementation details no longer need to be exposed to the client. We should 
> move most of the replication related classes to the hbase-server package.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18342) Add coprocessor service support for async admin

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16086796#comment-16086796
 ] 

Hadoop QA commented on HBASE-18342:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 31s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
47s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
44s{color} | {color:green} hbase-endpoint in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-18342 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877216/HBASE-18342.master.005.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux c680d3b4bd44 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 9e0f450 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7652/testReport/ |
| modules | C: hbase-client hbase-endpoint U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7652/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was automatically generated.



> Add coprocessor service support 

[jira] [Comment Edited] (HBASE-17738) BucketCache startup is slow

2017-07-13 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16083877#comment-16083877
 ] 

ramkrishna.s.vasudevan edited comment on HBASE-17738 at 7/14/17 3:38 AM:
-

bq.The math in calculating the number of BBs per thread seems some thing wrong.
I agree, and thanks for the catch. The change was made on my other box and did 
not make it into this patch.
bq.ets make the Callable return a BB[] so that u can avoid the start and end 
index math and changing a shared variable from diff threads.
Ok. I will take up your suggestion that we discussed internally.


was (Author: ram_krish):
bq.The math in calculating the number of BBs per thread seems some thing wrong.
I agree. Thanks for the catch. The change was done in my other box and was not 
put up in my other place. Good catch.
bq.ets make the Callable return a BB[] so that u can avoid the start and end 
index math and changing a shared variable from diff threads.
Ok. I will take up ur suggestion and the patch that you gave me here.

> BucketCache startup is slow
> ---
>
> Key: HBASE-17738
> URL: https://issues.apache.org/jira/browse/HBASE-17738
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-17738_2.patch, HBASE-17738_2.patch, 
> HBASE-17738_3.patch, HBASE-17738_4.patch, HBASE-17738_5_withoutUnsafe.patch, 
> HBASE-17738_6_withoutUnsafe.patch, HBASE-17738.patch
>
>
> If you set the bucketcache size at 64G, say, and then start hbase, it takes 
> a long time. Can we do the allocations in parallel and not inline with the 
> server startup?
> Related: prefetching on a bucketcache is slow. Speed it up.
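> As illustration, a hedged sketch of the parallel allocation idea (plain JDK, 
> not the actual patch), with each Callable returning its own ByteBuffer[] so 
> that no shared index needs to be updated across threads:
> {code:java}
> import java.nio.ByteBuffer;
> import java.util.ArrayList;
> import java.util.List;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.Future;
>
> public class ParallelAllocSketch {
>   static ByteBuffer[] allocate(int total, int bufSize, int threads) throws Exception {
>     ExecutorService pool = Executors.newFixedThreadPool(threads);
>     try {
>       int perThread = total / threads;
>       List<Future<ByteBuffer[]>> futures = new ArrayList<>();
>       for (int t = 0; t < threads; t++) {
>         // The last thread picks up the remainder.
>         final int count = (t == threads - 1) ? total - perThread * t : perThread;
>         futures.add(pool.submit(() -> {
>           ByteBuffer[] chunk = new ByteBuffer[count];
>           for (int i = 0; i < count; i++) {
>             chunk[i] = ByteBuffer.allocateDirect(bufSize);
>           }
>           return chunk; // each task returns its own slice, no shared state
>         }));
>       }
>       ByteBuffer[] all = new ByteBuffer[total];
>       int pos = 0;
>       for (Future<ByteBuffer[]> f : futures) {
>         ByteBuffer[] chunk = f.get();
>         System.arraycopy(chunk, 0, all, pos, chunk.length);
>         pos += chunk.length;
>       }
>       return all;
>     } finally {
>       pool.shutdown();
>     }
>   }
> }
> {code}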



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18342) Add coprocessor service support for async admin

2017-07-13 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16086765#comment-16086765
 ] 

Guanghao Zhang commented on HBASE-18342:


Rebase the latest master code. Thanks [~Apache9] for reviewing. Will commit it 
shortly.

> Add coprocessor service support for async admin
> ---
>
> Key: HBASE-18342
> URL: https://issues.apache.org/jira/browse/HBASE-18342
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18342.master.001.patch, 
> HBASE-18342.master.002.patch, HBASE-18342.master.003.patch, 
> HBASE-18342.master.003.patch, HBASE-18342.master.004.patch, 
> HBASE-18342.master.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18379) SnapshotManager#checkSnapshotSupport() should better handle malfunctioning hdfs snapshot

2017-07-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16086763#comment-16086763
 ] 

Ted Yu commented on HBASE-18379:


This happened with hbase running on Isilon, where a stray hdfs snapshot got in 
the way - SnapshotManager tried to scan rootdir/.snapshot but encountered the 
exception.

In my opinion, hbase should be more robust in such a scenario.

> SnapshotManager#checkSnapshotSupport() should better handle malfunctioning 
> hdfs snapshot
> 
>
> Key: HBASE-18379
> URL: https://issues.apache.org/jira/browse/HBASE-18379
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> The following was observed by a customer which prevented master from coming 
> up:
> {code}
> 2017-07-13 13:25:07,898 FATAL [xyz:16000.activeMasterManager] master.HMaster: 
> Failed to become active master
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: Daily_Snapshot_Apps_2017-xx
> at org.apache.hadoop.fs.Path.initialize(Path.java:205)
> at org.apache.hadoop.fs.Path.<init>(Path.java:171)
> at org.apache.hadoop.fs.Path.<init>(Path.java:93)
> at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:230)
> at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:263)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:911)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:113)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:966)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:962)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:962)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1534)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1574)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.getCompletedSnapshots(SnapshotManager.java:206)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.checkSnapshotSupport(SnapshotManager.java:1011)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.initialize(SnapshotManager.java:1070)
> at 
> org.apache.hadoop.hbase.procedure.MasterProcedureManagerHost.initialize(MasterProcedureManagerHost.java:50)
> at 
> org.apache.hadoop.hbase.master.HMaster.initializeZKBasedSystemTrackers(HMaster.java:667)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:732)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:213)
> at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1863)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> Daily_Snapshot_Apps_2017-xx
> at java.net.URI.checkPath(URI.java:1823)
> at java.net.URI.<init>(URI.java:745)
> at org.apache.hadoop.fs.Path.initialize(Path.java:202)
> {code}
> Turns out the exception can be reproduced using hdfs command line accessing 
> .snapshot directory.
> SnapshotManager#checkSnapshotSupport() should better handle malfunctioning 
> hdfs snapshot so that master starts up.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18342) Add coprocessor service support for async admin

2017-07-13 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18342:
---
Attachment: HBASE-18342.master.005.patch

> Add coprocessor service support for async admin
> ---
>
> Key: HBASE-18342
> URL: https://issues.apache.org/jira/browse/HBASE-18342
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18342.master.001.patch, 
> HBASE-18342.master.002.patch, HBASE-18342.master.003.patch, 
> HBASE-18342.master.003.patch, HBASE-18342.master.004.patch, 
> HBASE-18342.master.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-18379) SnapshotManager#checkSnapshotSupport() should better handle malfunctioning hdfs snapshot

2017-07-13 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16086759#comment-16086759
 ] 

Jingcheng Du edited comment on HBASE-18379 at 7/14/17 2:52 AM:
---

Thanks [~te...@apache.org].
I checked the code in URI; this exception happens only when the URI has a 
scheme but the path part doesn't start with /.
According to the stack trace, the whole path is Daily_Snapshot_Apps_2017-xx, 
without any scheme (the part before ":"), so it should pass the new Path step.
Do you have ideas why this happens? This just tries to create a Path instance.
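
A small hedged demo of that URI behavior: Path parses a scheme only when a ":" 
appears before any "/", so a plain snapshot name passes while a name containing 
a colon fails exactly this way (the colon-bearing name below is hypothetical):
{code:java}
import org.apache.hadoop.fs.Path;

public class PathSchemeDemo {
  public static void main(String[] args) {
    // No colon: no scheme is parsed, so the Path is created fine.
    System.out.println(new Path("Daily_Snapshot_Apps_2017-xx"));
    // Colon before any '/': the text before ':' is taken as a URI scheme,
    // and the constructor throws IllegalArgumentException wrapping
    // URISyntaxException ("Relative path in absolute URI").
    new Path("Daily_Snapshot_Apps_2017-07-13_13:25");
  }
}
{code}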



was (Author: jingcheng.du):
Thanks [~te...@apache.org].
I checked the code in URI, this exception happens only when it has a schema, 
but the left part doesn't start with /.
According to the stack trace, the whole path is Daily_Snapshot_Apps_2017-xx, no 
any schema (the part before ":"), this should can pass the new Path step.
Do you have ideas why this happens? This just tries to create a path instance.


> SnapshotManager#checkSnapshotSupport() should better handle malfunctioning 
> hdfs snapshot
> 
>
> Key: HBASE-18379
> URL: https://issues.apache.org/jira/browse/HBASE-18379
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> The following was observed by a customer which prevented master from coming 
> up:
> {code}
> 2017-07-13 13:25:07,898 FATAL [xyz:16000.activeMasterManager] master.HMaster: 
> Failed to become active master
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: Daily_Snapshot_Apps_2017-xx
> at org.apache.hadoop.fs.Path.initialize(Path.java:205)
> at org.apache.hadoop.fs.Path.<init>(Path.java:171)
> at org.apache.hadoop.fs.Path.<init>(Path.java:93)
> at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:230)
> at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:263)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:911)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:113)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:966)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:962)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:962)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1534)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1574)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.getCompletedSnapshots(SnapshotManager.java:206)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.checkSnapshotSupport(SnapshotManager.java:1011)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.initialize(SnapshotManager.java:1070)
> at 
> org.apache.hadoop.hbase.procedure.MasterProcedureManagerHost.initialize(MasterProcedureManagerHost.java:50)
> at 
> org.apache.hadoop.hbase.master.HMaster.initializeZKBasedSystemTrackers(HMaster.java:667)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:732)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:213)
> at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1863)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> Daily_Snapshot_Apps_2017-xx
> at java.net.URI.checkPath(URI.java:1823)
> at java.net.URI.<init>(URI.java:745)
> at org.apache.hadoop.fs.Path.initialize(Path.java:202)
> {code}
> Turns out the exception can be reproduced using hdfs command line accessing 
> .snapshot directory.
> SnapshotManager#checkSnapshotSupport() should better handle malfunctioning 
> hdfs snapshot so that master starts up.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18379) SnapshotManager#checkSnapshotSupport() should better handle malfunctioning hdfs snapshot

2017-07-13 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16086759#comment-16086759
 ] 

Jingcheng Du commented on HBASE-18379:
--

Thanks [~te...@apache.org].
I checked the code in URI; this exception happens only when the URI has a 
scheme but the path part doesn't start with /.
According to the stack trace, the whole path is Daily_Snapshot_Apps_2017-xx, 
without any scheme (the part before ":"), so it should pass the new Path step.
Do you have ideas why this happens? This just tries to create a Path instance.


> SnapshotManager#checkSnapshotSupport() should better handle malfunctioning 
> hdfs snapshot
> 
>
> Key: HBASE-18379
> URL: https://issues.apache.org/jira/browse/HBASE-18379
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>
> The following was observed by a customer which prevented master from coming 
> up:
> {code}
> 2017-07-13 13:25:07,898 FATAL [xyz:16000.activeMasterManager] master.HMaster: 
> Failed to become active master
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: Daily_Snapshot_Apps_2017-xx
> at org.apache.hadoop.fs.Path.initialize(Path.java:205)
> at org.apache.hadoop.fs.Path.<init>(Path.java:171)
> at org.apache.hadoop.fs.Path.<init>(Path.java:93)
> at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:230)
> at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:263)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:911)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:113)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:966)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:962)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:962)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1534)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1574)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.getCompletedSnapshots(SnapshotManager.java:206)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.checkSnapshotSupport(SnapshotManager.java:1011)
> at 
> org.apache.hadoop.hbase.master.snapshot.SnapshotManager.initialize(SnapshotManager.java:1070)
> at 
> org.apache.hadoop.hbase.procedure.MasterProcedureManagerHost.initialize(MasterProcedureManagerHost.java:50)
> at 
> org.apache.hadoop.hbase.master.HMaster.initializeZKBasedSystemTrackers(HMaster.java:667)
> at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:732)
> at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:213)
> at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1863)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> Daily_Snapshot_Apps_2017-xx
> at java.net.URI.checkPath(URI.java:1823)
> at java.net.URI.<init>(URI.java:745)
> at org.apache.hadoop.fs.Path.initialize(Path.java:202)
> {code}
> Turns out the exception can be reproduced using hdfs command line accessing 
> .snapshot directory.
> SnapshotManager#checkSnapshotSupport() should better handle malfunctioning 
> hdfs snapshot so that master starts up.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18378) Cloning configuration contained in CoprocessorEnvironment doesn't work

2017-07-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16086752#comment-16086752
 ] 

Ted Yu commented on HBASE-18378:


I haven't found an existing JIRA after a quick search.

Let me know if the hbase side can be improved.

> Cloning configuration contained in CoprocessorEnvironment doesn't work
> --
>
> Key: HBASE-18378
> URL: https://issues.apache.org/jira/browse/HBASE-18378
> Project: HBase
>  Issue Type: Bug
>Reporter: Samarth Jain
>
> In our Phoenix co-processors, we need to clone the configuration passed in 
> CoprocessorEnvironment.
> However, using the copy constructor declared in its parent class, 
> Configuration, doesn't copy over anything.
> For example:
> {code}
> CoprocessorEnvironment e;
> Configuration original = e.getConfiguration();
> Configuration clone = new Configuration(original);
> clone.get(HConstants.ZK_SESSION_TIMEOUT) -> returns null
> e.getConfiguration().get(HConstants.ZK_SESSION_TIMEOUT) -> returns 
> HConstants.DEFAULT_ZK_SESSION_TIMEOUT
> {code}
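> A hedged sketch of a deep-copy workaround, assuming the Configuration's 
> Iterable view reflects the environment's effective (possibly overlaid) 
> settings:
> {code:java}
> import java.util.Map;
> import org.apache.hadoop.conf.Configuration;
>
> public class ConfigDeepCopy {
>   // Copy the effective key/value view instead of relying on the
>   // Configuration(Configuration) copy constructor, which can miss values
>   // served by an overriding subclass.
>   static Configuration deepCopy(Configuration original) {
>     Configuration copy = new Configuration(false); // skip default resources
>     for (Map.Entry<String, String> entry : original) {
>       copy.set(entry.getKey(), entry.getValue());
>     }
>     return copy;
>   }
> }
> {code}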



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18377) Error handling for FileNotFoundException should consider RemoteException in ReplicationSource#openReader()

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16086738#comment-16086738
 ] 

Hadoop QA commented on HBASE-18377:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
3s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.4.0/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
10s{color} | {color:red} hbase-server in master has 9 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
32m 51s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}141m 
12s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}191m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-18377 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877116/18377.v1.txt |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux fe306b30fea7 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 9e0f450 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7651/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7651/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7651/console |
| Powered by | 

[jira] [Commented] (HBASE-18175) Add hbase-spark integration test into hbase-spark-it

2017-07-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086693#comment-16086693
 ] 

Mike Drob commented on HBASE-18175:
---

To clarify, I think you've iterated a lot on this patch already and I don't 
want to see it continue to get held up for minor issues. IMO it's not worth 
spending a ton of time cleaning up the pom or project structure since it might 
all be drastically different soon.

> Add hbase-spark integration test into hbase-spark-it
> 
>
> Key: HBASE-18175
> URL: https://issues.apache.org/jira/browse/HBASE-18175
> Project: HBase
>  Issue Type: Test
>  Components: spark
>Reporter: Yi Liang
>Assignee: Yi Liang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: hbase-18175-master-v2.patch, 
> hbase-18175-master-v3.patch, hbase-18175-master-v4.patch, 
> hbase-18175-master-v5.patch, hbase-18175-master-v6.patch, 
> hbase-18175-master-v7.patch, hbase-18175-master-v8.patch, 
> hbase-18175-master-v9.patch, hbase-18175-v1.patch
>
>
> After HBASE-17574, all tests under hbase-spark are regarded as unit tests, and 
> this jira will add an integration test of hbase-spark into hbase-it. This patch 
> runs the same tests as mapreduce.IntegrationTestBulkLoad, just changing 
> mapreduce to spark.
> test in Maven:
> mvn verify -Dit.test=IntegrationTestSparkBulkLoad
> test on cluster:
> spark-submit --class 
> org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad 
> HBASE_HOME/lib/hbase-it-2.0.0-SNAPSHOT-tests.jar 
> -Dhbase.spark.bulkload.chainlength=50 -m slowDeterministic



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18378) Cloning configuration contained in CoprocessorEnvironment doesn't work

2017-07-13 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086689#comment-16086689
 ] 

James Taylor commented on HBASE-18378:
--

Would you consider this an HDFS bug, and if so, was a JIRA ever filed, [~tedyu]? 
We'd definitely prefer not having the cost of the deep merge (though we've 
minimized it by holding on to one single modified config instance).

> Cloning configuration contained in CoprocessorEnvironment doesn't work
> --
>
> Key: HBASE-18378
> URL: https://issues.apache.org/jira/browse/HBASE-18378
> Project: HBase
>  Issue Type: Bug
>Reporter: Samarth Jain
>
> In our phoenix co-processors, we need to clone configuration passed in 
> CoprocessorEnvironment.
> However, using the copy constructor declared in its parent class, 
> Configuration, doesn't copy over anything.
> For example:
> {code}
> CoprocessorEnvironment e;
> Configuration original = e.getConfiguration();
> Configuration clone = new Configuration(original);
> clone.get(HConstants.ZK_SESSION_TIMEOUT); // returns null
> e.getConfiguration().get(HConstants.ZK_SESSION_TIMEOUT); // returns 
> HConstants.DEFAULT_ZK_SESSION_TIMEOUT
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17922) TestRegionServerHostname always fails against hadoop 3.0.0-alpha2

2017-07-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086652#comment-16086652
 ] 

Hudson commented on HBASE-17922:


FAILURE: Integrated in Jenkins build HBase-2.0 #168 (See 
[https://builds.apache.org/job/HBase-2.0/168/])
HBASE-17922 Clean TestRegionServerHostname for hadoop3. (appy: rev 
246d42297bd5e7ef828374c7a80b36a8a2cdbe7c)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerHostname.java


> TestRegionServerHostname always fails against hadoop 3.0.0-alpha2
> -
>
> Key: HBASE-17922
> URL: https://issues.apache.org/jira/browse/HBASE-17922
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop3
>Affects Versions: 2.0.0
>Reporter: Jonathan Hsieh
>Assignee: Mike Drob
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-17922.patch
>
>
> {code}
> Running org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 126.363 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> testRegionServerHostname(org.apache.hadoop.hbase.regionserver.TestRegionServerHostname)
>   Time elapsed: 120.029 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 120000 
> milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:221)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:405)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:225)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.(MiniHBaseCluster.java:94)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1123)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1077)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:948)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:942)
>   at 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname.testRegionServerHostname(TestRegionServerHostname.java:88)
> Results :
> Tests in error: 
>   TestRegionServerHostname.testRegionServerHostname:88 » TestTimedOut test 
> timed...
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17922) TestRegionServerHostname always fails against hadoop 3.0.0-alpha2

2017-07-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086646#comment-16086646
 ] 

Hudson commented on HBASE-17922:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #3370 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3370/])
HBASE-17922 Clean TestRegionServerHostname for hadoop3. (appy: rev 
9e0f450c0ca732a9634e2147c2e0d7b885eca9cc)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerHostname.java


> TestRegionServerHostname always fails against hadoop 3.0.0-alpha2
> -
>
> Key: HBASE-17922
> URL: https://issues.apache.org/jira/browse/HBASE-17922
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop3
>Affects Versions: 2.0.0
>Reporter: Jonathan Hsieh
>Assignee: Mike Drob
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-17922.patch
>
>
> {code}
> Running org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 126.363 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> testRegionServerHostname(org.apache.hadoop.hbase.regionserver.TestRegionServerHostname)
>   Time elapsed: 120.029 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 120000 
> milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:221)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:405)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:225)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.(MiniHBaseCluster.java:94)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1123)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1077)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:948)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:942)
>   at 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname.testRegionServerHostname(TestRegionServerHostname.java:88)
> Results :
> Tests in error: 
>   TestRegionServerHostname.testRegionServerHostname:88 » TestTimedOut test 
> timed...
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18338) [C++] Implement RpcTestServer

2017-07-13 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086635#comment-16086635
 ] 

Xiaobing Zhou commented on HBASE-18338:
---

Posted v3:
# implemented Echo; however, there is no dispatch code yet, so other test 
functions cannot easily be added
# added a unit test for Echo; see also hbase-rpc-test.cc

Will add dispatch code based on the method name next and do some cleanup work.

> [C++] Implement RpcTestServer
> -
>
> Key: HBASE-18338
> URL: https://issues.apache.org/jira/browse/HBASE-18338
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HBASE-18338.000.patch, HBASE-18338.001.patch, 
> HBASE-18338.002.patch, HBASE-18338.003.patch
>
>
> This is a spin-off from HBASE-18078. We need RpcTestServer to simulate 
> various communication scenarios, e.g. timeout, connection aborted, long 
> running services and so on.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18338) [C++] Implement RpcTestServer

2017-07-13 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HBASE-18338:
--
Attachment: HBASE-18338.003.patch

> [C++] Implement RpcTestServer
> -
>
> Key: HBASE-18338
> URL: https://issues.apache.org/jira/browse/HBASE-18338
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HBASE-18338.000.patch, HBASE-18338.001.patch, 
> HBASE-18338.002.patch, HBASE-18338.003.patch
>
>
> This is a spin-off from HBASE-18078. We need RpcTestServer to simulate 
> various communication scenarios, e.g. timeout, connection aborted, long 
> running services and so on.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18378) Cloning configuration contained in CoprocessorEnvironment doesn't work

2017-07-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086619#comment-16086619
 ] 

Ted Yu commented on HBASE-18378:


Maybe add a variant of create() which skips changing the class loader.

> Cloning configuration contained in CoprocessorEnvironment doesn't work
> --
>
> Key: HBASE-18378
> URL: https://issues.apache.org/jira/browse/HBASE-18378
> Project: HBase
>  Issue Type: Bug
>Reporter: Samarth Jain
>
> In our phoenix co-processors, we need to clone configuration passed in 
> CoprocessorEnvironment.
> However, using the copy constructor declared in its parent class, 
> Configuration, doesn't copy over anything.
> For example:
> {code}
> CoprocessorEnvironment e;
> Configuration original = e.getConfiguration();
> Configuration clone = new Configuration(original);
> clone.get(HConstants.ZK_SESSION_TIMEOUT); // returns null
> e.getConfiguration().get(HConstants.ZK_SESSION_TIMEOUT); // returns 
> HConstants.DEFAULT_ZK_SESSION_TIMEOUT
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18378) Cloning configuration contained in CoprocessorEnvironment doesn't work

2017-07-13 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086615#comment-16086615
 ] 

Samarth Jain commented on HBASE-18378:
--

Using the HBaseConfiguration method could have unintended side effects. For 
example, the HBaseConfiguration#create() method sets the class loader to 
HBaseConfiguration.class.getClassLoader().

{code}
  /**
   * Creates a Configuration with HBase resources
   * @return a Configuration with HBase resources
   */
  public static Configuration create() {
    Configuration conf = new Configuration();
    // In case HBaseConfiguration is loaded from a different classloader than
    // Configuration, conf needs to be set with appropriate class loader to
    // resolve HBase resources.
    conf.setClassLoader(HBaseConfiguration.class.getClassLoader());
    return addHbaseResources(conf);
  }
{code}

So if I used 
{code}
public static Configuration create(final Configuration that)
{code}
then the config returned by the above method would have the class loader set.
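
For illustration, here is a hedged, side-by-side sketch of the two approaches 
discussed above (the {code}env{code} variable and the comments are assumptions 
for the sketch, not code from any patch):

{code}
Configuration original = env.getConfiguration(); // env: a CoprocessorEnvironment

// Copy constructor: values backed by the env's CompoundConfiguration are not
// carried over, so lookups can come back null.
Configuration viaCopyCtor = new Configuration(original);
viaCopyCtor.get(HConstants.ZK_SESSION_TIMEOUT); // may return null

// HBaseConfiguration.create(that): values are merged in, but the returned
// config also carries HBaseConfiguration's class loader as a side effect
// (see the create() body quoted above).
Configuration viaCreate = HBaseConfiguration.create(original);
viaCreate.get(HConstants.ZK_SESSION_TIMEOUT); // resolves as expected
{code}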




> Cloning configuration contained in CoprocessorEnvironment doesn't work
> --
>
> Key: HBASE-18378
> URL: https://issues.apache.org/jira/browse/HBASE-18378
> Project: HBase
>  Issue Type: Bug
>Reporter: Samarth Jain
>
> In our phoenix co-processors, we need to clone configuration passed in 
> CoprocessorEnvironment.
> However, using the copy constructor declared in its parent class, 
> Configuration, doesn't copy over anything.
> For example:
> {code}
> CoprocessorEnvironment e;
> Configuration original = e.getConfiguration();
> Configuration clone = new Configuration(original);
> clone.get(HConstants.ZK_SESSION_TIMEOUT); // returns null
> e.getConfiguration().get(HConstants.ZK_SESSION_TIMEOUT); // returns 
> HConstants.DEFAULT_ZK_SESSION_TIMEOUT
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18378) Cloning configuration contained in CoprocessorEnvironment doesn't work

2017-07-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086610#comment-16086610
 ] 

Ted Yu commented on HBASE-18378:


Deep copy (merge(Configuration destConf, Configuration srcConf)) has its cost.

I guess we cannot turn on deep copy by default in the copy constructor.
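
For reference, a minimal deep-copy sketch using that merge method (the empty 
destination and the env variable are assumptions; reusing one merged instance, 
as Phoenix does, amortizes the cost):

{code}
// Start from an empty Configuration (no default resources loaded) and
// deep-merge every key/value from the environment's config into it.
Configuration deepCopy = new Configuration(false);
HBaseConfiguration.merge(deepCopy, env.getConfiguration());
{code}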

> Cloning configuration contained in CoprocessorEnvironment doesn't work
> --
>
> Key: HBASE-18378
> URL: https://issues.apache.org/jira/browse/HBASE-18378
> Project: HBase
>  Issue Type: Bug
>Reporter: Samarth Jain
>
> In our phoenix co-processors, we need to clone configuration passed in 
> CoprocessorEnvironment.
> However, using the copy constructor declared in its parent class, 
> Configuration, doesn't copy over anything.
> For example:
> {code}
> CoprocessorEnvironment e;
> Configuration original = e.getConfiguration();
> Configuration clone = new Configuration(original);
> clone.get(HConstants.ZK_SESSION_TIMEOUT); // returns null
> e.getConfiguration().get(HConstants.ZK_SESSION_TIMEOUT); // returns 
> HConstants.DEFAULT_ZK_SESSION_TIMEOUT
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18378) Cloning configuration contained in CoprocessorEnvironment doesn't work

2017-07-13 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086606#comment-16086606
 ] 

Samarth Jain commented on HBASE-18378:
--

Thanks for the comment, [~tedyu]. While it might just work out in our case, it 
seems a bit odd to me that for copying a CompoundConfiguration I need to use a 
method of its sibling class, HBaseConfiguration.

> Cloning configuration contained in CoprocessorEnvironment doesn't work
> --
>
> Key: HBASE-18378
> URL: https://issues.apache.org/jira/browse/HBASE-18378
> Project: HBase
>  Issue Type: Bug
>Reporter: Samarth Jain
>
> In our phoenix co-processors, we need to clone configuration passed in 
> CoprocessorEnvironment.
> However, using the copy constructor declared in its parent class, 
> Configuration, doesn't copy over anything.
> For example:
> {code}
> CoprocessorEnvironment e;
> Configuration original = e.getConfiguration();
> Configuration clone = new Configuration(original);
> clone.get(HConstants.ZK_SESSION_TIMEOUT); // returns null
> e.getConfiguration().get(HConstants.ZK_SESSION_TIMEOUT); // returns 
> HConstants.DEFAULT_ZK_SESSION_TIMEOUT
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18175) Add hbase-spark integration test into hbase-spark-it

2017-07-13 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086579#comment-16086579
 ] 

Yi Liang commented on HBASE-18175:
--

{quote}Please file a follow-on JIRA to combine the configuration options for 
the failsafe plugin between hbase-it and hbase-spark-it (possibly in the 
parent, not sure). {quote}
Will file one after investigating.

> Add hbase-spark integration test into hbase-spark-it
> 
>
> Key: HBASE-18175
> URL: https://issues.apache.org/jira/browse/HBASE-18175
> Project: HBase
>  Issue Type: Test
>  Components: spark
>Reporter: Yi Liang
>Assignee: Yi Liang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: hbase-18175-master-v2.patch, 
> hbase-18175-master-v3.patch, hbase-18175-master-v4.patch, 
> hbase-18175-master-v5.patch, hbase-18175-master-v6.patch, 
> hbase-18175-master-v7.patch, hbase-18175-master-v8.patch, 
> hbase-18175-master-v9.patch, hbase-18175-v1.patch
>
>
> After HBASE-17574, all tests under hbase-spark are regarded as unit tests, and 
> this jira will add an integration test of hbase-spark into hbase-it. This patch 
> runs the same tests as mapreduce.IntegrationTestBulkLoad, just changing 
> mapreduce to spark.
> test in Maven:
> mvn verify -Dit.test=IntegrationTestSparkBulkLoad
> test on cluster:
> spark-submit --class 
> org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad 
> HBASE_HOME/lib/hbase-it-2.0.0-SNAPSHOT-tests.jar 
> -Dhbase.spark.bulkload.chainlength=50 -m slowDeterministic



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-18376) Flaky exclusion doesn't appear to work in precommit

2017-07-13 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086553#comment-16086553
 ] 

Appy edited comment on HBASE-18376 at 7/13/17 11:11 PM:


Started a run with DEBUG on. 
https://builds.apache.org/job/PreCommit-HBASE-Build/7651/


was (Author: appy):
Started a run with DEBUG on. 
https://builds.apache.org/job/PreCommit-HBASE-Build/7650/

> Flaky exclusion doesn't appear to work in precommit
> ---
>
> Key: HBASE-18376
> URL: https://issues.apache.org/jira/browse/HBASE-18376
> Project: HBase
>  Issue Type: Bug
>  Components: community, test
>Reporter: Sean Busbey
>Priority: Critical
>
> Yesterday we started defaulting the precommit parameter for the flaky test 
> list to point to the job on builds.a.o. Looks like the personality is 
> ignoring it.
> example build that's marked to keep:
> https://builds.apache.org/job/PreCommit-HBASE-Build/7646/
> (search for 'Running unit tests' to skip to the right part of the console)
> should add some more debug output in there too.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18376) Flaky exclusion doesn't appear to work in precommit

2017-07-13 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086553#comment-16086553
 ] 

Appy commented on HBASE-18376:
--

Started a run with DEBUG on. 
https://builds.apache.org/job/PreCommit-HBASE-Build/7650/

> Flaky exclusion doesn't appear to work in precommit
> ---
>
> Key: HBASE-18376
> URL: https://issues.apache.org/jira/browse/HBASE-18376
> Project: HBase
>  Issue Type: Bug
>  Components: community, test
>Reporter: Sean Busbey
>Priority: Critical
>
> Yesterday we started defaulting the precommit parameter for the flaky test 
> list to point to the job on builds.a.o. Looks like the personality is 
> ignoring it.
> example build that's marked to keep:
> https://builds.apache.org/job/PreCommit-HBASE-Build/7646/
> (search for 'Running unit tests' to skip to the right part of the console)
> should add some more debug output in there too.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18147) nightly job to check health of active branches

2017-07-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086547#comment-16086547
 ] 

Sean Busbey commented on HBASE-18147:
-

Poop. I lost my branch-1.2 output when I moved to branch-1 instead. I'll link 
the output once we have some tonight.

> nightly job to check health of active branches
> --
>
> Key: HBASE-18147
> URL: https://issues.apache.org/jira/browse/HBASE-18147
> Project: HBase
>  Issue Type: Test
>  Components: community, test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HBASE-18147.0.patch, HBASE-18147-branch-1.v1.patch, 
> HBASE-18147.v1.patch
>
>
> We should set up a job that runs Apache Yetus Test Patch's nightly mode. 
> Essentially, it produces a report that considers how the branch measures up 
> against the things we check in our precommit checks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18147) nightly job to check health of active branches

2017-07-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086472#comment-16086472
 ] 

Mike Drob commented on HBASE-18147:
---

Can you point us at a specific build number that has sample output? I tried 
clicking through a few but they all looked like work-in-progress failures.

> nightly job to check health of active branches
> --
>
> Key: HBASE-18147
> URL: https://issues.apache.org/jira/browse/HBASE-18147
> Project: HBase
>  Issue Type: Test
>  Components: community, test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HBASE-18147.0.patch, HBASE-18147-branch-1.v1.patch, 
> HBASE-18147.v1.patch
>
>
> We should set up a job that runs Apache Yetus Test Patch's nightly mode. 
> Essentially, it produces a report that considers how the branch measures up 
> against the things we check in our precommit checks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18366) Fix flaky test hbase.master.procedure.TestServerCrashProcedure#testRecoveryAndDoubleExecutionOnRsWithMeta

2017-07-13 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086471#comment-16086471
 ] 

Umesh Agashe commented on HBASE-18366:
--

Hi [~stack], [~yangzhe1991]:

FWICS, here is the root cause:

The UT tests ServerCrashProcedure when the RS carrying the meta region crashes. 
It also simulates a master crash after executing each step in the procedure.

Initially all RSs are at the same version, i.e. 3.0.0-SNAPSHOT. 
HMaster.getRegionServerVersion() returns version 0.0.0 for the dead RS 
(carrying meta). This makes 
AssignmentManager.getExcludedServersForSystemTable() return a non-empty list, 
which triggers the logic in 
AssignmentManager.checkIfShouldMoveSystemRegionAsync(), which in turn submits 
a MoveRegionProcedure to move the meta region from the RS with version 0.0.0 
to one of the other RSs with the latest version.

As commented before, this causes a race condition between the scan and the 
MoveRegionProcedure.

AssignmentManager.getExcludedServersForSystemTable() uses 
master.getServerManager().getOnlineServersList() to get the list of online 
servers only. But on further scrutiny of the code and logs, I found that a 
server can be online and dead at the same time!

IMO, 
* Currently meta is re/assigned from ServerCrashProcedure, during master 
initialization from MasterMetaBootstrap, and then again in 
checkIfShouldMoveSystemRegionAsync().
* That means meta re/assignment may be attempted up to 3 times under certain 
conditions.
* I am working on HBASE-18261 to have the meta recovery/assignment logic in 
one place.
* I think we can pull these changes for assigning meta to the RS with the 
highest version number there.
* As a result, the RS with the highest version number will be considered for 
meta region assignment:
# when the RS carrying the meta region crashes
# during master startup

Along with the above changes, we obviously need to fix 
ServerManager.isServerOnline() and ServerManager.isServerDead() returning true 
at the same time. This could be a result of the test code simulating a crash, 
but the class itself should not allow this case (IMHO).

I have the following fix ready (and tested) which fixes the test, but I don't 
consider it a long-term fix.
{code}
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
index 046612a..1a2d53b 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
@@ -1760,6 +1760,7 @@ public class AssignmentManager implements ServerListener {
   public List<ServerName> getExcludedServersForSystemTable() {
     List<Pair<ServerName, String>> serverList = master.getServerManager().getOnlineServersList()
         .stream()
+        .filter((s) -> !master.getServerManager().isServerDead(s))
         .map((s) -> new Pair<>(s, master.getRegionServerVersion(s)))
         .collect(Collectors.toList());
     if (serverList.isEmpty()) {
{code}

[~stack], as you have suggested, we can disable the test for now. When we agree 
on the fix, we can enable it. Let me know your thoughts. Thanks!
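
As a side note, a minimal sketch of the invariant being violated (the method 
names are from ServerManager's public API; the assertion itself is 
illustrative, not part of the patch):

{code}
ServerManager sm = master.getServerManager();
for (ServerName sn : sm.getOnlineServersList()) {
  // A server reported online should never also be reported dead.
  assert !sm.isServerDead(sn) : sn + " is both online and dead";
}
{code}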

> Fix flaky test 
> hbase.master.procedure.TestServerCrashProcedure#testRecoveryAndDoubleExecutionOnRsWithMeta
> -
>
> Key: HBASE-18366
> URL: https://issues.apache.org/jira/browse/HBASE-18366
> Project: HBase
>  Issue Type: Bug
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
>
> It worked for a few days after enabling it with HBASE-18278. But started 
> failing after commits:
> 6786b2b
> 68436c9
> 75d2eca
> 50bb045
> df93c13
> It works with one commit before: c5abb6c. Need to see what changed with those 
> commits.
> Currently it fails with TableNotFoundException.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Issue Comment Deleted] (HBASE-18261) [AMv2] Create new RecoverMetaProcedure and use it from ServerCrashProcedure and HMaster.finishActiveMasterInitialization()

2017-07-13 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-18261:
-
Comment: was deleted

(was: Hi [~stack], [~yangzhe1991]:

FWICS, here is the root cause:

The UT tests ServerCrashProcedure when the RS carrying the meta region crashes. 
It also simulates a master crash after executing each step in the procedure.

Initially all RSs are at the same version, i.e. 3.0.0-SNAPSHOT. 
HMaster.getRegionServerVersion() returns version 0.0.0 for the dead RS 
(carrying meta). This makes 
AssignmentManager.getExcludedServersForSystemTable() return a non-empty list, 
which triggers the logic in 
AssignmentManager.checkIfShouldMoveSystemRegionAsync(), which in turn submits 
a MoveRegionProcedure to move the meta region from the RS with version 0.0.0 
to one of the other RSs with the latest version.

As commented before, this causes a race condition between the scan and the 
MoveRegionProcedure.

AssignmentManager.getExcludedServersForSystemTable() uses 
master.getServerManager().getOnlineServersList() to get the list of online 
servers only. But on further scrutiny of the code and logs, I found that a 
server can be online and dead at the same time!

IMO, 
* Currently meta is re/assigned from ServerCrashProcedure, during master 
initialization from MasterMetaBootstrap, and then again in 
checkIfShouldMoveSystemRegionAsync().
* That means meta re/assignment may be attempted up to 3 times under certain 
conditions.
* I am working on HBASE-18261 to have the meta recovery/assignment logic in 
one place.
* I think we can pull these changes for assigning meta to the RS with the 
highest version number there.
* As a result, the RS with the highest version number will be considered for 
meta region assignment:
# when the RS carrying the meta region crashes
# during master startup

Along with the above changes, we obviously need to fix 
ServerManager.isServerOnline() and ServerManager.isServerDead() returning true 
at the same time. This could be a result of the test code simulating a crash, 
but the class itself should not allow this case (IMHO).

I have the following fix ready (and tested) which fixes the test, but I don't 
consider it a long-term fix.
{code}
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
index 046612a..1a2d53b 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
@@ -1760,6 +1760,7 @@ public class AssignmentManager implements ServerListener {
   public List<ServerName> getExcludedServersForSystemTable() {
     List<Pair<ServerName, String>> serverList = master.getServerManager().getOnlineServersList()
         .stream()
+        .filter((s) -> !master.getServerManager().isServerDead(s))
         .map((s) -> new Pair<>(s, master.getRegionServerVersion(s)))
         .collect(Collectors.toList());
     if (serverList.isEmpty()) {
{code}

[~stack], as you have suggested, we can disable the test for now. When we agree 
on the fix, we can enable it. Let me know your thoughts. Thanks!)

> [AMv2] Create new RecoverMetaProcedure and use it from ServerCrashProcedure 
> and HMaster.finishActiveMasterInitialization()
> --
>
> Key: HBASE-18261
> URL: https://issues.apache.org/jira/browse/HBASE-18261
> Project: HBase
>  Issue Type: Improvement
>  Components: amv2
>Affects Versions: 2.0.0-alpha-1
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Fix For: 2.0.0-alpha-2
>
> Attachments: HBASE-18261.master.001.patch
>
>
> When unit test 
> hbase.master.procedure.TestServerCrashProcedure#testRecoveryAndDoubleExecutionOnRsWithMeta()
>  is enabled and run several times, it fails intermittently. The cause is that 
> meta recovery is done at two different places:
> * ServerCrashProcedure.processMeta()
> * HMaster.finishActiveMasterInitialization()
> and it's not coordinated.
> When HMaster.finishActiveMasterInitialization() gets to submit splitMetaLog() 
> first, the call from ServerCrashProcedure.processMeta() fails while it is 
> running, causing the step to be retried in a loop.
> When ServerCrashProcedure.processMeta() submits splitMetaLog after the 
> splitMetaLog from HMaster.finishActiveMasterInitialization() has finished, 
> success is returned without doing any work.
> But if ServerCrashProcedure.processMeta() submits the splitMetaLog request 
> and, while it is in flight, HMaster.finishActiveMasterInitialization() 
> submits it, the test fails with an exception.
> [~stack] and I discussed the possible solution:
> Create RecoverMetaProcedure and call it where required. Procedure framework 
> provides mutual 

[jira] [Commented] (HBASE-18261) [AMv2] Create new RecoverMetaProcedure and use it from ServerCrashProcedure and HMaster.finishActiveMasterInitialization()

2017-07-13 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086469#comment-16086469
 ] 

Umesh Agashe commented on HBASE-18261:
--

Hi [~stack], [~yangzhe1991]:

FWICS, here is the root cause:

The UT tests ServerCrashProcedure when the RS carrying the meta region crashes. 
It also simulates a master crash after executing each step in the procedure.

Initially all RSs are at the same version, i.e. 3.0.0-SNAPSHOT. 
HMaster.getRegionServerVersion() returns version 0.0.0 for the dead RS 
(carrying meta). This makes 
AssignmentManager.getExcludedServersForSystemTable() return a non-empty list, 
which triggers the logic in 
AssignmentManager.checkIfShouldMoveSystemRegionAsync(), which in turn submits 
a MoveRegionProcedure to move the meta region from the RS with version 0.0.0 
to one of the other RSs with the latest version.

As commented before, this causes a race condition between the scan and the 
MoveRegionProcedure.

AssignmentManager.getExcludedServersForSystemTable() uses 
master.getServerManager().getOnlineServersList() to get the list of online 
servers only. But on further scrutiny of the code and logs, I found that a 
server can be online and dead at the same time!

IMO, 
* Currently meta is re/assigned from ServerCrashProcedure, during master 
initialization from MasterMetaBootstrap, and then again in 
checkIfShouldMoveSystemRegionAsync().
* That means meta re/assignment may be attempted up to 3 times under certain 
conditions.
* I am working on HBASE-18261 to have the meta recovery/assignment logic in 
one place.
* I think we can pull these changes for assigning meta to the RS with the 
highest version number there.
* As a result, the RS with the highest version number will be considered for 
meta region assignment:
# when the RS carrying the meta region crashes
# during master startup

Along with the above changes, we obviously need to fix 
ServerManager.isServerOnline() and ServerManager.isServerDead() returning true 
at the same time. This could be a result of the test code simulating a crash, 
but the class itself should not allow this case (IMHO).

I have the following fix ready (and tested) which fixes the test, but I don't 
consider it a long-term fix.
{code}
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
index 046612a..1a2d53b 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
@@ -1760,6 +1760,7 @@ public class AssignmentManager implements ServerListener {
   public List<ServerName> getExcludedServersForSystemTable() {
     List<Pair<ServerName, String>> serverList = master.getServerManager().getOnlineServersList()
         .stream()
+        .filter((s) -> !master.getServerManager().isServerDead(s))
         .map((s) -> new Pair<>(s, master.getRegionServerVersion(s)))
         .collect(Collectors.toList());
     if (serverList.isEmpty()) {
{code}

[~stack], as you have suggested, we can disable the test for now. When we agree 
on the fix, we can enable it. Let me know your thoughts. Thanks!

> [AMv2] Create new RecoverMetaProcedure and use it from ServerCrashProcedure 
> and HMaster.finishActiveMasterInitialization()
> --
>
> Key: HBASE-18261
> URL: https://issues.apache.org/jira/browse/HBASE-18261
> Project: HBase
>  Issue Type: Improvement
>  Components: amv2
>Affects Versions: 2.0.0-alpha-1
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
> Fix For: 2.0.0-alpha-2
>
> Attachments: HBASE-18261.master.001.patch
>
>
> When unit test 
> hbase.master.procedure.TestServerCrashProcedure#testRecoveryAndDoubleExecutionOnRsWithMeta()
>  is enabled and run several times, it fails intermittently. The cause is that 
> meta recovery is done at two different places:
> * ServerCrashProcedure.processMeta()
> * HMaster.finishActiveMasterInitialization()
> and it's not coordinated.
> When HMaster.finishActiveMasterInitialization() gets to submit splitMetaLog() 
> first, the call from ServerCrashProcedure.processMeta() fails while it is 
> running, causing the step to be retried in a loop.
> When ServerCrashProcedure.processMeta() submits splitMetaLog after the 
> splitMetaLog from HMaster.finishActiveMasterInitialization() has finished, 
> success is returned without doing any work.
> But if ServerCrashProcedure.processMeta() submits the splitMetaLog request 
> and, while it is in flight, HMaster.finishActiveMasterInitialization() 
> submits it, the test fails with an exception.
> [~stack] and I discussed the possible solution:
> Create RecoverMetaProcedure and call it where required. Procedure framework 
> provides 

[jira] [Commented] (HBASE-18147) nightly job to check health of active branches

2017-07-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086457#comment-16086457
 ] 

Sean Busbey commented on HBASE-18147:
-

Here's the job I have configured to currently just look at my feature branches:

https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/

> nightly job to check health of active branches
> --
>
> Key: HBASE-18147
> URL: https://issues.apache.org/jira/browse/HBASE-18147
> Project: HBase
>  Issue Type: Test
>  Components: community, test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HBASE-18147.0.patch, HBASE-18147-branch-1.v1.patch, 
> HBASE-18147.v1.patch
>
>
> We should set up a job that runs Apache Yetus Test Patch's nightly mode. 
> Essentially, it produces a report that considers how the branch measures up 
> against the things we check in our precommit checks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18147) nightly job to check health of active branches

2017-07-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-18147:

Attachment: HBASE-18147.v1.patch
HBASE-18147-branch-1.v1.patch

I've reworked things to use 
[Jenkinsfile|https://jenkins.io/doc/book/pipeline/jenkinsfile/] pipelines. 
Unfortunately, the pipelines are configured per-branch.

Attaching a patch that'll work for master/branch-2 and another that will work 
for branch-1*.

This isn't all bad. At the moment our configs happen to work the same across 
all branches (if we used the branch-1 patch everywhere, the extra multijdk dir 
would get ignored like it does on precommit), but they might not later. For 
example, if we want to keep which nightly checks cause failures in version 
control as we clean things up, then this setup works well (the alternative is 
per-branch env variables in Jenkins).

There are still some things I'd like to tweak over time and some things that I 
think are bugs in yetus. But things are working well enough that I think this 
is ready to go for a test run on our actual branches.

> nightly job to check health of active branches
> --
>
> Key: HBASE-18147
> URL: https://issues.apache.org/jira/browse/HBASE-18147
> Project: HBase
>  Issue Type: Test
>  Components: community, test
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HBASE-18147.0.patch, HBASE-18147-branch-1.v1.patch, 
> HBASE-18147.v1.patch
>
>
> We should set up a job that runs Apache Yetus Test Patch's nightly mode. 
> Essentially, it produces a report that considers how the branch measures up 
> against the things we check in our precommit checks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18378) Cloning configuration contained in CoprocessorEnvironment doesn't work

2017-07-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086393#comment-16086393
 ] 

Ted Yu commented on HBASE-18378:


Have you tried this method from HBaseConfiguration ?
{code}
  public static Configuration create(final Configuration that) {
{code}

> Cloning configuration contained in CoprocessorEnvironment doesn't work
> --
>
> Key: HBASE-18378
> URL: https://issues.apache.org/jira/browse/HBASE-18378
> Project: HBase
>  Issue Type: Bug
>Reporter: Samarth Jain
>
> In our phoenix co-processors, we need to clone configuration passed in 
> CoprocessorEnvironment.
> However, using the copy constructor declared in its parent class, 
> Configuration, doesn't copy over anything.
> For example:
> {code}
> CoprocessorEnvironment e;
> Configuration original = e.getConfiguration();
> Configuration clone = new Configuration(original);
> clone.get(HConstants.ZK_SESSION_TIMEOUT); // returns null
> e.getConfiguration().get(HConstants.ZK_SESSION_TIMEOUT); // returns 
> HConstants.DEFAULT_ZK_SESSION_TIMEOUT
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18379) SnapshotManager#checkSnapshotSupport() should better handle malfunctioning hdfs snapshot

2017-07-13 Thread Ted Yu (JIRA)
Ted Yu created HBASE-18379:
--

 Summary: SnapshotManager#checkSnapshotSupport() should better 
handle malfunctioning hdfs snapshot
 Key: HBASE-18379
 URL: https://issues.apache.org/jira/browse/HBASE-18379
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu


The following was observed by a customer; it prevented the master from coming up:
{code}
2017-07-13 13:25:07,898 FATAL [xyz:16000.activeMasterManager] master.HMaster: 
Failed to become active master
java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path 
in absolute URI: Daily_Snapshot_Apps_2017-xx
at org.apache.hadoop.fs.Path.initialize(Path.java:205)
at org.apache.hadoop.fs.Path.(Path.java:171)
at org.apache.hadoop.fs.Path.(Path.java:93)
at 
org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:230)
at 
org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:263)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:911)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:113)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:966)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:962)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:962)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1534)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1574)
at 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager.getCompletedSnapshots(SnapshotManager.java:206)
at 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager.checkSnapshotSupport(SnapshotManager.java:1011)
at 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager.initialize(SnapshotManager.java:1070)
at 
org.apache.hadoop.hbase.procedure.MasterProcedureManagerHost.initialize(MasterProcedureManagerHost.java:50)
at 
org.apache.hadoop.hbase.master.HMaster.initializeZKBasedSystemTrackers(HMaster.java:667)
at 
org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:732)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:213)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1863)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
Daily_Snapshot_Apps_2017-xx
at java.net.URI.checkPath(URI.java:1823)
at java.net.URI.(URI.java:745)
at org.apache.hadoop.fs.Path.initialize(Path.java:202)
{code}
It turns out the exception can be reproduced using the hdfs command line to 
access the .snapshot directory.

SnapshotManager#checkSnapshotSupport() should better handle a malfunctioning 
hdfs snapshot so that the master can start up.
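
One possible shape of the handling (a hedged sketch only; the helper name and 
the decision to skip the listing on failure are assumptions, not a committed 
fix):

{code}
// Wrap the snapshot-directory listing so a malformed entry (such as the
// relative-path URI in the stack trace above) does not abort master startup.
private FileStatus[] listSnapshotsSafely(FileSystem fs, Path snapshotDir) {
  try {
    return fs.listStatus(snapshotDir);
  } catch (IllegalArgumentException | IOException e) {
    LOG.warn("Unable to list snapshots under " + snapshotDir
        + "; continuing master startup without them", e);
    return new FileStatus[0];
  }
}
{code}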



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18339) Update test-patch to use hadoop 3.0.0-alpha4

2017-07-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086374#comment-16086374
 ] 

Hudson commented on HBASE-18339:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3369 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3369/])
HBASE-18339 Update to hadoop3-alpha4 (busbey: rev 
500592dfd0fb0446dc501d11ade0f3b3ddc49bd3)
* (edit) dev-support/hbase-personality.sh
* (edit) pom.xml


> Update test-patch to use hadoop 3.0.0-alpha4
> 
>
> Key: HBASE-18339
> URL: https://issues.apache.org/jira/browse/HBASE-18339
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Mike Drob
>Assignee: Mike Drob
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18339.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18339) Update test-patch to use hadoop 3.0.0-alpha4

2017-07-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086357#comment-16086357
 ] 

Hudson commented on HBASE-18339:


FAILURE: Integrated in Jenkins build HBase-2.0 #167 (See 
[https://builds.apache.org/job/HBase-2.0/167/])
HBASE-18339 Update to hadoop3-alpha4 (busbey: rev 
e22f7bc893331325dae361301c8b491d6acd7185)
* (edit) pom.xml


> Update test-patch to use hadoop 3.0.0-alpha4
> 
>
> Key: HBASE-18339
> URL: https://issues.apache.org/jira/browse/HBASE-18339
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Mike Drob
>Assignee: Mike Drob
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18339.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18377) Error handling for FileNotFoundException should consider RemoteException in ReplicationSource#openReader()

2017-07-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086350#comment-16086350
 ] 

Ted Yu commented on HBASE-18377:


openReader(Path) is private and not static, so there is no easy way to 
directly inject a RemoteException.

I searched for FileNotFoundException-related tests under hbase-server: there 
was no replication test.

Any hint on how this can be tested?
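
For context, the unwrap pattern the description calls for looks roughly like 
this (a sketch only; the catch-site placement and the repLogReader field name 
are assumptions based on the stack trace below):

{code}
try {
  this.reader = repLogReader.openReader(this.currentPath);
} catch (IOException ioe) {
  // RemoteException carries the server-side exception class name; unwrap it
  // so a remote FileNotFoundException is handled like a local one.
  IOException cause = ioe instanceof RemoteException
      ? ((RemoteException) ioe).unwrapRemoteException(FileNotFoundException.class)
      : ioe;
  if (cause instanceof FileNotFoundException) {
    // existing archived-WAL handling goes here
  }
}
{code}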

> Error handling for FileNotFoundException should consider RemoteException in 
> ReplicationSource#openReader()
> --
>
> Key: HBASE-18377
> URL: https://issues.apache.org/jira/browse/HBASE-18377
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
> Attachments: 18377.branch-1.3.txt, 18377.v1.txt
>
>
> In region server log, I observed the following:
> {code}
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File 
> does not exist: 
> /apps/hbase/data/WALs/lx.p.com,16020,1497300923131/497300923131. 
> default.1497302873178
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1860)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1831)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1744)
> ...
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:326)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:162)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:782)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:293)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:267)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:255)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:414)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:69)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:605)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:364)
> {code}
> We have code in ReplicationSource#openReader() which is supposed to handle 
> FileNotFoundException but RemoteException wrapping FileNotFoundException was 
> missed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18377) Error handling for FileNotFoundException should consider RemoteException in ReplicationSource#openReader()

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086317#comment-16086317
 ] 

Hadoop QA commented on HBASE-18377:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
49s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.4.0/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
55s{color} | {color:red} hbase-server in master has 9 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m  0s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}134m  4s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}193m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.procedure.TestServerCrashProcedure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-18377 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877116/18377.v1.txt |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 2f86b7a385ae 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 500592d |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC3 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7649/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7649/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 

[jira] [Commented] (HBASE-18303) Clean up some parameterized test declarations

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086316#comment-16086316
 ] 

Hadoop QA commented on HBASE-18303:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 0s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m  
0s{color} | {color:red} hbase-server in master has 9 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} hbase-rest in master has 3 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 29s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
17s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
39s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hbase-prefix-tree in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}150m  
2s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
4s{color} | {color:green} hbase-rest in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}222m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37 |
| JIRA Issue | HBASE-18303 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877109/HBASE-18303.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux a6118068a29c 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 

[jira] [Created] (HBASE-18378) Cloning configuration contained in CoprocessorEnvironment doesn't work

2017-07-13 Thread Samarth Jain (JIRA)
Samarth Jain created HBASE-18378:


 Summary: Cloning configuration contained in CoprocessorEnvironment 
doesn't work
 Key: HBASE-18378
 URL: https://issues.apache.org/jira/browse/HBASE-18378
 Project: HBase
  Issue Type: Bug
Reporter: Samarth Jain


In our phoenix co-processors, we need to clone the configuration passed in 
CoprocessorEnvironment. However, using the copy constructor declared in its 
parent class, Configuration, doesn't copy over anything.

For example:
{code}
CoprocessorEnvironment e;
Configuration original = e.getConfiguration();
Configuration clone = new Configuration(original);
clone.get(HConstants.ZK_SESSION_TIMEOUT)                 // -> returns null
e.getConfiguration().get(HConstants.ZK_SESSION_TIMEOUT)  // -> returns HConstants.DEFAULT_ZK_SESSION_TIMEOUT
{code}
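
One possible workaround, sketched below purely for illustration (an editor's 
assumption, not the eventual fix): iterate the effective entries of the 
environment's configuration instead of relying on the copy constructor.

{code}
import java.util.Map;

import org.apache.hadoop.conf.Configuration;

public final class ConfigCloneUtil {
  // Copies every effective key/value pair of 'original' into a fresh
  // Configuration. Iterating a Configuration yields its effective entries,
  // including values that came from loaded defaults.
  public static Configuration deepCopy(Configuration original) {
    Configuration clone = new Configuration(false); // don't reload *-default.xml
    for (Map.Entry<String, String> entry : original) {
      clone.set(entry.getKey(), entry.getValue());
    }
    return clone;
  }
}
{code}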




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18175) Add hbase-spark integration test into hbase-spark-it

2017-07-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086228#comment-16086228
 ] 

Sean Busbey commented on HBASE-18175:
-

Yeah, branch-1 is waiting on the scope doc from me summarizing the DISCUSS 
thread.

> Add hbase-spark integration test into hbase-spark-it
> 
>
> Key: HBASE-18175
> URL: https://issues.apache.org/jira/browse/HBASE-18175
> Project: HBase
>  Issue Type: Test
>  Components: spark
>Reporter: Yi Liang
>Assignee: Yi Liang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: hbase-18175-master-v2.patch, 
> hbase-18175-master-v3.patch, hbase-18175-master-v4.patch, 
> hbase-18175-master-v5.patch, hbase-18175-master-v6.patch, 
> hbase-18175-master-v7.patch, hbase-18175-master-v8.patch, 
> hbase-18175-master-v9.patch, hbase-18175-v1.patch
>
>
> After HBASE-17574, all tests under hbase-spark are regarded as unit tests, and 
> this jira will add integration tests for hbase-spark into hbase-it. This patch 
> runs the same tests as mapreduce.IntegrationTestBulkLoad, just changing 
> mapreduce to spark.
> test in Maven:
> mvn verify -Dit.test=IntegrationTestSparkBulkLoad
> test on cluster:
> spark-submit --class 
> org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad 
> HBASE_HOME/lib/hbase-it-2.0.0-SNAPSHOT-tests.jar 
> -Dhbase.spark.bulkload.chainlength=50 -m slowDeterministic



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17678) FilterList with MUST_PASS_ONE may lead to redundant cells returned

2017-07-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086225#comment-16086225
 ] 

Sean Busbey commented on HBASE-17678:
-

This issue is not listed as a blocker, so no. According to the current jira 
status, it's been present since 1.2.1, so it wouldn't be a new problem for a 
potential additional 1.2.z release either.

> FilterList with MUST_PASS_ONE may lead to redundant cells returned
> --
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.0.0, 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: HBASE-17678.addendum.patch, HBASE-17678.addendum.patch, 
> HBASE-17678.branch-1.1.v1.patch, HBASE-17678.branch-1.1.v2.patch, 
> HBASE-17678.branch-1.1.v2.patch, HBASE-17678.branch-1.1.v2.patch, 
> HBASE-17678.branch-1.v1.patch, HBASE-17678.branch-1.v1.patch, 
> HBASE-17678.branch-1.v2.patch, HBASE-17678.branch-1.v2.patch, 
> HBASE-17678.v1.patch, HBASE-17678.v1.rough.patch, HBASE-17678.v2.patch, 
> HBASE-17678.v3.patch, HBASE-17678.v4.patch, HBASE-17678.v4.patch, 
> HBASE-17678.v5.patch, HBASE-17678.v6.patch, HBASE-17678.v7.patch, 
> HBASE-17678.v7.patch, TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element filterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected since there is 
> only a single filter in the list, and I would believe that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the joined filter and not 
> the behavior of any one of the individual filters. If this is not a bug then 
> it would be nice if the documentation is updated to explain this nuanced 
> behavior.
> I know that there was a decision made in an earlier Hbase version to keep 
> multiple cells with the same timestamp. This is generally fine but presents 
> an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 'ns:tbl','row','family:name','Jane',1
> {code}
> Then, use a Scala client as:
> {code:none}
> import org.apache.hadoop.hbase.filter._
> import org.apache.hadoop.hbase.util.Bytes
> import org.apache.hadoop.hbase.client._
> import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
> import scala.collection.mutable._
> val config = HBaseConfiguration.create()
> config.set("hbase.zookeeper.quorum", "localhost")
> config.set("hbase.zookeeper.property.clientPort", "2181")
> val connection = ConnectionFactory.createConnection(config)
> val logicalOp = FilterList.Operator.MUST_PASS_ONE
> val limit = 1
> var resultsList = ListBuffer[String]()
> for (offset <- 0 to 20 by limit) {
>   val table = connection.getTable(TableName.valueOf("ns:tbl"))
>   val paginationFilter = new ColumnPaginationFilter(limit,offset)
>   val filterList: FilterList = new FilterList(logicalOp,paginationFilter)
>   println("@ filterList = "+filterList)
>   val results = table.get(new 
> Get(Bytes.toBytes("row")).setFilter(filterList))
>   val cells = results.rawCells()
>   if (cells != null) {
>   for (cell <- cells) {
> val value = new String(CellUtil.cloneValue(cell))
> val qualifier = new String(CellUtil.cloneQualifier(cell))
> val family = new String(CellUtil.cloneFamily(cell))
> val result = "OFFSET = "+offset+":"+family + "," + qualifier 
> + "," + value + "," + cell.getTimestamp()
> resultsList.append(result)
>   }
>   }
> }
> resultsList.foreach(println)
> {code}
> Here are the results for different limit and logicalOp settings:
> {code:none}
> Limit = 1 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 1 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 1:family,name,Gil,1
> OFFSET = 2:family,name,Jane,1
> OFFSET = 3:family,name,John,1
> Limit = 2 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 2 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 2:family,name,Jane,1
> 

[jira] [Updated] (HBASE-18371) [C++] Update folly and wangle dependencies

2017-07-13 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-18371:
--
Attachment: hbase-18371_v2.patch

The v2 patch works and passes the unit tests for me. 

> [C++] Update folly and wangle dependencies
> --
>
> Key: HBASE-18371
> URL: https://issues.apache.org/jira/browse/HBASE-18371
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: HBASE-14850
>
> Attachments: hbase-18371_v1.patch, hbase-18371_v2.patch
>
>
> We need to update folly and wangle dependency versions. Debugging an issue, I 
> realized that we may need a couple of recent patches from wangle. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-17922) TestRegionServerHostname always fails against hadoop 3.0.0-alpha2

2017-07-13 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-17922:
-
Fix Version/s: 3.0.0

> TestRegionServerHostname always fails against hadoop 3.0.0-alpha2
> -
>
> Key: HBASE-17922
> URL: https://issues.apache.org/jira/browse/HBASE-17922
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop3
>Affects Versions: 2.0.0
>Reporter: Jonathan Hsieh
>Assignee: Mike Drob
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-17922.patch
>
>
> {code}
> Running org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 126.363 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> testRegionServerHostname(org.apache.hadoop.hbase.regionserver.TestRegionServerHostname)
>   Time elapsed: 120.029 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 12 
> milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:221)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:405)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:225)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.(MiniHBaseCluster.java:94)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1123)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1077)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:948)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:942)
>   at 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname.testRegionServerHostname(TestRegionServerHostname.java:88)
> Results :
> Tests in error: 
>   TestRegionServerHostname.testRegionServerHostname:88 » TestTimedOut test 
> timed...
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-17922) TestRegionServerHostname always fails against hadoop 3.0.0-alpha2

2017-07-13 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-17922:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> TestRegionServerHostname always fails against hadoop 3.0.0-alpha2
> -
>
> Key: HBASE-17922
> URL: https://issues.apache.org/jira/browse/HBASE-17922
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop3
>Affects Versions: 2.0.0
>Reporter: Jonathan Hsieh
>Assignee: Mike Drob
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-17922.patch
>
>
> {code}
> Running org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 126.363 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> testRegionServerHostname(org.apache.hadoop.hbase.regionserver.TestRegionServerHostname)
>   Time elapsed: 120.029 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 12 
> milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:221)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:405)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:225)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.(MiniHBaseCluster.java:94)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1123)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1077)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:948)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:942)
>   at 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname.testRegionServerHostname(TestRegionServerHostname.java:88)
> Results :
> Tests in error: 
>   TestRegionServerHostname.testRegionServerHostname:88 » TestTimedOut test 
> timed...
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17922) TestRegionServerHostname always fails against hadoop 3.0.0-alpha2

2017-07-13 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086199#comment-16086199
 ] 

Appy commented on HBASE-17922:
--

Pushed to master and branch-2.
Thanks [~mdrob].

> TestRegionServerHostname always fails against hadoop 3.0.0-alpha2
> -
>
> Key: HBASE-17922
> URL: https://issues.apache.org/jira/browse/HBASE-17922
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop3
>Affects Versions: 2.0.0
>Reporter: Jonathan Hsieh
>Assignee: Mike Drob
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-17922.patch
>
>
> {code}
> Running org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 126.363 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> testRegionServerHostname(org.apache.hadoop.hbase.regionserver.TestRegionServerHostname)
>   Time elapsed: 120.029 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 12 
> milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:221)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:405)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:225)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.(MiniHBaseCluster.java:94)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1123)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1077)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:948)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:942)
>   at 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname.testRegionServerHostname(TestRegionServerHostname.java:88)
> Results :
> Tests in error: 
>   TestRegionServerHostname.testRegionServerHostname:88 » TestTimedOut test 
> timed...
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18370) Master should attempt reassignment of regions in FAILED_OPEN state

2017-07-13 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086182#comment-16086182
 ] 

Gary Helmling commented on HBASE-18370:
---

One of the problems we have with the region assignment retries in 1.3 and prior 
is the lack of backoff between retry attempts, so we burn through the retries 
quickly.  With HBASE-16209 in branch-1+, we now have a backoff policy for 
region open attempts.  If we just change the default configuration for max 
retries to Integer.MAX_VALUE, this should effectively give us "retry forever" 
for region open, which seems much better than the current behavior.

So I'm not sure we need anything more than a config change.  Are there any 
places where this will not be sufficient?  I'm not sure offhand whether we 
would give up during master failover.
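
For illustration, a config-only change along those lines might look like the 
sketch below. This is an editor's sketch: the property name is the branch-1 
assignment retry setting, and treating Integer.MAX_VALUE as "retry forever" 
is an assumption here, not a tested recommendation.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RetryForeverSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Maximum attempts the master makes to open a region before parking it
    // in FAILED_OPEN. Raising it approximates "retry forever"; the backoff
    // from HBASE-16209 keeps attempts from burning through quickly.
    conf.setInt("hbase.assignment.maximum.attempts", Integer.MAX_VALUE);
  }
}
{code}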

> Master should attempt reassignment of regions in FAILED_OPEN state
> --
>
> Key: HBASE-18370
> URL: https://issues.apache.org/jira/browse/HBASE-18370
> Project: HBase
>  Issue Type: Improvement
>Reporter: Andrew Purtell
>
> Currently once a region goes into FAILED_OPEN state this requires operator 
> intervention. With some underlying causes, this is necessary. With others, 
> the master could eventually successfully deploy the region without humans in 
> the loop. The master should optionally attempt automatic resolution of 
> FAILED_OPEN states with a strategy of: delay, unassign, reassign. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18375) The pool chunks from ChunkCreator are deallocated while in pool because there is no reference to them

2017-07-13 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-18375:
---
Affects Version/s: 2.0.0-alpha-1

> The pool chunks from ChunkCreator are deallocated while in pool because there 
> is no reference to them
> -
>
> Key: HBASE-18375
> URL: https://issues.apache.org/jira/browse/HBASE-18375
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha-1
>Reporter: Anastasia Braginsky
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18375-V01.patch
>
>
> Because the MSLAB list of chunks was changed to a list of chunk IDs, chunks 
> returned to the pool can be deallocated by the JVM because there is no 
> reference to them. The solution is to protect pool chunks from GC via the 
> strong map in ChunkCreator introduced by HBASE-18010. Will prepare the patch 
> today.
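
For readers unfamiliar with the mechanism, a minimal sketch of the idea 
follows. All names here (the Chunk stand-in, the map layout) are hypothetical 
editor's illustrations, not the attached patch: as long as a pooled chunk's ID 
maps to a strong reference, the JVM cannot collect it.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class ChunkPinningSketch {
  // Stand-in for the real MSLAB chunk; only the ID and backing array matter.
  static final class Chunk {
    final int id;
    final byte[] data;
    Chunk(int id, int size) { this.id = id; this.data = new byte[size]; }
  }

  // Strong references: a chunk present in this map cannot be garbage
  // collected, even if the MSLAB itself only remembers its integer ID.
  private final Map<Integer, Chunk> chunkIdMap = new ConcurrentHashMap<>();

  void putbackToPool(Chunk c) {
    chunkIdMap.put(c.id, c);   // pinned while sitting in the pool
  }

  Chunk getChunk(int id) {
    return chunkIdMap.get(id); // still pinned; removal is an explicit choice
  }
}
{code}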



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18375) The pool chunks from ChunkCreator are deallocated while in pool because there is no reference to them

2017-07-13 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-18375:
---
Fix Version/s: 2.0.0-alpha-2
   3.0.0
   2.0.0

> The pool chunks from ChunkCreator are deallocated while in pool because there 
> is no reference to them
> -
>
> Key: HBASE-18375
> URL: https://issues.apache.org/jira/browse/HBASE-18375
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-alpha-1
>Reporter: Anastasia Braginsky
>Priority: Critical
> Fix For: 2.0.0, 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18375-V01.patch
>
>
> Because the MSLAB list of chunks was changed to a list of chunk IDs, chunks 
> returned to the pool can be deallocated by the JVM because there is no 
> reference to them. The solution is to protect pool chunks from GC via the 
> strong map in ChunkCreator introduced by HBASE-18010. Will prepare the patch 
> today.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18375) The pool chunks from ChunkCreator are deallocated while in pool because there is no reference to them

2017-07-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086163#comment-16086163
 ] 

Anoop Sam John commented on HBASE-18375:


+1 to fix there as well.

> The pool chunks from ChunkCreator are deallocated while in pool because there 
> is no reference to them
> -
>
> Key: HBASE-18375
> URL: https://issues.apache.org/jira/browse/HBASE-18375
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Priority: Critical
> Attachments: HBASE-18375-V01.patch
>
>
> Because the MSLAB list of chunks was changed to a list of chunk IDs, chunks 
> returned to the pool can be deallocated by the JVM because there is no 
> reference to them. The solution is to protect pool chunks from GC via the 
> strong map in ChunkCreator introduced by HBASE-18010. Will prepare the patch 
> today.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18377) Error handling for FileNotFoundException should consider RemoteException in ReplicationSource#openReader()

2017-07-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086159#comment-16086159
 ] 

Mike Drob commented on HBASE-18377:
---

[~tedyu] - is this something that makes sense to add a test for?

> Error handling for FileNotFoundException should consider RemoteException in 
> ReplicationSource#openReader()
> --
>
> Key: HBASE-18377
> URL: https://issues.apache.org/jira/browse/HBASE-18377
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
> Attachments: 18377.branch-1.3.txt, 18377.v1.txt
>
>
> In region server log, I observed the following:
> {code}
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File 
> does not exist: 
> /apps/hbase/data/WALs/lx.p.com,16020,1497300923131/497300923131. 
> default.1497302873178
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1860)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1831)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1744)
> ...
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:326)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:162)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:782)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:293)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:267)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:255)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:414)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:69)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:605)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:364)
> {code}
> We have code in ReplicationSource#openReader() which is supposed to handle 
> FileNotFoundException, but the case where a RemoteException wraps a 
> FileNotFoundException was missed.
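
A minimal sketch of the kind of unwrapping the fix calls for is below. This is 
an editor's illustration only, not the attached patch; the surrounding 
openReader() logic is elided.

{code}
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.ipc.RemoteException;

final class FnfeUnwrapSketch {
  // Returns true if ioe is a FileNotFoundException, either directly or
  // wrapped inside a RemoteException coming back from the NameNode.
  static boolean isFileNotFound(IOException ioe) {
    if (ioe instanceof FileNotFoundException) {
      return true;
    }
    if (ioe instanceof RemoteException) {
      IOException unwrapped = ((RemoteException) ioe)
          .unwrapRemoteException(FileNotFoundException.class);
      return unwrapped instanceof FileNotFoundException;
    }
    return false;
  }
}
{code}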



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-17919) HBase 2.x over hadoop 3.x umbrella

2017-07-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086143#comment-16086143
 ] 

Mike Drob edited comment on HBASE-17919 at 7/13/17 6:18 PM:


Possibly relevant for a release note, or a casual warning to future test 
debuggers.

MiniDFSCluster changed some behavior going into hadoop3: on shutdown it now 
clears all of its own shutdown hooks. If a mini DFS cluster is shut down in the 
same JVM before any RegionServers/HMasters have started, then the RS will be 
unable to suppress the shutdown hooks and will fail during startup (because 
those hooks no longer exist). This was discovered while investigating 
HBASE-17922.
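
The failure order, sketched for illustration (an editor's reconstruction of 
the sequence described above, not code from any patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class ShutdownHookOrderSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    MiniDFSCluster dfs = new MiniDFSCluster.Builder(new Configuration()).build();
    dfs.shutdown(); // hadoop3: also clears the cluster's registered shutdown hooks

    // The RegionServer startup tries to suppress those hooks, which no longer
    // exist, so on hadoop3 this fails before the HBASE-17922 fix.
    util.startMiniHBaseCluster(1, 1);
  }
}
{code}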


was (Author: mdrob):
MiniDFSCluster changed some behavior going into hadoop3: on shutdown it now 
clears all of its own shutdown hooks. If a mini DFS cluster is shut down in the 
same JVM before any RegionServers/HMasters have started, then the RS will be 
unable to suppress the shutdown hooks and will fail during startup (because 
those hooks no longer exist). This was discovered while investigating 
HBASE-17922.

> HBase 2.x over hadoop 3.x  umbrella
> ---
>
> Key: HBASE-17919
> URL: https://issues.apache.org/jira/browse/HBASE-17919
> Project: HBase
>  Issue Type: Umbrella
>  Components: hadoop3
>Affects Versions: 2.0.0
>Reporter: Jonathan Hsieh
>
> We should try to get the hbase 2.x branch working against the recently 
> released hadoop 3.0.0 alphas.  These days 3.0.0-alpha2 is the latest.
> HBASE-16733 and HBASE-17593 got the compile level checks in but we should 
> progress to getting unit tests to pass and a build against hadoop3 up.
> This umbrella issue will capture issues around this project.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17919) HBase 2.x over hadoop 3.x umbrella

2017-07-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086143#comment-16086143
 ] 

Mike Drob commented on HBASE-17919:
---

MiniDFSCluster changed some behavior going into hadoop3: on shutdown it now 
clears all of its own shutdown hooks. If a mini DFS cluster is shut down in the 
same JVM before any RegionServers/HMasters have started, then the RS will be 
unable to suppress the shutdown hooks and will fail during startup (because 
those hooks no longer exist). This was discovered while investigating 
HBASE-17922.

> HBase 2.x over hadoop 3.x  umbrella
> ---
>
> Key: HBASE-17919
> URL: https://issues.apache.org/jira/browse/HBASE-17919
> Project: HBase
>  Issue Type: Umbrella
>  Components: hadoop3
>Affects Versions: 2.0.0
>Reporter: Jonathan Hsieh
>
> We should try to get the hbase 2.x branch working against the recently 
> released hadoop 3.0.0 alphas.  These days 3.0.0-alpha2 is the latest.
> HBASE-16733 and HBASE-17593 got the compile level checks in but we should 
> progress to getting unit tests to pass and a build against hadoop3 up.
> This umbrella issue will capture issues around this project.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17922) TestRegionServerHostname always fails against hadoop 3.0.0-alpha2

2017-07-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086138#comment-16086138
 ] 

Mike Drob commented on HBASE-17922:
---

Yep, I've verified it manually. Check my first comment for the specific 
commands on how to set the hadoop version.

> TestRegionServerHostname always fails against hadoop 3.0.0-alpha2
> -
>
> Key: HBASE-17922
> URL: https://issues.apache.org/jira/browse/HBASE-17922
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop3
>Affects Versions: 2.0.0
>Reporter: Jonathan Hsieh
>Assignee: Mike Drob
> Fix For: 2.0.0-alpha-2
>
> Attachments: HBASE-17922.patch
>
>
> {code}
> Running org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 126.363 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> testRegionServerHostname(org.apache.hadoop.hbase.regionserver.TestRegionServerHostname)
>   Time elapsed: 120.029 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 12 
> milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:221)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:405)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:225)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.(MiniHBaseCluster.java:94)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1123)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1077)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:948)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:942)
>   at 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname.testRegionServerHostname(TestRegionServerHostname.java:88)
> Results :
> Tests in error: 
>   TestRegionServerHostname.testRegionServerHostname:88 » TestTimedOut test 
> timed...
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18377) Error handling for FileNotFoundException should consider RemoteException in ReplicationSource#openReader()

2017-07-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086123#comment-16086123
 ] 

Hadoop QA commented on HBASE-18377:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.4.0/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
53s{color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} branch-1.3 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} branch-1.3 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} branch-1.3 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
51s{color} | {color:red} hbase-server in branch-1.3 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} branch-1.3 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} branch-1.3 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
15m 53s{color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 81m  
1s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}116m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:9ba21e3 |
| JIRA Issue | HBASE-18377 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877105/18377.branch-1.3.txt |
| Optional Tests |  

[jira] [Commented] (HBASE-17922) TestRegionServerHostname always fails against hadoop 3.0.0-alpha2

2017-07-13 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086113#comment-16086113
 ] 

Appy commented on HBASE-17922:
--

I was just about to commit when I remembered to check the default hadoop 
version in our pom. We are actually building against hadoop 2, which means that 
the QA build above didn't verify your patch against hadoop3-alpha4. If you have 
verified it manually [~mdrob], let me know, and I'll commit it.

> TestRegionServerHostname always fails against hadoop 3.0.0-alpha2
> -
>
> Key: HBASE-17922
> URL: https://issues.apache.org/jira/browse/HBASE-17922
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop3
>Affects Versions: 2.0.0
>Reporter: Jonathan Hsieh
>Assignee: Mike Drob
> Fix For: 2.0.0-alpha-2
>
> Attachments: HBASE-17922.patch
>
>
> {code}
> Running org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 126.363 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> testRegionServerHostname(org.apache.hadoop.hbase.regionserver.TestRegionServerHostname)
>   Time elapsed: 120.029 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 12 
> milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:221)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:405)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:225)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.(MiniHBaseCluster.java:94)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1123)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1077)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:948)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:942)
>   at 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname.testRegionServerHostname(TestRegionServerHostname.java:88)
> Results :
> Tests in error: 
>   TestRegionServerHostname.testRegionServerHostname:88 » TestTimedOut test 
> timed...
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17922) TestRegionServerHostname always fails against hadoop 3.0.0-alpha2

2017-07-13 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086103#comment-16086103
 ] 

Appy commented on HBASE-17922:
--

Very thorough and interesting find [~mdrob]. Thanks for digging in.
Let me commit this patch since limiting it to HRegionServer makes sense based 
on what's been tested here.
---
So the issue you found above can surface in any test class that has multiple 
test functions and sets up a MiniDFSCluster in @Before? (since all test 
functions run in the same JVM)
Either way, I'd suggest posting a small note on the parent jira HBASE-17919 
about your discovery, so that if it resurfaces in another test someone else 
doesn't have to figure it out all over again.

> TestRegionServerHostname always fails against hadoop 3.0.0-alpha2
> -
>
> Key: HBASE-17922
> URL: https://issues.apache.org/jira/browse/HBASE-17922
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop3
>Affects Versions: 2.0.0
>Reporter: Jonathan Hsieh
>Assignee: Mike Drob
> Fix For: 2.0.0-alpha-2
>
> Attachments: HBASE-17922.patch
>
>
> {code}
> Running org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 126.363 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> testRegionServerHostname(org.apache.hadoop.hbase.regionserver.TestRegionServerHostname)
>   Time elapsed: 120.029 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 12 
> milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:221)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:405)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:225)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.(MiniHBaseCluster.java:94)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1123)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1077)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:948)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:942)
>   at 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname.testRegionServerHostname(TestRegionServerHostname.java:88)
> Results :
> Tests in error: 
>   TestRegionServerHostname.testRegionServerHostname:88 » TestTimedOut test 
> timed...
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18368) Filters with OR do not work

2017-07-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086067#comment-16086067
 ] 

Ted Yu commented on HBASE-18368:


I tried TestFilterWithScanLimits#testFiltersWithOr on the master branch; it 
passed without any change to the Filter code.

Please use Peter's test case in future patches.

> Filters with OR do not work
> ---
>
> Key: HBASE-18368
> URL: https://issues.apache.org/jira/browse/HBASE-18368
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Allan Yang
>Priority: Critical
> Attachments: HBASE-18368.branch-1.patch, HBASE-18368.branch-1.v2.patch
>
>
> Scan gives back incomplete list if multiple filters are combined with OR / 
> MUST_PASS_ONE.
> Using 2 FamilyFilters in a FilterList using MUST_PASS_ONE operator will give 
> back results for only the first Filter.
> {code:java|title=Test code}
>   @Test
>   public void testFiltersWithOr() throws Exception {
> TableName tn = TableName.valueOf("MyTest");
> Table table = utility.createTable(tn, new String[] {"cf1", "cf2"});
> byte[] CF1 = Bytes.toBytes("cf1");
> byte[] CF2 = Bytes.toBytes("cf2");
> Put put1 = new Put(Bytes.toBytes("0"));
> put1.addColumn(CF1, Bytes.toBytes("col_a"), Bytes.toBytes(0));
> table.put(put1);
> Put put2 = new Put(Bytes.toBytes("0"));
> put2.addColumn(CF2, Bytes.toBytes("col_b"), Bytes.toBytes(0));
> table.put(put2);
> FamilyFilter filterCF1 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF1));
> FamilyFilter filterCF2 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF2));
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
> filterList.addFilter(filterCF1);
> filterList.addFilter(filterCF2);
> Scan scan = new Scan();
> scan.setFilter(filterList);
> ResultScanner scanner = table.getScanner(scan);
> System.out.println(filterList);
> for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>   System.out.println(rr);
> }
>   }
> {code}
> {noformat:title=Output}
> FilterList OR (2/2): [FamilyFilter (EQUAL, cf1), FamilyFilter (EQUAL, cf2)]
> keyvalues={0/cf1:col_a/1499852754957/Put/vlen=4/seqid=0}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17922) TestRegionServerHostname always fails against hadoop 3.0.0-alpha2

2017-07-13 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086047#comment-16086047
 ] 

Appy commented on HBASE-17922:
--

Looking at it

> TestRegionServerHostname always fails against hadoop 3.0.0-alpha2
> -
>
> Key: HBASE-17922
> URL: https://issues.apache.org/jira/browse/HBASE-17922
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop3
>Affects Versions: 2.0.0
>Reporter: Jonathan Hsieh
>Assignee: Mike Drob
> Fix For: 2.0.0-alpha-2
>
> Attachments: HBASE-17922.patch
>
>
> {code}
> Running org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 126.363 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> testRegionServerHostname(org.apache.hadoop.hbase.regionserver.TestRegionServerHostname)
>   Time elapsed: 120.029 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 12 
> milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:221)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:405)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:225)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.(MiniHBaseCluster.java:94)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1123)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1077)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:948)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:942)
>   at 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname.testRegionServerHostname(TestRegionServerHostname.java:88)
> Results :
> Tests in error: 
>   TestRegionServerHostname.testRegionServerHostname:88 » TestTimedOut test 
> timed...
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-18368) Filters with OR do not work

2017-07-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085812#comment-16085812
 ] 

Ted Yu edited comment on HBASE-18368 at 7/13/17 5:20 PM:
-

{code}
  public static boolean matchingRowColumn(final Cell left, final Cell right) {
    if ((left.getRowLength() + left.getFamilyLength() + left.getQualifierLength())
        != (right.getRowLength() + right.getFamilyLength() + right.getQualifierLength())) {
      return false;
{code}
Family check is in the matchingColumn() method.


was (Author: yuzhih...@gmail.com):
{code}
  public static boolean matchingRowColumn(final Cell left, final Cell right) {
    if ((left.getRowLength() + left.getFamilyLength() + left.getQualifierLength())
        != (right.getRowLength() + right.getFamilyLength() + right.getQualifierLength())) {
      return false;
{code}
Looks like getFamilyLength() should be taken out of the above check since 
family is not compared in the method.

> Filters with OR do not work
> ---
>
> Key: HBASE-18368
> URL: https://issues.apache.org/jira/browse/HBASE-18368
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Allan Yang
>Priority: Critical
> Attachments: HBASE-18368.branch-1.patch, HBASE-18368.branch-1.v2.patch
>
>
> Scan gives back incomplete list if multiple filters are combined with OR / 
> MUST_PASS_ONE.
> Using 2 FamilyFilters in a FilterList using MUST_PASS_ONE operator will give 
> back results for only the first Filter.
> {code:java|title=Test code}
>   @Test
>   public void testFiltersWithOr() throws Exception {
> TableName tn = TableName.valueOf("MyTest");
> Table table = utility.createTable(tn, new String[] {"cf1", "cf2"});
> byte[] CF1 = Bytes.toBytes("cf1");
> byte[] CF2 = Bytes.toBytes("cf2");
> Put put1 = new Put(Bytes.toBytes("0"));
> put1.addColumn(CF1, Bytes.toBytes("col_a"), Bytes.toBytes(0));
> table.put(put1);
> Put put2 = new Put(Bytes.toBytes("0"));
> put2.addColumn(CF2, Bytes.toBytes("col_b"), Bytes.toBytes(0));
> table.put(put2);
> FamilyFilter filterCF1 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF1));
> FamilyFilter filterCF2 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF2));
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
> filterList.addFilter(filterCF1);
> filterList.addFilter(filterCF2);
> Scan scan = new Scan();
> scan.setFilter(filterList);
> ResultScanner scanner = table.getScanner(scan);
> System.out.println(filterList);
> for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>   System.out.println(rr);
> }
>   }
> {code}
> {noformat:title=Output}
> FilterList OR (2/2): [FamilyFilter (EQUAL, cf1), FamilyFilter (EQUAL, cf2)]
> keyvalues={0/cf1:col_a/1499852754957/Put/vlen=4/seqid=0}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18368) Filters with OR do not work

2017-07-13 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086025#comment-16086025
 ] 

Anoop Sam John commented on HBASE-18368:


{code}
  public static boolean matchingRowColumn(final Cell left, final Cell right) {
    if ((left.getRowLength() + left.getFamilyLength() + left.getQualifierLength())
        != (right.getRowLength() + right.getFamilyLength() + right.getQualifierLength())) {
      return false;
    }

    if (!matchingRows(left, right)) {
      return false;
    }
    return matchingColumn(left, right);
  }

  public static boolean matchingColumn(final Cell left, final Cell right) {
    if (!matchingFamily(left, right))
      return false;
    return matchingQualifier(left, right);
  }
{code}

This is what we have in the 2.0 code base. What is missing?

> Filters with OR do not work
> ---
>
> Key: HBASE-18368
> URL: https://issues.apache.org/jira/browse/HBASE-18368
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Allan Yang
>Priority: Critical
> Attachments: HBASE-18368.branch-1.patch, HBASE-18368.branch-1.v2.patch
>
>
> Scan gives back incomplete list if multiple filters are combined with OR / 
> MUST_PASS_ONE.
> Using 2 FamilyFilters in a FilterList using MUST_PASS_ONE operator will give 
> back results for only the first Filter.
> {code:java|title=Test code}
>   @Test
>   public void testFiltersWithOr() throws Exception {
> TableName tn = TableName.valueOf("MyTest");
> Table table = utility.createTable(tn, new String[] {"cf1", "cf2"});
> byte[] CF1 = Bytes.toBytes("cf1");
> byte[] CF2 = Bytes.toBytes("cf2");
> Put put1 = new Put(Bytes.toBytes("0"));
> put1.addColumn(CF1, Bytes.toBytes("col_a"), Bytes.toBytes(0));
> table.put(put1);
> Put put2 = new Put(Bytes.toBytes("0"));
> put2.addColumn(CF2, Bytes.toBytes("col_b"), Bytes.toBytes(0));
> table.put(put2);
> FamilyFilter filterCF1 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF1));
> FamilyFilter filterCF2 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF2));
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
> filterList.addFilter(filterCF1);
> filterList.addFilter(filterCF2);
> Scan scan = new Scan();
> scan.setFilter(filterList);
> ResultScanner scanner = table.getScanner(scan);
> System.out.println(filterList);
> for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>   System.out.println(rr);
> }
>   }
> {code}
> {noformat:title=Output}
> FilterList OR (2/2): [FamilyFilter (EQUAL, cf1), FamilyFilter (EQUAL, cf2)]
> keyvalues={0/cf1:col_a/1499852754957/Put/vlen=4/seqid=0}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17678) FilterList with MUST_PASS_ONE may lead to redundant cells returned

2017-07-13 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085999#comment-16085999
 ] 

Chia-Ping Tsai commented on HBASE-17678:


bq. Right now the next branch-1.2 release is only blocked on proper nightly 
runs.
This issue doesn't block the release?

> FilterList with MUST_PASS_ONE may lead to redundant cells returned
> --
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.0.0, 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: HBASE-17678.addendum.patch, HBASE-17678.addendum.patch, 
> HBASE-17678.branch-1.1.v1.patch, HBASE-17678.branch-1.1.v2.patch, 
> HBASE-17678.branch-1.1.v2.patch, HBASE-17678.branch-1.1.v2.patch, 
> HBASE-17678.branch-1.v1.patch, HBASE-17678.branch-1.v1.patch, 
> HBASE-17678.branch-1.v2.patch, HBASE-17678.branch-1.v2.patch, 
> HBASE-17678.v1.patch, HBASE-17678.v1.rough.patch, HBASE-17678.v2.patch, 
> HBASE-17678.v3.patch, HBASE-17678.v4.patch, HBASE-17678.v4.patch, 
> HBASE-17678.v5.patch, HBASE-17678.v6.patch, HBASE-17678.v7.patch, 
> HBASE-17678.v7.patch, TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element filterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected since there is 
> only a single filter in the list, and I would believe that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the joined filter and not 
> the behavior of any one of the individual filters. If this is not a bug then 
> it would be nice if the documentation is updated to explain this nuanced 
> behavior.
> I know that there was a decision made in an earlier Hbase version to keep 
> multiple cells with the same timestamp. This is generally fine but presents 
> an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 'ns:tbl','row','family:name','Jane',1
> {code}
> Then, use a Scala client as:
> {code:none}
> import org.apache.hadoop.hbase.filter._
> import org.apache.hadoop.hbase.util.Bytes
> import org.apache.hadoop.hbase.client._
> import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
> import scala.collection.mutable._
> val config = HBaseConfiguration.create()
> config.set("hbase.zookeeper.quorum", "localhost")
> config.set("hbase.zookeeper.property.clientPort", "2181")
> val connection = ConnectionFactory.createConnection(config)
> val logicalOp = FilterList.Operator.MUST_PASS_ONE
> val limit = 1
> var resultsList = ListBuffer[String]()
> for (offset <- 0 to 20 by limit) {
>   val table = connection.getTable(TableName.valueOf("ns:tbl"))
>   val paginationFilter = new ColumnPaginationFilter(limit,offset)
>   val filterList: FilterList = new FilterList(logicalOp,paginationFilter)
>   println("@ filterList = "+filterList)
>   val results = table.get(new 
> Get(Bytes.toBytes("row")).setFilter(filterList))
>   val cells = results.rawCells()
>   if (cells != null) {
>   for (cell <- cells) {
> val value = new String(CellUtil.cloneValue(cell))
> val qualifier = new String(CellUtil.cloneQualifier(cell))
> val family = new String(CellUtil.cloneFamily(cell))
> val result = "OFFSET = "+offset+":"+family + "," + qualifier 
> + "," + value + "," + cell.getTimestamp()
> resultsList.append(result)
>   }
>   }
> }
> resultsList.foreach(println)
> {code}
> Here are the results for different limit and logicalOp settings:
> {code:none}
> Limit = 1 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 1 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 1:family,name,Gil,1
> OFFSET = 2:family,name,Jane,1
> OFFSET = 3:family,name,John,1
> Limit = 2 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 2 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 2:family,name,Jane,1
> {code}
> So, it seems that MUST_PASS_ALL gives the expected behavior, but 

[jira] [Commented] (HBASE-18365) Eliminate the findbugs warnings for hbase-common

2017-07-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085989#comment-16085989
 ] 

Hudson commented on HBASE-18365:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #3368 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3368/])
HBASE-18365 Eliminate the findbugs warnings for hbase-common (chia7712: rev 
cf636e50b9d2afbf0d017f2463b510ec10653a1a)
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/OrderedBytes.java


> Eliminate the findbugs warnings for hbase-common
> 
>
> Key: HBASE-18365
> URL: https://issues.apache.org/jira/browse/HBASE-18365
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.2.7, 2.0.0-alpha-2
>
> Attachments: HBASE-18365.branch-1.2.v0.patch, 
> HBASE-18365.branch-1.3.v0.patch, HBASE-18365.branch-1.v0.patch, 
> HBASE-18465.v0.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18344) Introduce Append.addColumn as a replacement for Append.add

2017-07-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085990#comment-16085990
 ] 

Hudson commented on HBASE-18344:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #3368 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3368/])
HBASE-18344 Introduce Append.addColumn as a replacement for Append.add 
(chia7712: rev c0725ddff11992931fa6e2e5c454177df60da585)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabels.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestResultFromCoprocessor.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestSpaceQuotas.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedUpdater.java
* (edit) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftUtilities.java
* (edit) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftUtilities.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestAtomicOperation.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerMetrics.java
* (edit) 
hbase-endpoint/src/test/java/org/apache/hadoop/hbase/client/TestRpcControllerFactory.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestPutDeleteEtcCellIteration.java
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/Append.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerReadRequestMetrics.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestHTableWrapper.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncTable.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTags.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverInterface.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncTableBatch.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMetaCache.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityWithCheckAuths.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncTableNoncedRetry.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


> Introduce Append.addColumn as a replacement for Append.add
> --
>
> Key: HBASE-18344
> URL: https://issues.apache.org/jira/browse/HBASE-18344
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Jan Hentschel
>Priority: Trivial
>  Labels: beginner
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18344.master.001.patch, 
> HBASE-18344.master.002.patch
>
>
> We have Put#addColumn and Increment#addColumn but there is no 
> Append#addColumn. We should add Append#addColumn for consistency.
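
For illustration, a minimal sketch of what such an addColumn call would look 
like in client code, mirroring Put#addColumn(family, qualifier, value); the 
exact signature here is an assumption for the sketch, not a quote from the 
committed patch:
{code}
import java.io.IOException;
import org.apache.hadoop.hbase.client.Append;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class AppendAddColumnSketch {
  // Appends "-suffix" to the current value of cf:q on row1.
  static void appendSuffix(Table table) throws IOException {
    Append append = new Append(Bytes.toBytes("row1"));
    append.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("-suffix"));
    table.append(append);
  }
}
{code}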



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18377) Error handling for FileNotFoundException should consider RemoteException in ReplicationSource#openReader()

2017-07-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18377:
---
Attachment: 18377.v1.txt

> Error handling for FileNotFoundException should consider RemoteException in 
> ReplicationSource#openReader()
> --
>
> Key: HBASE-18377
> URL: https://issues.apache.org/jira/browse/HBASE-18377
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
> Attachments: 18377.branch-1.3.txt, 18377.v1.txt
>
>
> In region server log, I observed the following:
> {code}
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File 
> does not exist: 
> /apps/hbase/data/WALs/lx.p.com,16020,1497300923131/497300923131. 
> default.1497302873178
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1860)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1831)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1744)
> ...
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:326)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:162)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:782)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:293)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:267)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:255)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:414)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:69)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:605)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:364)
> {code}
> We have code in ReplicationSource#openReader() which is supposed to handle 
> FileNotFoundException but RemoteException wrapping FileNotFoundException was 
> missed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18175) Add hbase-spark integration test into hbase-spark-it

2017-07-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085966#comment-16085966
 ] 

Mike Drob commented on HBASE-18175:
---

+1.

QA Test failure is unrelated and known to be flaky.

Please file a follow-on JIRA to combine the configuration options for the 
failsafe plugin between hbase-it and hbase-spark-it (possibly in the parent, not sure).

This is targeted for master and branch-2? We're figuring out the plan for 
branch-1 later, right?

> Add hbase-spark integration test into hbase-spark-it
> 
>
> Key: HBASE-18175
> URL: https://issues.apache.org/jira/browse/HBASE-18175
> Project: HBase
>  Issue Type: Test
>  Components: spark
>Reporter: Yi Liang
>Assignee: Yi Liang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: hbase-18175-master-v2.patch, 
> hbase-18175-master-v3.patch, hbase-18175-master-v4.patch, 
> hbase-18175-master-v5.patch, hbase-18175-master-v6.patch, 
> hbase-18175-master-v7.patch, hbase-18175-master-v8.patch, 
> hbase-18175-master-v9.patch, hbase-18175-v1.patch
>
>
> After HBASE-17574, all tests under hbase-spark are regarded as unit tests, and 
> this JIRA adds an hbase-spark integration test into hbase-it. The patch 
> runs the same tests as mapreduce.IntegrationTestBulkLoad, just changing 
> mapreduce to spark.
> test in Maven:
> mvn verify -Dit.test=IntegrationTestSparkBulkLoad
> test on cluster:
> spark-submit --class 
> org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad 
> HBASE_HOME/lib/hbase-it-2.0.0-SNAPSHOT-tests.jar 
> -Dhbase.spark.bulkload.chainlength=50 -m slowDeterministic



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17678) FilterList with MUST_PASS_ONE may lead to redundant cells returned

2017-07-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085942#comment-16085942
 ] 

Sean Busbey commented on HBASE-17678:
-

If we need HBASE-18368 for correctness, then I'd rather wait. Right now the 
next branch-1.2 release is only blocked on proper nightly runs.

> FilterList with MUST_PASS_ONE may lead to redundant cells returned
> --
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.0.0, 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: HBASE-17678.addendum.patch, HBASE-17678.addendum.patch, 
> HBASE-17678.branch-1.1.v1.patch, HBASE-17678.branch-1.1.v2.patch, 
> HBASE-17678.branch-1.1.v2.patch, HBASE-17678.branch-1.1.v2.patch, 
> HBASE-17678.branch-1.v1.patch, HBASE-17678.branch-1.v1.patch, 
> HBASE-17678.branch-1.v2.patch, HBASE-17678.branch-1.v2.patch, 
> HBASE-17678.v1.patch, HBASE-17678.v1.rough.patch, HBASE-17678.v2.patch, 
> HBASE-17678.v3.patch, HBASE-17678.v4.patch, HBASE-17678.v4.patch, 
> HBASE-17678.v5.patch, HBASE-17678.v6.patch, HBASE-17678.v7.patch, 
> HBASE-17678.v7.patch, TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element filterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected since there is 
> only a single filter in the list, and I would believe that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the joined filter and not 
> the behavior of any one of the individual filters. If this is not a bug, then 
> it would be nice if the documentation were updated to explain this nuanced 
> behavior.
> I know that there was a decision made in an earlier HBase version to keep 
> multiple cells with the same timestamp. This is generally fine but presents 
> an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 'ns:tbl','row','family:name','Jane',1
> {code}
> Then, use a Scala client as:
> {code:none}
> import org.apache.hadoop.hbase.filter._
> import org.apache.hadoop.hbase.util.Bytes
> import org.apache.hadoop.hbase.client._
> import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
> import scala.collection.mutable._
> val config = HBaseConfiguration.create()
> config.set("hbase.zookeeper.quorum", "localhost")
> config.set("hbase.zookeeper.property.clientPort", "2181")
> val connection = ConnectionFactory.createConnection(config)
> val logicalOp = FilterList.Operator.MUST_PASS_ONE
> val limit = 1
> var resultsList = ListBuffer[String]()
> for (offset <- 0 to 20 by limit) {
>   val table = connection.getTable(TableName.valueOf("ns:tbl"))
>   val paginationFilter = new ColumnPaginationFilter(limit,offset)
>   val filterList: FilterList = new FilterList(logicalOp,paginationFilter)
>   println("@ filterList = "+filterList)
> val results = table.get(new Get(Bytes.toBytes("row")).setFilter(filterList))
>   val cells = results.rawCells()
>   if (cells != null) {
>   for (cell <- cells) {
> val value = new String(CellUtil.cloneValue(cell))
> val qualifier = new String(CellUtil.cloneQualifier(cell))
> val family = new String(CellUtil.cloneFamily(cell))
> val result = "OFFSET = "+offset+":"+family + "," + qualifier + "," + value + "," + cell.getTimestamp()
> resultsList.append(result)
>   }
>   }
> }
> resultsList.foreach(println)
> {code}
> Here are the results for different limit and logicalOp settings:
> {code:none}
> Limit = 1 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 1 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 1:family,name,Gil,1
> OFFSET = 2:family,name,Jane,1
> OFFSET = 3:family,name,John,1
> Limit = 2 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 2 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 2:family,name,Jane,1
> {code}
> So, it seems that MUST_PASS_ALL gives the expected 

[jira] [Updated] (HBASE-18202) Trim down supplemental models file for unnecessary entries

2017-07-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-18202:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

> Trim down supplemental models file for unnecessary entries
> --
>
> Key: HBASE-18202
> URL: https://issues.apache.org/jira/browse/HBASE-18202
> Project: HBase
>  Issue Type: Task
>  Components: dependencies
>Reporter: Mike Drob
>Assignee: Mike Drob
> Fix For: 3.0.0
>
> Attachments: HBASE-18202.patch, HBASE-18202.v2.patch
>
>
> With the more permissive "Apache License" check in HBASE-18033, we can remove 
> many entries from the supplemental-models.xml file. This issue is to track 
> that work separately.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18344) Introduce Append.addColumn as a replacement for Append.add

2017-07-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085935#comment-16085935
 ] 

Hudson commented on HBASE-18344:


FAILURE: Integrated in Jenkins build HBase-2.0 #166 (See 
[https://builds.apache.org/job/HBase-2.0/166/])
HBASE-18344 Introduce Append.addColumn as a replacement for Append.add 
(chia7712: rev 97b649eb93a57f021357b3a3b66500e2e91b3338)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMetaCache.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestResultFromCoprocessor.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncTableBatch.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java
* (edit) 
hbase-endpoint/src/test/java/org/apache/hadoop/hbase/client/TestRpcControllerFactory.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestAtomicOperation.java
* (edit) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftUtilities.java
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/Append.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerReadRequestMetrics.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityWithCheckAuths.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncTable.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestHTableWrapper.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabels.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestSpaceQuotas.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerMetrics.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncTableNoncedRetry.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestPutDeleteEtcCellIteration.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTags.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedUpdater.java
* (edit) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftUtilities.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverInterface.java


> Introduce Append.addColumn as a replacement for Append.add
> --
>
> Key: HBASE-18344
> URL: https://issues.apache.org/jira/browse/HBASE-18344
> Project: HBase
>  Issue Type: Task
>Reporter: Chia-Ping Tsai
>Assignee: Jan Hentschel
>Priority: Trivial
>  Labels: beginner
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18344.master.001.patch, 
> HBASE-18344.master.002.patch
>
>
> We have Put#addColumn and Increment#addColumn but there is no 
> Append#addColumn. We should add Append#addColumn for consistency.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18365) Eliminate the findbugs warnings for hbase-common

2017-07-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085934#comment-16085934
 ] 

Hudson commented on HBASE-18365:


FAILURE: Integrated in Jenkins build HBase-2.0 #166 (See 
[https://builds.apache.org/job/HBase-2.0/166/])
HBASE-18365 Eliminate the findbugs warnings for hbase-common (chia7712: rev 
9daac09f6e606f7a26f68189bf90c47f28578a0e)
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/OrderedBytes.java


> Eliminate the findbugs warnings for hbase-common
> 
>
> Key: HBASE-18365
> URL: https://issues.apache.org/jira/browse/HBASE-18365
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.2.7, 2.0.0-alpha-2
>
> Attachments: HBASE-18365.branch-1.2.v0.patch, 
> HBASE-18365.branch-1.3.v0.patch, HBASE-18365.branch-1.v0.patch, 
> HBASE-18465.v0.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18303) Clean up some parameterized test declarations

2017-07-13 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18303:
--
Status: Patch Available  (was: Open)

> Clean up some parameterized test declarations
> -
>
> Key: HBASE-18303
> URL: https://issues.apache.org/jira/browse/HBASE-18303
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-18303.patch, HBASE-18303.patch
>
>
> While debugging something unrelated, I noticed that we use the constructor 
> form of JUnit parameterized tests instead of the annotated-members form.
> I personally find using the @Parameter annotation clearer.
> Also, we can move the parameter generator to hbase-common so that it is 
> accessible in more modules.
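
For reference, a self-contained example of the annotated-members form the 
description refers to (JUnit 4's @Parameter fields instead of constructor 
injection); the test class and values are made up for illustration:
{code}
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameter;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class SquareTest {
  @Parameters(name = "{0}^2 = {1}")
  public static Collection<Object[]> data() {
    return Arrays.asList(new Object[][] { { 2, 4 }, { 3, 9 }, { 4, 16 } });
  }

  // Injected by the Parameterized runner; no constructor boilerplate needed.
  @Parameter(0) public int input;
  @Parameter(1) public int expected;

  @Test
  public void square() {
    assertEquals(expected, input * input);
  }
}
{code}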



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18303) Clean up some parameterized test declarations

2017-07-13 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18303:
--
Status: Open  (was: Patch Available)

> Clean up some parameterized test declarations
> -
>
> Key: HBASE-18303
> URL: https://issues.apache.org/jira/browse/HBASE-18303
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-18303.patch, HBASE-18303.patch
>
>
> While debugging something unrelated, I noticed that we use the constructor 
> form of JUnit parameterized tests instead of the annotated-members form.
> I personally find using the @Parameter annotation clearer.
> Also, we can move the parameter generator to hbase-common so that it is 
> accessible in more modules.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18303) Clean up some parameterized test declarations

2017-07-13 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18303:
--
Attachment: HBASE-18303.patch

Attaching the same patch to trigger QA again.

> Clean up some parameterized test declarations
> -
>
> Key: HBASE-18303
> URL: https://issues.apache.org/jira/browse/HBASE-18303
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-18303.patch, HBASE-18303.patch
>
>
> While debugging something unrelated, I noticed that we use the constructor 
> form of JUnit parameterized tests instead of the annotated-members form.
> I personally find using the @Parameter annotation clearer.
> Also, we can move the parameter generator to hbase-common so that it is 
> accessible in more modules.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18377) Error handling for FileNotFoundException should consider RemoteException in ReplicationSource#openReader()

2017-07-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085908#comment-16085908
 ] 

Ted Yu commented on HBASE-18377:


Patch for branch-1.3 refactors handling for FNFE. For FNFE wrapped in 
RemoteException, the same handling is applied.
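
As a rough sketch of the unwrap-then-handle pattern described above (not the 
attached patch; ReaderOpener and the recovery print are hypothetical stand-ins 
for WALFactory.createReader() and the archive-directory fallback):
{code}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.ipc.RemoteException;

public class FnfeUnwrapSketch {
  interface ReaderOpener { void open() throws IOException; }

  static void openWithRecovery(ReaderOpener opener) throws IOException {
    try {
      opener.open();
    } catch (IOException ioe) {
      IOException cause = ioe;
      if (ioe instanceof RemoteException) {
        // Returns the wrapped FileNotFoundException when that is what the
        // NameNode threw; otherwise returns the RemoteException itself.
        cause = ((RemoteException) ioe).unwrapRemoteException(FileNotFoundException.class);
      }
      if (cause instanceof FileNotFoundException) {
        // Same recovery path as a bare FileNotFoundException, e.g. retry
        // against the archived WAL location.
        System.out.println("WAL missing, trying archive: " + cause.getMessage());
      } else {
        throw ioe;
      }
    }
  }
}
{code}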

> Error handling for FileNotFoundException should consider RemoteException in 
> ReplicationSource#openReader()
> --
>
> Key: HBASE-18377
> URL: https://issues.apache.org/jira/browse/HBASE-18377
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
> Attachments: 18377.branch-1.3.txt
>
>
> In region server log, I observed the following:
> {code}
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File 
> does not exist: 
> /apps/hbase/data/WALs/lx.p.com,16020,1497300923131/497300923131. 
> default.1497302873178
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1860)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1831)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1744)
> ...
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:326)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:162)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:782)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:293)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:267)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:255)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:414)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:69)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:605)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:364)
> {code}
> We have code in ReplicationSource#openReader() which is supposed to handle 
> FileNotFoundException but RemoteException wrapping FileNotFoundException was 
> missed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18377) Error handling for FileNotFoundException should consider RemoteException in ReplicationSource#openReader()

2017-07-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18377:
---
Status: Patch Available  (was: Open)

> Error handling for FileNotFoundException should consider RemoteException in 
> ReplicationSource#openReader()
> --
>
> Key: HBASE-18377
> URL: https://issues.apache.org/jira/browse/HBASE-18377
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
> Attachments: 18377.branch-1.3.txt
>
>
> In region server log, I observed the following:
> {code}
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File 
> does not exist: 
> /apps/hbase/data/WALs/lx.p.com,16020,1497300923131/497300923131. 
> default.1497302873178
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1860)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1831)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1744)
> ...
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:326)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:162)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:782)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:293)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:267)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:255)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:414)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:69)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:605)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:364)
> {code}
> We have code in ReplicationSource#openReader() which is supposed to handle 
> FileNotFoundException but RemoteException wrapping FileNotFoundException was 
> missed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18377) Error handling for FileNotFoundException should consider RemoteException in ReplicationSource#openReader()

2017-07-13 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18377:
---
Attachment: 18377.branch-1.3.txt

> Error handling for FileNotFoundException should consider RemoteException in 
> ReplicationSource#openReader()
> --
>
> Key: HBASE-18377
> URL: https://issues.apache.org/jira/browse/HBASE-18377
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
> Attachments: 18377.branch-1.3.txt
>
>
> In region server log, I observed the following:
> {code}
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File 
> does not exist: 
> /apps/hbase/data/WALs/lx.p.com,16020,1497300923131/497300923131. 
> default.1497302873178
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1860)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1831)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1744)
> ...
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:326)
>   at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:162)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:782)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:293)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:267)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:255)
>   at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:414)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:69)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:605)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:364)
> {code}
> We have code in ReplicationSource#openReader() which is supposed to handle 
> FileNotFoundException but RemoteException wrapping FileNotFoundException was 
> missed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-18377) Error handling for FileNotFoundException should consider RemoteException in ReplicationSource#openReader()

2017-07-13 Thread Ted Yu (JIRA)
Ted Yu created HBASE-18377:
--

 Summary: Error handling for FileNotFoundException should consider 
RemoteException in ReplicationSource#openReader()
 Key: HBASE-18377
 URL: https://issues.apache.org/jira/browse/HBASE-18377
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu


In region server log, I observed the following:
{code}
org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File does 
not exist: /apps/hbase/data/WALs/lx.p.com,16020,1497300923131/497300923131. 
default.1497302873178
  at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
  at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1860)
  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1831)
  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1744)
...
  at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
  at 
org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:326)
  at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:162)
  at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:782)
  at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:293)
  at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:267)
  at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:255)
  at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:414)
  at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:69)
  at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:605)
  at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:364)
{code}
We have code in ReplicationSource#openReader() which is supposed to handle 
FileNotFoundException but RemoteException wrapping FileNotFoundException was 
missed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-11249) Missing null check in finally block of HRegion#processRowsWithLocks() may lead to partially rolled back state in memstore.

2017-07-13 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob resolved HBASE-11249.
---
Resolution: Not A Problem

This code was rewritten in HBASE-15158; it doesn't look like a problem anymore.

> Missing null check in finally block of HRegion#processRowsWithLocks() may 
> lead to partially rolled back state in memstore.
> --
>
> Key: HBASE-11249
> URL: https://issues.apache.org/jira/browse/HBASE-11249
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> At line 4883:
> {code}
> Store store = getStore(kv);
> if (store == null) {
>   checkFamily(CellUtil.cloneFamily(kv));
>   // unreachable
> }
> {code}
> An exception would be thrown from checkFamily() if store is null.
> In the finally block:
> {code}
>   } finally {
> if (!mutations.isEmpty() && !walSyncSuccessful) {
>   LOG.warn("Wal sync failed. Roll back " + mutations.size() +
>   " memstore keyvalues for row(s):" + StringUtils.byteToHexString(
>   processor.getRowsToLock().iterator().next()) + "...");
>   for (KeyValue kv : mutations) {
> getStore(kv).rollback(kv);
>   }
> {code}
> There is no corresponding null check for the return value of getStore() above, 
> potentially leaving the memstore in a partially rolled back state.
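
A toy sketch of the guarded rollback loop the report asks for; Store and 
getStore() here are simplified stand-ins for the pre-HBASE-15158 HRegion 
internals, not the real classes:
{code}
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GuardedRollbackSketch {
  static class Store {
    void rollback(String kv) { System.out.println("rolled back " + kv); }
  }

  static final Map<String, Store> STORES = new HashMap<>();

  // Maps the "family" prefix of a kv to its store; may return null.
  static Store getStore(String kv) {
    return STORES.get(kv.split(":")[0]);
  }

  public static void main(String[] args) {
    STORES.put("cf1", new Store());
    List<String> mutations = Arrays.asList("cf1:a", "cfX:b");
    for (String kv : mutations) {
      Store store = getStore(kv);
      if (store == null) {
        // The null check the report says is missing from the finally block.
        System.err.println("No store for " + kv + ", skipping rollback");
        continue;
      }
      store.rollback(kv);
    }
  }
}
{code}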



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18365) Eliminate the findbugs warnings for hbase-common

2017-07-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085866#comment-16085866
 ] 

Hudson commented on HBASE-18365:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #198 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/198/])
HBASE-18365 Eliminate the findbugs warnings for hbase-common (chia7712: rev 
b1e128fd76299976a1c73e8ada687b8cf48d49db)
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/OrderedBytes.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java


> Eliminate the findbugs warnings for hbase-common
> 
>
> Key: HBASE-18365
> URL: https://issues.apache.org/jira/browse/HBASE-18365
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.2.7, 2.0.0-alpha-2
>
> Attachments: HBASE-18365.branch-1.2.v0.patch, 
> HBASE-18365.branch-1.3.v0.patch, HBASE-18365.branch-1.v0.patch, 
> HBASE-18465.v0.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18339) Update test-patch to use hadoop 3.0.0-alpha4

2017-07-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085854#comment-16085854
 ] 

Sean Busbey commented on HBASE-18339:
-

Currently, the closest thing we have to release notes is manually curated in the 
release announcement email. There's been periodic work on moving to [Apache Yetus 
Releasedocmaker|http://yetus.apache.org/documentation/0.4.0/releasedocmaker/], 
but I don't think any RM has adopted it yet.

> Update test-patch to use hadoop 3.0.0-alpha4
> 
>
> Key: HBASE-18339
> URL: https://issues.apache.org/jira/browse/HBASE-18339
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Mike Drob
>Assignee: Mike Drob
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18339.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18339) Update test-patch to use hadoop 3.0.0-alpha4

2017-07-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085851#comment-16085851
 ] 

Mike Drob commented on HBASE-18339:
---

Thanks for adding a release note. Are the release note fields all automatically 
collected for the release, or are they manually curated? I ask because if we 
upgrade hadoop dependencies again it could be confusing to have conflicting 
statements.

> Update test-patch to use hadoop 3.0.0-alpha4
> 
>
> Key: HBASE-18339
> URL: https://issues.apache.org/jira/browse/HBASE-18339
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Mike Drob
>Assignee: Mike Drob
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18339.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18339) Update test-patch to use hadoop 3.0.0-alpha4

2017-07-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085844#comment-16085844
 ] 

Sean Busbey commented on HBASE-18339:
-

Those failures would have been with the hadoop 2 profile, so they're not relevant 
to this change. (I agree it looks bad.)

> Update test-patch to use hadoop 3.0.0-alpha4
> 
>
> Key: HBASE-18339
> URL: https://issues.apache.org/jira/browse/HBASE-18339
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Mike Drob
>Assignee: Mike Drob
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18339.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18339) Update test-patch to use hadoop 3.0.0-alpha4

2017-07-13 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-18339:

  Resolution: Fixed
Release Note: HBase now defaults to Apache Hadoop 3.0.0-alpha4 when the 
Hadoop 3 profile is active.
  Status: Resolved  (was: Patch Available)

pushed to branch-2 and master. Thanks!

> Update test-patch to use hadoop 3.0.0-alpha4
> 
>
> Key: HBASE-18339
> URL: https://issues.apache.org/jira/browse/HBASE-18339
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Mike Drob
>Assignee: Mike Drob
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18339.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18339) Update test-patch to use hadoop 3.0.0-alpha4

2017-07-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085827#comment-16085827
 ] 

Mike Drob commented on HBASE-18339:
---

{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.19.1:test 
(secondPartTestsExecution) on project hbase-server: ExecutionException The 
forked VM terminated without properly saying goodbye. VM crash or System.exit 
called?
{noformat}

This looks bad, but I don't know why it happened.

> Update test-patch to use hadoop 3.0.0-alpha4
> 
>
> Key: HBASE-18339
> URL: https://issues.apache.org/jira/browse/HBASE-18339
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Mike Drob
>Assignee: Mike Drob
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18339.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18339) Update test-patch to use hadoop 3.0.0-alpha4

2017-07-13 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-18339:
--
Fix Version/s: 2.0.0-alpha-2
   3.0.0

> Update test-patch to use hadoop 3.0.0-alpha4
> 
>
> Key: HBASE-18339
> URL: https://issues.apache.org/jira/browse/HBASE-18339
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Mike Drob
>Assignee: Mike Drob
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18339.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18365) Eliminate the findbugs warnings for hbase-common

2017-07-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085814#comment-16085814
 ] 

Hudson commented on HBASE-18365:


SUCCESS: Integrated in Jenkins build HBase-1.2-JDK7 #164 (See 
[https://builds.apache.org/job/HBase-1.2-JDK7/164/])
HBASE-18365 Eliminate the findbugs warnings for hbase-common (chia7712: rev 
4bd5f03d22be5f8ce09c1c67cfe5d1dcc603446a)
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/OrderedBytes.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java


> Eliminate the findbugs warnings for hbase-common
> 
>
> Key: HBASE-18365
> URL: https://issues.apache.org/jira/browse/HBASE-18365
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 3.0.0, 1.4.0, 1.3.2, 1.2.7, 2.0.0-alpha-2
>
> Attachments: HBASE-18365.branch-1.2.v0.patch, 
> HBASE-18365.branch-1.3.v0.patch, HBASE-18365.branch-1.v0.patch, 
> HBASE-18465.v0.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18368) Filters with OR do not work

2017-07-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085812#comment-16085812
 ] 

Ted Yu commented on HBASE-18368:


{code}
  public static boolean matchingRowColumn(final Cell left, final Cell right) {
    if ((left.getRowLength() + left.getFamilyLength() + left.getQualifierLength())
        != (right.getRowLength() + right.getFamilyLength() + right.getQualifierLength())) {
      return false;
{code}
Looks like getFamilyLength() should be taken out of the above check since 
family is not compared in the method.
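
If family bytes are indeed not compared in the method, the suggested change would 
shrink the length short-circuit to row and qualifier only, something like the 
following (illustrative only, not the committed fix):
{code}
// Length pre-check restricted to row + qualifier, per the suggestion above.
static boolean lengthsCouldMatch(final org.apache.hadoop.hbase.Cell left,
    final org.apache.hadoop.hbase.Cell right) {
  return (left.getRowLength() + left.getQualifierLength())
      == (right.getRowLength() + right.getQualifierLength());
}
{code}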

> Filters with OR do not work
> ---
>
> Key: HBASE-18368
> URL: https://issues.apache.org/jira/browse/HBASE-18368
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Peter Somogyi
>Assignee: Allan Yang
>Priority: Critical
> Attachments: HBASE-18368.branch-1.patch, HBASE-18368.branch-1.v2.patch
>
>
> Scan gives back incomplete list if multiple filters are combined with OR / 
> MUST_PASS_ONE.
> Using 2 FamilyFilters in a FilterList using MUST_PASS_ONE operator will give 
> back results for only the first Filter.
> {code:java|title=Test code}
>   @Test
>   public void testFiltersWithOr() throws Exception {
> TableName tn = TableName.valueOf("MyTest");
> Table table = utility.createTable(tn, new String[] {"cf1", "cf2"});
> byte[] CF1 = Bytes.toBytes("cf1");
> byte[] CF2 = Bytes.toBytes("cf2");
> Put put1 = new Put(Bytes.toBytes("0"));
> put1.addColumn(CF1, Bytes.toBytes("col_a"), Bytes.toBytes(0));
> table.put(put1);
> Put put2 = new Put(Bytes.toBytes("0"));
> put2.addColumn(CF2, Bytes.toBytes("col_b"), Bytes.toBytes(0));
> table.put(put2);
> FamilyFilter filterCF1 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF1));
> FamilyFilter filterCF2 = new FamilyFilter(CompareFilter.CompareOp.EQUAL, 
> new BinaryComparator(CF2));
> FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ONE);
> filterList.addFilter(filterCF1);
> filterList.addFilter(filterCF2);
> Scan scan = new Scan();
> scan.setFilter(filterList);
> ResultScanner scanner = table.getScanner(scan);
> System.out.println(filterList);
> for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
>   System.out.println(rr);
> }
>   }
> {code}
> {noformat:title=Output}
> FilterList OR (2/2): [FamilyFilter (EQUAL, cf1), FamilyFilter (EQUAL, cf2)]
> keyvalues={0/cf1:col_a/1499852754957/Put/vlen=4/seqid=0}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17922) TestRegionServerHostname always fails against hadoop 3.0.0-alpha2

2017-07-13 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085787#comment-16085787
 ] 

Mike Drob commented on HBASE-17922:
---

Ping [~appy] - do you have time to push this to master and branch-2?

> TestRegionServerHostname always fails against hadoop 3.0.0-alpha2
> -
>
> Key: HBASE-17922
> URL: https://issues.apache.org/jira/browse/HBASE-17922
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop3
>Affects Versions: 2.0.0
>Reporter: Jonathan Hsieh
>Assignee: Mike Drob
> Fix For: 2.0.0-alpha-2
>
> Attachments: HBASE-17922.patch
>
>
> {code}
> Running org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 126.363 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname
> testRegionServerHostname(org.apache.hadoop.hbase.regionserver.TestRegionServerHostname)
>   Time elapsed: 120.029 sec  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 120000 
> milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:221)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:405)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:225)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.(MiniHBaseCluster.java:94)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1123)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1077)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:948)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:942)
>   at 
> org.apache.hadoop.hbase.regionserver.TestRegionServerHostname.testRegionServerHostname(TestRegionServerHostname.java:88)
> Results :
> Tests in error: 
>   TestRegionServerHostname.testRegionServerHostname:88 » TestTimedOut test 
> timed...
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

