[jira] [Commented] (HBASE-18241) Change client.Table and client.Admin to not use HTableDescriptor

2017-06-24 Thread Biju Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16062200#comment-16062200
 ] 

Biju Nair commented on HBASE-18241:
---

{quote}
Admin#getDescriptor(TableName tableName)
{quote}

{{getTableDescriptor}} would be preferable since the method name stays the same 
as it is now, so users don't have to make changes; it also reserves the method 
name {{getDescriptor}} for exclusive use with {{Admin}} in the future. 
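
For illustration, a hypothetical call site under this proposal (only the declared 
result type changes; the method name and the call itself stay as they are today):

{code}
// Illustrative only: same method name as today, with the result typed as the
// new TableDescriptor interface instead of HTableDescriptor.
TableDescriptor td = admin.getTableDescriptor(TableName.valueOf("example_table"));
{code}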

> Change client.Table and client.Admin to not use HTableDescriptor
> 
>
> Key: HBASE-18241
> URL: https://issues.apache.org/jira/browse/HBASE-18241
> Project: HBase
>  Issue Type: Bug
>Reporter: Biju Nair
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
>
> {{HTableDescriptor}} is deprecated and scheduled to be removed in 3.0. But 
> [client.Table|https://github.com/apache/hbase/blob/a66d491892514fd4a188d6ca87d6260d8ae46184/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java#L69]
>  and 
> [client.Admin|https://github.com/apache/hbase/blob/a66d491892514fd4a188d6ca87d6260d8ae46184/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java#L198]
>  both have a {{getTableDescriptor}} method that returns {{HTableDescriptor}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18241) Change client.Table and client.Admin to not use HTableDescriptor

2017-06-24 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16062172#comment-16062172
 ] 

Chia-Ping Tsai commented on HBASE-18241:


bq. On Region, getDescriptor would imply you'd get a RegionDescriptor... but 
that is not what is happening...
How about introducing a RegionDescriptor (roughly sketched below)?
# RegionDescriptor is a read-only interface
# RegionDescriptor extends TableDescriptor
# RegionDescriptor contains all of the read-only methods from HRegionInfo
# HRegionInfo is deprecated and scheduled for removal
# RegionDescriptor is built by a RegionDescriptorBuilder
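
A rough sketch of that shape (the accessor names are borrowed from HRegionInfo's 
read-only getters and are illustrative assumptions, not a settled API):

{code}
import org.apache.hadoop.hbase.client.TableDescriptor;

// Illustrative only: a read-only view that extends TableDescriptor and exposes
// HRegionInfo-style read accessors; instances would come from a
// RegionDescriptorBuilder rather than a public constructor.
public interface RegionDescriptor extends TableDescriptor {
  byte[] getRegionName();
  byte[] getStartKey();
  byte[] getEndKey();
  long getRegionId();
  boolean isSplit();
  boolean isOffline();
}
{code}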

Admin#deleteTables, Admin#enableTables, Admin#disableTables, and 
Admin#getTableDescriptorsByTableName also return the HTD.
# Admin#getTableDescriptorsByTableName  -> Admin#getTableDescriptors
# Admin#disableTables -> It isn't a transactional API. We should encourage 
users to use Admin#listTables and Admin#disableTable instead (see the sketch 
below). It should be deprecated and it doesn't need a new API.
# Admin#enableTables -> ditto 
# Admin#deleteTables -> ditto
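
As a minimal sketch (my assumption of the intended replacement pattern, not code 
from any patch here), a caller could replace Admin#disableTables with listTables 
plus the existing single-table call:

{code}
import java.io.IOException;
import java.util.regex.Pattern;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

class DisableTablesSketch {
  // Hypothetical replacement for Admin#disableTables(Pattern): list the matching
  // tables and disable each one individually using existing Admin methods.
  static void disableMatching(Admin admin, Pattern pattern) throws IOException {
    for (TableName tn : admin.listTableNames(pattern)) {
      if (admin.isTableEnabled(tn)) {
        admin.disableTable(tn);
      }
    }
  }
}
{code}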

> Change client.Table and client.Admin to not use HTableDescriptor
> 
>
> Key: HBASE-18241
> URL: https://issues.apache.org/jira/browse/HBASE-18241
> Project: HBase
>  Issue Type: Bug
>Reporter: Biju Nair
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
>
> {{HTableDescriptor}} is deprecated and scheduled to be removed in 3.0. But 
> [client.Table|https://github.com/apache/hbase/blob/a66d491892514fd4a188d6ca87d6260d8ae46184/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java#L69]
>  and 
> [client.Admin|https://github.com/apache/hbase/blob/a66d491892514fd4a188d6ca87d6260d8ae46184/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java#L198]
>  both have a {{getTableDescriptor}} method that returns {{HTableDescriptor}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18023) Log multi-* requests for more than threshold number of rows

2017-06-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16062169#comment-16062169
 ] 

Hudson commented on HBASE-18023:


SUCCESS: Integrated in Jenkins build HBase-2.0 #103 (See 
[https://builds.apache.org/job/HBase-2.0/103/])
HBASE-18023 Log multi-* requests for more than threshold number of rows 
(elserj: rev d9dd319390695618d01eda6a6b2943d535e8394b)
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiLogThreshold.java
* (edit) hbase-common/src/main/resources/hbase-default.xml
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java


> Log multi-* requests for more than threshold number of rows
> ---
>
> Key: HBASE-18023
> URL: https://issues.apache.org/jira/browse/HBASE-18023
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Clay B.
>Assignee: David Harju
>Priority: Minor
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-18023.master.001.patch, 
> HBASE-18023.master.002.patch, HBASE-18023.master.003.patch, 
> HBASE-18023.master.004.patch
>
>
> Today, if a user happens to do something like a large multi-put, they can get 
> through request throttling (e.g. it is one request) but still crash a region 
> server with a garbage storm. We have seen regionservers hit this issue and it 
> is silent and deadly. The RS will report nothing more than a mysterious 
> garbage collection and exit out.
> Ideally, we could report a large multi-* request before starting it, in case 
> it happens to be deadly. Knowing the client, user and how many rows are 
> affected would be a good start to tracking down painful users.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18023) Log multi-* requests for more than threshold number of rows

2017-06-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16062155#comment-16062155
 ] 

Hudson commented on HBASE-18023:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3255 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3255/])
HBASE-18023 Log multi-* requests for more than threshold number of rows 
(elserj: rev 0e8e176ebd3bd17d969d17ce2b0aa3dafb93fa22)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiLogThreshold.java
* (edit) hbase-common/src/main/resources/hbase-default.xml


> Log multi-* requests for more than threshold number of rows
> ---
>
> Key: HBASE-18023
> URL: https://issues.apache.org/jira/browse/HBASE-18023
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Clay B.
>Assignee: David Harju
>Priority: Minor
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-18023.master.001.patch, 
> HBASE-18023.master.002.patch, HBASE-18023.master.003.patch, 
> HBASE-18023.master.004.patch
>
>
> Today, if a user happens to do something like a large multi-put, they can get 
> through request throttling (e.g. it is one request) but still crash a region 
> server with a garbage storm. We have seen regionservers hit this issue and it 
> is silent and deadly. The RS will report nothing more than a mysterious 
> garbage collection and exit out.
> Ideally, we could report a large multi-* request before starting it, in case 
> it happens to be deadly. Knowing the client, user and how many rows are 
> affected would be a good start to tracking down painful users.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-06-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16062150#comment-16062150
 ] 

Ted Yu commented on HBASE-17125:


{code}
+final WAL wal = 
HBaseTestingUtility.createWal(TEST_UTIL.getConfiguration(), logDir, info);
+this.region = TEST_UTIL.createLocalHRegion(info, htd, wal);
{code}
The above code relies on other tests to initialize the chunk creator.
If you run the subtest alone, you would observe an NPE like the following:
{code}
MemStoreLABImpl.getOrMakeChunk() line: 242
MemStoreLABImpl.copyCellInto(Cell) line: 118
MutableSegment(Segment).maybeCloneWithAllocator(Cell) line: 168
CompactingMemStore(AbstractMemStore).maybeCloneWithAllocator(Cell) line: 268
CompactingMemStore(AbstractMemStore).add(Cell, MemstoreSize) line: 107
CompactingMemStore(AbstractMemStore).add(Iterable, MemstoreSize) line: 101
HStore.add(Iterable, MemstoreSize) line: 711
HRegion.applyToMemstore(Store, List, boolean, MemstoreSize) line: 4001
HRegion.applyFamilyMapToMemstore(Map, MemstoreSize) line: 
3984
HRegion.doMiniBatchMutate(BatchOperation) line: 3439
HRegion.batchMutate(BatchOperation) line: 3131
HRegion.batchMutate(Mutation[], long, long) line: 3073
HRegion.batchMutate(Mutation[]) line: 3077
HRegion.doBatchMutate(Mutation) line: 3827
HRegion.put(Put) line: 2950
TestHRegion.testGetWithFilter() line: 2665
{code}
The following would allow the subtest to run alone:
{code}
+ChunkCreator.initialize(MemStoreLABImpl.CHUNK_SIZE_DEFAULT, false, 0, 0, 
0, null);
+this.region = TEST_UTIL.createLocalHRegion(info, htd, wal);
{code}
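
A minimal sketch (assuming the TestHRegion fixtures TEST_UTIL, logDir, info and 
htd from the patch) of a setup method that bootstraps the chunk creator itself, 
so the subtest no longer depends on other tests having run first:

{code}
// Sketch only: initialize the MemStore chunk creator before the region is
// created, so MemStoreLABImpl has a ChunkCreator even when this test runs alone.
@Before
public void setUpRegion() throws IOException {
  ChunkCreator.initialize(MemStoreLABImpl.CHUNK_SIZE_DEFAULT, false, 0, 0, 0, null);
  final WAL wal = HBaseTestingUtility.createWal(TEST_UTIL.getConfiguration(), logDir, info);
  this.region = TEST_UTIL.createLocalHRegion(info, htd, wal);
}
{code}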

> Inconsistent result when use filter to read data
> 
>
> Key: HBASE-17125
> URL: https://issues.apache.org/jira/browse/HBASE-17125
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: 17125-slack-13.txt, example.diff, 
> HBASE-17125.master.001.patch, HBASE-17125.master.002.patch, 
> HBASE-17125.master.002.patch, HBASE-17125.master.003.patch, 
> HBASE-17125.master.004.patch, HBASE-17125.master.005.patch, 
> HBASE-17125.master.006.patch, HBASE-17125.master.007.patch, 
> HBASE-17125.master.008.patch, HBASE-17125.master.009.patch, 
> HBASE-17125.master.009.patch, HBASE-17125.master.010.patch, 
> HBASE-17125.master.011.patch, HBASE-17125.master.011.patch, 
> HBASE-17125.master.012.patch, HBASE-17125.master.013.patch, 
> HBASE-17125.master.014.patch, HBASE-17125.master.015.patch, 
> HBASE-17125.master.016.patch, HBASE-17125.master.017.patch, 
> HBASE-17125.master.checkReturnedVersions.patch, 
> HBASE-17125.master.no-specified-filter.patch
>
>
> Assume a column's max versions is 3 and we write 4 versions of this column. The 
> oldest version is not removed immediately, but from the user's point of view it 
> is gone. When the user queries with a filter, if the filter skips a newer 
> version, the oldest version becomes visible again. Yet after the region is 
> compacted, the oldest version is never seen again. This is confusing for the 
> user: the query returns inconsistent results before and after region compaction.
> The cause is the matchColumn method of UserScanQueryMatcher. It first checks the 
> cell against the filter, then checks the number of versions needed. So if the 
> filter skips the newer version, the oldest version is returned again while it 
> has not yet been removed.
> After an offline discussion with [~Apache9] and [~fenghh], we now have two 
> solutions for this problem. The first idea is to check the number of versions 
> first, then check the cell against the filter. As the javadoc of setFilter says, 
> the filter is called after all tests for ttl, column match, deletes and max 
> versions have been run.
> {code}
>   /**
>* Apply the specified server-side filter when performing the Query.
>* Only {@link Filter#filterKeyValue(Cell)} is called AFTER all tests
>* for ttl, column match, deletes and max versions have been run.
>* @param filter filter to run on the server
>* @return this for invocation chaining
>*/
>   public Query setFilter(Filter filter) {
> this.filter = filter;
> return this;
>   }
> {code}
> But this idea has another problem: if a column's max versions is 5 and the user 
> query only needs 3 versions, checking the version count first and then the 
> filter means the result may contain fewer than 3 cells, even though there are 2 
> more versions that are never even read.
> So the second idea has three steps:
> 1. check against the max versions of this column
> 2. check the kv against the filter
> 3. check the number of versions the user needs
> But this makes the ScanQueryMatcher more complicated, and it breaks the javadoc 
> contract of Query.setFilter.
> We don't have a final solution for this problem yet. Suggestions are welcomed.

[jira] [Commented] (HBASE-18106) Redo ProcedureInfo and LockInfo

2017-06-24 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16062144#comment-16062144
 ] 

stack commented on HBASE-18106:
---

(Copied from HBASE-18240 where we add protobuf-util which can JSON-ify pb)

The JsonFormatter in protobuf-util is basic but should do the job. Here is 
output:

kalashnikov:hbase.git stack$ ./bin/hbase 
org.apache.hadoop.hbase.procedure2.TestProcedureUtil
{ "className": 
"org.apache.hadoop.hbase.procedure2.ProcedureTestingUtility$TestProcedure", 
"procId": "10", "submittedTime": "1498339510660", "state": "RUNNABLE", 
"lastUpdate": "1498339510660", "stateData": "AA==" }

adding this main on TestProcedureUtil:

public static void main(final String [] args) throws Exception
{ final TestProcedure proc1 = new TestProcedure(10); final 
ProcedureProtos.Procedure proto1 = 
ProcedureUtil.convertToProtoProcedure(proc1); JsonFormat.Printer printer = 
JsonFormat.printer().omittingInsignificantWhitespace(); 
System.out.println(printer.print(proto1)); }

For display in UI, could style the JSON and filter out state data.
For shell, could do simple one-lining (and purge the state data... since it is 
opaque. Later we might add Stringification.)

> Redo ProcedureInfo and LockInfo
> ---
>
> Key: HBASE-18106
> URL: https://issues.apache.org/jira/browse/HBASE-18106
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: stack
> Fix For: 2.0.0
>
>
> ProcedureInfo was introduced as a lowest-common-denominator POJO that could 
> be used as a facade on PB Procedures. It was good for showing the state of the 
> Procedure framework in shell and UI.
> It's a bit weird though. It's up in hbase-common rather than in Procedure, and 
> it can only ever show a subset of the Procedure info.
> I was thinking we could use the pb3.1 pb->JSON utility instead and emit a 
> JSON String wherever we need to export a view on procedure internals.
> This issue is about exploring this possibility. It would depend on our having 
> an upgraded guava (so probably depends on the 'pre-build' project).
> What needs fixing in ProcedureInfo and LockInfo is described in 
> https://docs.google.com/document/d/1eVKa7FHdeoJ1-9o8yZcOTAQbv0u0bblBlCCzVSIn69g/edit#heading=h.kid1jzo114xw



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18106) Redo ProcedureInfo and LockInfo

2017-06-24 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18106:
--
Priority: Critical  (was: Major)

> Redo ProcedureInfo and LockInfo
> ---
>
> Key: HBASE-18106
> URL: https://issues.apache.org/jira/browse/HBASE-18106
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.0
>
>
> ProcedureInfo was introduced as a lowest-common-denominator POJO that could 
> be used as a facade on PB Procedures. It was good for showing the state of the 
> Procedure framework in shell and UI.
> It's a bit weird though. It's up in hbase-common rather than in Procedure, and 
> it can only ever show a subset of the Procedure info.
> I was thinking we could use the pb3.1 pb->JSON utility instead and emit a 
> JSON String wherever we need to export a view on procedure internals.
> This issue is about exploring this possibility. It would depend on our having 
> an upgraded guava (so probably depends on the 'pre-build' project).
> What needs fixing in ProcedureInfo and LockInfo is described in 
> https://docs.google.com/document/d/1eVKa7FHdeoJ1-9o8yZcOTAQbv0u0bblBlCCzVSIn69g/edit#heading=h.kid1jzo114xw



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18240) Add hbase-thirdparty, a project with hbase utility including an hbase-shaded-thirdparty module with guava, netty, etc.

2017-06-24 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16062142#comment-16062142
 ] 

stack commented on HBASE-18240:
---

The JsonFormatter in protobuf-util is basic but should do the job. Here is 
output:

kalashnikov:hbase.git stack$ ./bin/hbase 
org.apache.hadoop.hbase.procedure2.TestProcedureUtil
{
  "className": 
"org.apache.hadoop.hbase.procedure2.ProcedureTestingUtility$TestProcedure",
  "procId": "10",
  "submittedTime": "1498339510660",
  "state": "RUNNABLE",
  "lastUpdate": "1498339510660",
  "stateData": "AA=="
}

adding this main on TestProcedureUtil:

  public static void main(final String [] args) throws Exception {
final TestProcedure proc1 = new TestProcedure(10);
final ProcedureProtos.Procedure proto1 = 
ProcedureUtil.convertToProtoProcedure(proc1);
JsonFormat.Printer printer = 
JsonFormat.printer().omittingInsignificantWhitespace();
System.out.println(printer.print(proto1));
  }

For display in UI, could style the JSON and filter out state data.

For shell, could do simple one-lining (and purge the state data... since it is 
opaque. Later we might add Stringification.)

> Add hbase-thirdparty, a project with hbase utility including an 
> hbase-shaded-thirdparty module with guava, netty, etc.
> --
>
> Key: HBASE-18240
> URL: https://issues.apache.org/jira/browse/HBASE-18240
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: HBASE-18240.master.001.patch, hbase-auxillary.tgz
>
>
> This issue is about adding a new related project to host hbase auxiliary 
> utility. In this new project, the first thing we'd add is a module to host 
> shaded versions of third-party libraries.
> This task comes out of the discussion held here 
> http://apache-hbase.679495.n3.nabble.com/DISCUSS-More-Shading-td4083025.html 
> where one conclusion of the DISCUSSION was "... pushing this part forward 
> with some code is the next logical step. Seems to be consensus about taking 
> our known internal dependencies and performing this shade magic."



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-06-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16062138#comment-16062138
 ] 

Ted Yu commented on HBASE-17125:


{code}
+public class ColumnTrackerWrapper implements ColumnTracker {
{code}
There may be more wrappers for ColumnTracker in the future.
How about calling this one ColumnTrackerWithVersions (or something like 
that)?

> Inconsistent result when use filter to read data
> 
>
> Key: HBASE-17125
> URL: https://issues.apache.org/jira/browse/HBASE-17125
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: 17125-slack-13.txt, example.diff, 
> HBASE-17125.master.001.patch, HBASE-17125.master.002.patch, 
> HBASE-17125.master.002.patch, HBASE-17125.master.003.patch, 
> HBASE-17125.master.004.patch, HBASE-17125.master.005.patch, 
> HBASE-17125.master.006.patch, HBASE-17125.master.007.patch, 
> HBASE-17125.master.008.patch, HBASE-17125.master.009.patch, 
> HBASE-17125.master.009.patch, HBASE-17125.master.010.patch, 
> HBASE-17125.master.011.patch, HBASE-17125.master.011.patch, 
> HBASE-17125.master.012.patch, HBASE-17125.master.013.patch, 
> HBASE-17125.master.014.patch, HBASE-17125.master.015.patch, 
> HBASE-17125.master.016.patch, HBASE-17125.master.017.patch, 
> HBASE-17125.master.checkReturnedVersions.patch, 
> HBASE-17125.master.no-specified-filter.patch
>
>
> Assume a column's max versions is 3 and we write 4 versions of this column. The 
> oldest version is not removed immediately, but from the user's point of view it 
> is gone. When the user queries with a filter, if the filter skips a newer 
> version, the oldest version becomes visible again. Yet after the region is 
> compacted, the oldest version is never seen again. This is confusing for the 
> user: the query returns inconsistent results before and after region compaction.
> The cause is the matchColumn method of UserScanQueryMatcher. It first checks the 
> cell against the filter, then checks the number of versions needed. So if the 
> filter skips the newer version, the oldest version is returned again while it 
> has not yet been removed.
> After an offline discussion with [~Apache9] and [~fenghh], we now have two 
> solutions for this problem. The first idea is to check the number of versions 
> first, then check the cell against the filter. As the javadoc of setFilter says, 
> the filter is called after all tests for ttl, column match, deletes and max 
> versions have been run.
> {code}
>   /**
>* Apply the specified server-side filter when performing the Query.
>* Only {@link Filter#filterKeyValue(Cell)} is called AFTER all tests
>* for ttl, column match, deletes and max versions have been run.
>* @param filter filter to run on the server
>* @return this for invocation chaining
>*/
>   public Query setFilter(Filter filter) {
> this.filter = filter;
> return this;
>   }
> {code}
> But this idea has another problem: if a column's max versions is 5 and the user 
> query only needs 3 versions, checking the version count first and then the 
> filter means the result may contain fewer than 3 cells, even though there are 2 
> more versions that are never even read.
> So the second idea has three steps:
> 1. check against the max versions of this column
> 2. check the kv against the filter
> 3. check the number of versions the user needs
> But this makes the ScanQueryMatcher more complicated, and it breaks the javadoc 
> contract of Query.setFilter.
> We don't have a final solution for this problem yet. Suggestions are welcomed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-17908) Upgrade guava

2017-06-24 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17908:
--
Attachment: HBASE-17908.master.004.patch

> Upgrade guava
> -
>
> Key: HBASE-17908
> URL: https://issues.apache.org/jira/browse/HBASE-17908
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies
>Reporter: Balazs Meszaros
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-17908.master.001.patch, 
> HBASE-17908.master.002.patch, HBASE-17908.master.003.patch, 
> HBASE-17908.master.004.patch
>
>
> Currently we are using guava 12.0.1, but the latest version is 21.0. 
> Upgrading guava is always a hassle because it is not always backward 
> compatible with itself.
> Currently I think there are to approaches:
> 1. Upgrade guava to the newest version (21.0) and shade it.
> 2. Upgrade guava to a version which does not break or builds (15.0).
> If we can update it, some dependencies should be removed: 
> commons-collections, commons-codec, ...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18234) Revisit the async admin api

2017-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16062122#comment-16062122
 ] 

Hadoop QA commented on HBASE-18234:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
54s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 44s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
16s {color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 1s 
{color} | {color:red} hbase-client in master has 4 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 58s 
{color} | {color:red} hbase-client in master has 4 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 58s 
{color} | {color:red} hbase-server in master has 10 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 50s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 3m 36s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 47s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 47s {color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
9s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 178 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 12s 
{color} | {color:red} The patch causes 15 errors with Hadoop v2.6.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 12s 
{color} | {color:red} The patch causes 15 errors with Hadoop v2.6.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 15s 
{color} | {color:red} The patch causes 15 errors with Hadoop v2.6.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 8m 14s 
{color} | {color:red} The patch causes 15 errors with Hadoop v2.6.4. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 14s 
{color} | {color:red} The patch causes 15 errors with Hadoop v2.6.5. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 12m 15s 
{color} | {color:red} The patch causes 15 errors with Hadoop v2.7.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 14m 12s 
{color} | {color:red} The patch causes 15 errors with Hadoop v2.7.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 16m 11s 
{color} | {color:red} The patch causes 15 errors with Hadoop v2.7.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 18m 16s 
{color} | {color:red} The patch causes 15 errors with Hadoop v3.0.0-alpha3. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| 

[jira] [Created] (HBASE-18264) Update pom plugins

2017-06-24 Thread stack (JIRA)
stack created HBASE-18264:
-

 Summary: Update pom plugins
 Key: HBASE-18264
 URL: https://issues.apache.org/jira/browse/HBASE-18264
 Project: HBase
  Issue Type: Sub-task
Reporter: stack


A bunch are old. Let's update. [~balazs.meszaros], do you want to have a go at 
this, sir?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18023) Log multi-* requests for more than threshold number of rows

2017-06-24 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16062108#comment-16062108
 ] 

Josh Elser commented on HBASE-18023:


[~dharju], actually, it doesn't apply cleanly to branch-1 (likely due to the 
shaded-protocol stuff in 2.0+). If you could put up a patch for branch-1, I'd 
be happy to commit it :)

> Log multi-* requests for more than threshold number of rows
> ---
>
> Key: HBASE-18023
> URL: https://issues.apache.org/jira/browse/HBASE-18023
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Clay B.
>Assignee: David Harju
>Priority: Minor
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-18023.master.001.patch, 
> HBASE-18023.master.002.patch, HBASE-18023.master.003.patch, 
> HBASE-18023.master.004.patch
>
>
> Today, if a user happens to do something like a large multi-put, they can get 
> through request throttling (e.g. it is one request) but still crash a region 
> server with a garbage storm. We have seen regionservers hit this issue and it 
> is silent and deadly. The RS will report nothing more than a mysterious 
> garbage collection and exit out.
> Ideally, we could report a large multi-* request before starting it, in case 
> it happens to be deadly. Knowing the client, user and how many rows are 
> affected would be a good start to tracking down painful users.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18023) Log multi-* requests for more than threshold number of rows

2017-06-24 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-18023:
---
Fix Version/s: (was: 1.4.0)

> Log multi-* requests for more than threshold number of rows
> ---
>
> Key: HBASE-18023
> URL: https://issues.apache.org/jira/browse/HBASE-18023
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Clay B.
>Assignee: David Harju
>Priority: Minor
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-18023.master.001.patch, 
> HBASE-18023.master.002.patch, HBASE-18023.master.003.patch, 
> HBASE-18023.master.004.patch
>
>
> Today, if a user happens to do something like a large multi-put, they can get 
> through request throttling (e.g. it is one request) but still crash a region 
> server with a garbage storm. We have seen regionservers hit this issue and it 
> is silent and deadly. The RS will report nothing more than a mysterious 
> garbage collection and exit out.
> Ideally, we could report a large multi-* request before starting it, in case 
> it happens to be deadly. Knowing the client, user and how many rows are 
> affected would be a good start to tracking down painful users.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18241) Change client.Table and client.Admin to not use HTableDescriptor

2017-06-24 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16062106#comment-16062106
 ] 

stack commented on HBASE-18241:
---

I think in Region it could be getTableDescriptor -- i.e. more specific. 
getDescriptor makes sense on Table because you are asking the Table instance 
for its descriptor... so it makes sense you'd get a TableDescriptor back. On 
Region, getDescriptor would imply you'd get a RegionDescriptor... but that is 
not what is happening... so getTableDescriptor makes sense to me on Region. 
For Admin, we could get away with getDescriptor since we are passing in a 
TableName as argument... so we'd be getting the Descriptor associated w/ a 
specific Table.

Admin#getDescriptor(TableName tableName)
Table#getDescriptor()
Region#getTableDescriptor()

What do you lot think?
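
A minimal sketch of those signatures against the existing TableName, 
TableDescriptor and HTableDescriptor types (keeping the old HTableDescriptor 
methods around as @Deprecated is my assumption, not something decided in this 
thread; the *Sketch names just avoid clashing with the real interfaces):

{code}
interface AdminSketch {
  TableDescriptor getDescriptor(TableName tableName) throws IOException;
  @Deprecated
  HTableDescriptor getTableDescriptor(TableName tableName) throws IOException;
}

interface TableSketch {
  TableDescriptor getDescriptor() throws IOException;
  @Deprecated
  HTableDescriptor getTableDescriptor() throws IOException;
}

interface RegionSketch {
  TableDescriptor getTableDescriptor();
}
{code}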

> Change client.Table and client.Admin to not use HTableDescriptor
> 
>
> Key: HBASE-18241
> URL: https://issues.apache.org/jira/browse/HBASE-18241
> Project: HBase
>  Issue Type: Bug
>Reporter: Biju Nair
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
>
> {{HTableDescriptor}} is deprecated and scheduled to be removed in 3.0. But 
> [client.Table|https://github.com/apache/hbase/blob/a66d491892514fd4a188d6ca87d6260d8ae46184/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java#L69]
>  and 
> [client.Admin|https://github.com/apache/hbase/blob/a66d491892514fd4a188d6ca87d6260d8ae46184/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java#L198]
>  both have a {{getTableDescriptor}} method that returns {{HTableDescriptor}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18023) Log multi-* requests for more than threshold number of rows

2017-06-24 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-18023:
---
Fix Version/s: 1.4.0
   3.0.0
   2.0.0

> Log multi-* requests for more than threshold number of rows
> ---
>
> Key: HBASE-18023
> URL: https://issues.apache.org/jira/browse/HBASE-18023
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Clay B.
>Assignee: David Harju
>Priority: Minor
> Fix For: 2.0.0, 3.0.0, 1.4.0
>
> Attachments: HBASE-18023.master.001.patch, 
> HBASE-18023.master.002.patch, HBASE-18023.master.003.patch, 
> HBASE-18023.master.004.patch
>
>
> Today, if a user happens to do something like a large multi-put, they can get 
> through request throttling (e.g. it is one request) but still crash a region 
> server with a garbage storm. We have seen regionservers hit this issue and it 
> is silent and deadly. The RS will report nothing more than a mysterious 
> garbage collection and exit out.
> Ideally, we could report a large multi-* request before starting it, in case 
> it happens to be deadly. Knowing the client, user and how many rows are 
> affected would be a good start to tracking down painful users.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18023) Log multi-* requests for more than threshold number of rows

2017-06-24 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16062101#comment-16062101
 ] 

Josh Elser commented on HBASE-18023:


LGTM, thanks for the edits. Let me commit this to master, branch-2 and branch-1 
for now.

We can pull it back to the other maintenance release lines if there's interest.

> Log multi-* requests for more than threshold number of rows
> ---
>
> Key: HBASE-18023
> URL: https://issues.apache.org/jira/browse/HBASE-18023
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Clay B.
>Assignee: David Harju
>Priority: Minor
> Fix For: 2.0.0, 3.0.0, 1.4.0
>
> Attachments: HBASE-18023.master.001.patch, 
> HBASE-18023.master.002.patch, HBASE-18023.master.003.patch, 
> HBASE-18023.master.004.patch
>
>
> Today, if a user happens to do something like a large multi-put, they can get 
> through request throttling (e.g. it is one request) but still crash a region 
> server with a garbage storm. We have seen regionservers hit this issue and it 
> is silent and deadly. The RS will report nothing more than a mysterious 
> garbage collection and exit out.
> Ideally, we could report a large multi-* request before starting it, in case 
> it happens to be deadly. Knowing the client, user and how many rows are 
> affected would be a good start to tracking down painful users.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18240) Add hbase-thirdparty, a project with hbase utility including an hbase-shaded-thirdparty module with guava, netty, etc.

2017-06-24 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18240:
--
Release Note: 
Adds a new project, hbase-thirdparty, at 
https://git-wip-us.apache.org/repos/asf/hbase-thirdparty used by core hbase.

This project packages relocated third-party libraries used by Apache HBase such 
as protobuf, guava, and netty among others. HBase core depends on it.

It has two submodules: one to patch and then relocate (shade) protobuf; the 
other relocates a bundle of other (unpatched) libs used by hbase. This latter 
set includes protobuf-util, netty-all, gson, and guava.

All shading is done using the same relocation offset of 
org.apache.hadoop.hbase.shaded; we add this prefix to the relocated thirdparty 
library class names.
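
For illustration (assuming guava's Lists is among the relocated classes), 
downstream hbase code would then reference the relocated copy under this prefix:

{code}
// Original coordinates:  com.google.common.collect.Lists
// Relocated coordinates: org.apache.hadoop.hbase.shaded.com.google.common.collect.Lists
import org.apache.hadoop.hbase.shaded.com.google.common.collect.Lists;
{code}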

See the pom.xml for the explicit version of each third-party lib included (of 
note, we update our internal protobuf from 3.1.0 to 3.3.1).

Note that in hbase-shaded-protobuf, we unzip the protobuf jar to src/main/java
rather than to a dir under target because the jar plugin wants src here (its
hard to convince it otherwise).

> Add hbase-thirdparty, a project with hbase utility including an 
> hbase-shaded-thirdparty module with guava, netty, etc.
> --
>
> Key: HBASE-18240
> URL: https://issues.apache.org/jira/browse/HBASE-18240
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: HBASE-18240.master.001.patch, hbase-auxillary.tgz
>
>
> This issue is about adding a new related project to host hbase auxiliary 
> utility. In this new project, the first thing we'd add is a module to host 
> shaded versions of third-party libraries.
> This task comes out of the discussion held here 
> http://apache-hbase.679495.n3.nabble.com/DISCUSS-More-Shading-td4083025.html 
> where one conclusion of the DISCUSSION was "... pushing this part forward 
> with some code is the next logical step. Seems to be consensus about taking 
> our known internal dependencies and performing this shade magic."



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18240) Add hbase-thirdparty, a project with hbase utility including an hbase-shaded-thirdparty module with guava, netty, etc.

2017-06-24 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16062099#comment-16062099
 ] 

stack commented on HBASE-18240:
---

Pushed patch. The build now makes two jars: hbase-shaded-protobuf, which is a 
patched and shaded protobuf (3.3.1 -- an upgrade from our current pb3.1.0), and 
then a second jar which has the other, non-patched, third-party libs: netty, 
guava, gson, and the protobuf-util lib.

Took a while but this is a candidate for release. Let me play with it a while to 
make sure it is good. Then I can put up a jar for the PMC to vote on (there is 
little in this module other than poms and patches -- all it does is relocation, 
patching, and repackaging).

TODO:

+ Change our shading offset in this project and in hbase from 
org.apache.hadoop.hbase.shaded to org.apache.hbase.shaded; i.e. undo the 
'hadoop'. Relocation to o.a.hbase will underline the fact that this stuff is 
'unusual', whether generated or relocated other people's libs.
+ Change hbase so we generate protobufs inline with the build. We couldn't do 
this previously because of our protobuf patching, but now that this is done 
elsewhere, we can change the mainline build. Simplifies the build.
+ Move all hbase guava use to depend on this shaded guava.
+ Move all netty, etc., use to depend on the shaded artifacts from this 
hbase-thirdparty lib.

Later, make use of protobuf-util so we can JSON-ify protobufs and use this 
wherever we have ProcedureInfo and LockInfo in our API.

> Add hbase-thirdparty, a project with hbase utility including an 
> hbase-shaded-thirdparty module with guava, netty, etc.
> --
>
> Key: HBASE-18240
> URL: https://issues.apache.org/jira/browse/HBASE-18240
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: HBASE-18240.master.001.patch, hbase-auxillary.tgz
>
>
> This issue is about adding a new related project to host hbase auxiliary 
> utility. In this new project, the first thing we'd add is a module to host 
> shaded versions of third-party libraries.
> This task comes out of the discussion held here 
> http://apache-hbase.679495.n3.nabble.com/DISCUSS-More-Shading-td4083025.html 
> where one conclusion of the DISCUSSION was "... pushing this part forward 
> with some code is the next logical step. Seems to be consensus about taking 
> our known internal dependencies and performing this shade magic."



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18263) Resolve NPE in backup Master UI when access to procedures.jsp

2017-06-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16062083#comment-16062083
 ] 

Hudson commented on HBASE-18263:


FAILURE: Integrated in Jenkins build HBase-2.0 #102 (See 
[https://builds.apache.org/job/HBase-2.0/102/])
HBASE-18263 Resolve NPE in backup Master UI when accessing (tedyu: rev 
a4c26526c06213ba3f891d86a9b30279449b8579)
* (edit) 
hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon
* (edit) hbase-server/src/main/resources/hbase-webapps/master/tablesDetailed.jsp


> Resolve NPE in backup Master UI when access to procedures.jsp
> -
>
> Key: HBASE-18263
> URL: https://issues.apache.org/jira/browse/HBASE-18263
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 1.2.0, 2.0.0-alpha-1
>Reporter: Shibin Zhang
>Assignee: Shibin Zhang
>Priority: Trivial
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18263.patch
>
>
> When accessing procedures.jsp, an NPE occurs in the backup Master UI:
> HTTP ERROR 500
> Problem accessing /procedures.jsp. Reason:
> INTERNAL_SERVER_ERROR
> Caused by:
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.generated.master.procedures_jsp._jspService(procedures_jsp.java:67)
>   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
>   at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1354)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> On the server side, only the active master initializes the procedureStore in 
> HMaster, so I think it would be better to remove the procedures.jsp link from 
> the backup Master UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16062069#comment-16062069
 ] 

Hadoop QA commented on HBASE-17125:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
36s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 57s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
28s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 50s 
{color} | {color:red} hbase-client in master has 4 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 5m 54s 
{color} | {color:red} hbase-server in master has 10 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
58m 43s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha3. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 2s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 33s 
{color} | {color:red} hbase-client generated 5 new + 1 unchanged - 0 fixed = 6 
total (was 1) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 41s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 54s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 134m 51s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.locking.TestLockProcedure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12874373/HBASE-17125.master.017.patch
 |
| JIRA Issue | HBASE-17125 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 4c11bb685b75 4.8.3-std-1 #1 SMP Fri Oct 21 11:15:43 UTC 2016 
x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 96aca6b |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Commented] (HBASE-18241) Change client.Table and client.Admin to not use HTableDescriptor

2017-06-24 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16062057#comment-16062057
 ] 

Chia-Ping Tsai commented on HBASE-18241:


Region#getTableDesc also needs a new API name. Any suggestions? 
Region#getDescriptor is fine IMO.

> Change client.Table and client.Admin to not use HTableDescriptor
> 
>
> Key: HBASE-18241
> URL: https://issues.apache.org/jira/browse/HBASE-18241
> Project: HBase
>  Issue Type: Bug
>Reporter: Biju Nair
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
>
> {{HTableDescriptor}} is deprecated and scheduled to be removed in 3.0. But 
> [client.Table|https://github.com/apache/hbase/blob/a66d491892514fd4a188d6ca87d6260d8ae46184/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java#L69]
>  and 
> [client.Admin|https://github.com/apache/hbase/blob/a66d491892514fd4a188d6ca87d6260d8ae46184/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java#L198]
>  both have a {{getTableDescriptor}} method that returns {{HTableDescriptor}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18241) Change client.Table and client.Admin to not use HTableDescriptor

2017-06-24 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16062054#comment-16062054
 ] 

Chia-Ping Tsai commented on HBASE-18241:


bq. We should add Table#getDescriptor and deprecate Table#getTableDescriptor... 
ditto for Admin?
+1

> Change client.Table and client.Admin to not use HTableDescriptor
> 
>
> Key: HBASE-18241
> URL: https://issues.apache.org/jira/browse/HBASE-18241
> Project: HBase
>  Issue Type: Bug
>Reporter: Biju Nair
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
>
> {{HTableDescriptor}} is deprecated and scheduled to be removed in 3.0. But 
> [client.Table|https://github.com/apache/hbase/blob/a66d491892514fd4a188d6ca87d6260d8ae46184/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java#L69]
>  and 
> [client.Admin|https://github.com/apache/hbase/blob/a66d491892514fd4a188d6ca87d6260d8ae46184/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java#L198]
>  both have a {{getTableDescriptor}} method that returns {{HTableDescriptor}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18234) Revisit the async admin api

2017-06-24 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-18234:
---
Attachment: HBASE-18234.master.009.patch

Attaching a 009 patch to fix the failed UT.

> Revisit the async admin api
> ---
>
> Key: HBASE-18234
> URL: https://issues.apache.org/jira/browse/HBASE-18234
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18234.master.001.patch, 
> HBASE-18234.master.002.patch, HBASE-18234.master.003.patch, 
> HBASE-18234.master.004.patch, HBASE-18234.master.005.patch, 
> HBASE-18234.master.006.patch, HBASE-18234.master.006.patch, 
> HBASE-18234.master.006.patch, HBASE-18234.master.007.patch, 
> HBASE-18234.master.008.patch, HBASE-18234.master.009.patch
>
>
> 1. Update the balance method name. 
> balancer -> balance
> setBalancerRunning -> setBalancerOn
> isBalancerEnabled -> isBalancerOn
> 2. Use HRegionLocation instead of Pair
> 3. Remove the closeRegionWithEncodedRegionName method, because all other APIs 
> can handle both the region name and the encoded region name, so a separate 
> method for the encoded name is not needed.
> 4. Unify the region name parameter's type to byte[]; the region name may be a 
> full name or an encoded name.
> 5. Unify the server name parameter's type to ServerName. Some APIs support a 
> null server name, so use Optional instead.
> 6. Unify the table name parameter's type to TableName.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-17125) Inconsistent result when use filter to read data

2017-06-24 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-17125:
---
Attachment: HBASE-17125.master.017.patch

Attaching a 017 patch, which fixes the failed UT.

> Inconsistent result when use filter to read data
> 
>
> Key: HBASE-17125
> URL: https://issues.apache.org/jira/browse/HBASE-17125
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: 17125-slack-13.txt, example.diff, 
> HBASE-17125.master.001.patch, HBASE-17125.master.002.patch, 
> HBASE-17125.master.002.patch, HBASE-17125.master.003.patch, 
> HBASE-17125.master.004.patch, HBASE-17125.master.005.patch, 
> HBASE-17125.master.006.patch, HBASE-17125.master.007.patch, 
> HBASE-17125.master.008.patch, HBASE-17125.master.009.patch, 
> HBASE-17125.master.009.patch, HBASE-17125.master.010.patch, 
> HBASE-17125.master.011.patch, HBASE-17125.master.011.patch, 
> HBASE-17125.master.012.patch, HBASE-17125.master.013.patch, 
> HBASE-17125.master.014.patch, HBASE-17125.master.015.patch, 
> HBASE-17125.master.016.patch, HBASE-17125.master.017.patch, 
> HBASE-17125.master.checkReturnedVersions.patch, 
> HBASE-17125.master.no-specified-filter.patch
>
>
> Assume a column's max versions is 3 and we write 4 versions of this column. The 
> oldest version is not removed immediately, but from the user's point of view it 
> is gone. When the user queries with a filter, if the filter skips a newer 
> version, the oldest version becomes visible again. Yet after the region is 
> compacted, the oldest version is never seen again. This is confusing for the 
> user: the query returns inconsistent results before and after region compaction.
> The cause is the matchColumn method of UserScanQueryMatcher. It first checks the 
> cell against the filter, then checks the number of versions needed. So if the 
> filter skips the newer version, the oldest version is returned again while it 
> has not yet been removed.
> After an offline discussion with [~Apache9] and [~fenghh], we now have two 
> solutions for this problem. The first idea is to check the number of versions 
> first, then check the cell against the filter. As the javadoc of setFilter says, 
> the filter is called after all tests for ttl, column match, deletes and max 
> versions have been run.
> {code}
>   /**
>* Apply the specified server-side filter when performing the Query.
>* Only {@link Filter#filterKeyValue(Cell)} is called AFTER all tests
>* for ttl, column match, deletes and max versions have been run.
>* @param filter filter to run on the server
>* @return this for invocation chaining
>*/
>   public Query setFilter(Filter filter) {
> this.filter = filter;
> return this;
>   }
> {code}
> But this idea has another problem: if a column's max versions is 5 and the 
> user's query only needs 3 versions, the versions are counted first and the 
> filter is applied afterwards, so the result may contain fewer than 3 cells 
> even though 2 more versions were never read.
> So the second idea has three steps (sketched below):
> 1. check against the max versions of this column
> 2. check the cell against the filter
> 3. check the number of versions the user needs
> But this makes the ScanQueryMatcher more complicated, and it breaks the 
> javadoc of Query.setFilter.
> We don't have a final solution for this problem yet. Suggestions are welcome.
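
For illustration only, here is a minimal, self-contained sketch (not HBase's 
ScanQueryMatcher) of the three-step order above; the toy in-memory model of one 
column's versions and every name in it are invented for the example:
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Toy model of the proposed matching order for a single column:
//   1. drop versions beyond the column's max-versions setting
//   2. apply the user filter
//   3. keep at most the number of versions the scan asked for
public class VersionFilterOrderSketch {

  static List<Long> match(List<Long> newestFirstTimestamps, int columnMaxVersions,
      Predicate<Long> filter, int scanMaxVersions) {
    List<Long> result = new ArrayList<>();
    int consulted = 0;
    for (Long ts : newestFirstTimestamps) {
      if (++consulted > columnMaxVersions) {
        break;                      // step 1: column max versions
      }
      if (!filter.test(ts)) {
        continue;                   // step 2: user filter
      }
      result.add(ts);
      if (result.size() >= scanMaxVersions) {
        break;                      // step 3: versions the user asked for
      }
    }
    return result;
  }

  public static void main(String[] args) {
    // The column keeps 3 versions, 4 were written, and the filter skips the newest.
    // The 4th (oldest) version is never consulted, so the result is the same
    // before and after compaction: [300, 200].
    System.out.println(match(List.of(400L, 300L, 200L, 100L), 3, ts -> ts != 400L, 3));
  }
}
{code}
With the first idea (count versions before filtering, with no third step), a 
scan asking for fewer versions than the column keeps could receive fewer cells 
than requested, which is exactly the drawback described above.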



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18263) Resolve NPE in backup Master UI when access to procedures.jsp

2017-06-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18263:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0-alpha-2
   3.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Shibin.

> Resolve NPE in backup Master UI when access to procedures.jsp
> -
>
> Key: HBASE-18263
> URL: https://issues.apache.org/jira/browse/HBASE-18263
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 1.2.0, 2.0.0-alpha-1
>Reporter: Shibin Zhang
>Assignee: Shibin Zhang
>Priority: Trivial
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18263.patch
>
>
> When accessing procedures.jsp, an NPE occurs in the backup master UI:
> HTTP ERROR 500
> Problem accessing /procedures.jsp. Reason:
> INTERNAL_SERVER_ERROR
> Caused by:
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.generated.master.procedures_jsp._jspService(procedures_jsp.java:67)
>   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
>   at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1354)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> On the server side, only the active master initializes the procedureStore in 
> HMaster, so I think it would be better to remove the procedures.jsp link from 
> the backup Master UI.
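
Not part of the patch (which removes the link from the backup master pages), 
but purely to illustrate the failure mode, a self-contained sketch of a page 
handler that guards against a store that was never initialized; ProcedureStore 
here is a stand-in interface, not the real HBase class:
{code}
// Toy example only, not HBase code: illustrates why a backup master NPEs when
// a page dereferences a store that only the active master initializes.
public class ProceduresPageSketch {

  interface ProcedureStore {          // stand-in for the real procedure store
    int numProcedures();
  }

  static String render(ProcedureStore store) {
    if (store == null) {
      // On a backup master the store was never initialized; guard (or simply
      // do not render the link, as the committed patch does).
      return "Procedures are only available on the active master.";
    }
    return "Procedures: " + store.numProcedures();
  }

  public static void main(String[] args) {
    System.out.println(render(null));       // backup master: no NPE
    System.out.println(render(() -> 42));   // active master
  }
}
{code}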



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18263) Resolve NPE in backup Master UI when access to procedures.jsp

2017-06-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061995#comment-16061995
 ] 

Hudson commented on HBASE-18263:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3253 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3253/])
HBASE-18263 Resolve NPE in backup Master UI when accessing (tedyu: rev 
96aca6b15392e9bdc611eee7e3273f424730cbd7)
* (edit) hbase-server/src/main/resources/hbase-webapps/master/tablesDetailed.jsp
* (edit) 
hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/MasterStatusTmpl.jamon


> Resolve NPE in backup Master UI when access to procedures.jsp
> -
>
> Key: HBASE-18263
> URL: https://issues.apache.org/jira/browse/HBASE-18263
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 1.2.0, 2.0.0-alpha-1
>Reporter: Shibin Zhang
>Assignee: Shibin Zhang
>Priority: Trivial
> Attachments: HBASE-18263.patch
>
>
> When accessing procedures.jsp, an NPE occurs in the backup master UI:
> HTTP ERROR 500
> Problem accessing /procedures.jsp. Reason:
> INTERNAL_SERVER_ERROR
> Caused by:
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.generated.master.procedures_jsp._jspService(procedures_jsp.java:67)
>   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
>   at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1354)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> On the server side, only the active master initializes the procedureStore in 
> HMaster, so I think it would be better to remove the procedures.jsp link from 
> the backup Master UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18234) Revisit the async admin api

2017-06-24 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061973#comment-16061973
 ] 

Duo Zhang commented on HBASE-18234:
---

+1. But why so many failed UTs?

> Revisit the async admin api
> ---
>
> Key: HBASE-18234
> URL: https://issues.apache.org/jira/browse/HBASE-18234
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18234.master.001.patch, 
> HBASE-18234.master.002.patch, HBASE-18234.master.003.patch, 
> HBASE-18234.master.004.patch, HBASE-18234.master.005.patch, 
> HBASE-18234.master.006.patch, HBASE-18234.master.006.patch, 
> HBASE-18234.master.006.patch, HBASE-18234.master.007.patch, 
> HBASE-18234.master.008.patch
>
>
> 1. Update the balance method name. 
> balancer -> balance
> setBalancerRunning -> setBalancerOn
> isBalancerEnabled -> isBalancerOn
> 2. Use HRegionLocation instead of Pair
> 3. Remove the closeRegionWithEncodedRegionName method. Because all other APIs 
> can handle both the region name and the encoded region name, a separate 
> method for the encoded name is not needed.
> 4. Unify the region name parameter's type to byte[]. A region name may be 
> either the full name or the encoded name.
> 5. Unify the server name parameter's type to ServerName. Some APIs accept 
> null for the server name, so use Optional instead (sketched below).
> 6. Unify the table name parameter's type to TableName.
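
As an illustration of point 5 only (these are not the real AsyncAdmin 
signatures, and ServerName below is a stand-in class), a sketch of why an 
Optional parameter is clearer than a nullable one:
{code}
import java.util.Optional;

// Toy example only; not the real HBase admin API.
public class OptionalParamSketch {

  static final class ServerName {   // stand-in for org.apache.hadoop.hbase.ServerName
    final String host;
    final int port;
    ServerName(String host, int port) { this.host = host; this.port = port; }
    @Override public String toString() { return host + ":" + port; }
  }

  // Nullable variant: nothing in the signature says "no destination" is legal.
  static void moveNullable(byte[] regionName, ServerName destination) {
    System.out.println(destination == null
        ? "move to a server chosen by the balancer"
        : "move to " + destination);
  }

  // Optional variant: the signature itself documents that the destination may
  // be absent, which is what the API revisit proposes.
  static void moveOptional(byte[] regionName, Optional<ServerName> destination) {
    System.out.println(destination.map(d -> "move to " + d)
        .orElse("move to a server chosen by the balancer"));
  }

  public static void main(String[] args) {
    byte[] region = "some-region".getBytes();
    moveNullable(region, null);
    moveOptional(region, Optional.empty());
    moveOptional(region, Optional.of(new ServerName("rs1.example.com", 16020)));
  }
}
{code}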



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18164) Much faster locality cost function and candidate generator

2017-06-24 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061957#comment-16061957
 ] 

Chia-Ping Tsai commented on HBASE-18164:


bq. Do you mind pushing an addendum (to all related branches)?
I just followed the log message, so I'm not sure whether the zero value of 
bestLocality is the real reason. Additionally, it is weird that 
TestRegionRebalancing passed on master. [~kahliloppenheimer] What do you think 
about the bestLocality?


> Much faster locality cost function and candidate generator
> --
>
> Key: HBASE-18164
> URL: https://issues.apache.org/jira/browse/HBASE-18164
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: Kahlil Oppenheimer
>Assignee: Kahlil Oppenheimer
>Priority: Critical
> Fix For: 3.0.0, 1.4.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18164-00.patch, HBASE-18164-01.patch, 
> HBASE-18164-02.patch, HBASE-18164-04.patch, HBASE-18164-05.patch
>
>
> We noticed that the stochastic load balancer was not scaling well with 
> cluster size. That is to say, on our smaller clusters (~17 tables, ~12 
> region servers, ~5k regions), the balancer considers ~100,000 cluster 
> configurations in 60s per balancer run, but only ~5,000 per 60s on our bigger 
> clusters (~82 tables, ~160 region servers, ~13k regions).
> Because of this, our bigger clusters are not able to converge on balance as 
> quickly for things like table skew, region load, etc. because the balancer 
> does not have enough time to "think".
> We have re-written the locality cost function to be incremental, meaning it 
> only recomputes cost based on the most recent region move proposed by the 
> balancer, rather than recomputing the cost across all regions/servers every 
> iteration.
> Further, we also cache the locality of every region on every server at the 
> beginning of the balancer's execution for both the LocalityBasedCostFunction 
> and the LocalityCandidateGenerator to reference. This way, they need not 
> collect all HDFS blocks of every region at each iteration of the balancer.
> The changes have been running in all 6 of our production clusters and all 4 
> QA clusters without issue. The speed improvements we noticed are massive. Our 
> big clusters now consider 20x more cluster configurations.
> One design decision I made is to consider locality cost as the difference 
> between the best locality that is possible given the current cluster state, 
> and the currently measured locality. The old locality computation would 
> measure the locality cost as the difference between the current locality and 
> 100% locality, but this new computation instead takes the difference between 
> the current locality for a given region and the best locality for that region 
> in the cluster.
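
A minimal, self-contained sketch (not the actual LocalityBasedCostFunction) of 
the idea described above: the cost is the gap between the best locality 
achievable for each region and its current locality, normalized to [0, 1], and 
a proposed move only applies a per-region delta instead of rescanning every 
region and server. All names and numbers below are invented for the example:
{code}
// Toy model only; locality values are precomputed fractions in [0, 1] of a
// region's blocks that are local to the server hosting it.
public class IncrementalLocalityCostSketch {

  private final double bestLocality;  // sum of each region's best achievable locality
  private double locality;            // normalized running locality score

  IncrementalLocalityCostSketch(double[] current, double[] bestPerRegion) {
    double best = 0, sum = 0;
    for (int i = 0; i < bestPerRegion.length; i++) {
      best += bestPerRegion[i];
      sum += current[i];
    }
    this.bestLocality = best;
    this.locality = best == 0 ? 0 : sum / best;
  }

  // Apply only the delta of moving one region between two servers, instead of
  // recomputing locality across the whole cluster.
  void onRegionMove(double oldServerLocality, double newServerLocality) {
    if (bestLocality == 0) {
      return;                          // nothing to normalize against
    }
    locality += (newServerLocality - oldServerLocality) / bestLocality;
  }

  double cost() {
    return 1.0 - locality;             // 0 = as good as possible
  }

  public static void main(String[] args) {
    IncrementalLocalityCostSketch c = new IncrementalLocalityCostSketch(
        new double[] {0.5, 0.9}, new double[] {1.0, 1.0});
    System.out.println(c.cost());      // ~0.3
    c.onRegionMove(0.5, 1.0);          // move region 0 to a fully local server
    System.out.println(c.cost());      // ~0.05
  }
}
{code}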



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-18263) Resolve NPE in backup Master UI when access to procedures.jsp

2017-06-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-18263:
--

Assignee: Shibin Zhang

> Resolve NPE in backup Master UI when access to procedures.jsp
> -
>
> Key: HBASE-18263
> URL: https://issues.apache.org/jira/browse/HBASE-18263
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 1.2.0, 2.0.0-alpha-1
>Reporter: Shibin Zhang
>Assignee: Shibin Zhang
>Priority: Trivial
> Attachments: HBASE-18263.patch
>
>
> When accessing procedures.jsp, an NPE occurs in the backup master UI:
> HTTP ERROR 500
> Problem accessing /procedures.jsp. Reason:
> INTERNAL_SERVER_ERROR
> Caused by:
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.generated.master.procedures_jsp._jspService(procedures_jsp.java:67)
>   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
>   at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1354)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> On the server side, only the active master initializes the procedureStore in 
> HMaster, so I think it would be better to remove the procedures.jsp link from 
> the backup Master UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18263) Resolve NPE in backup Master UI when access to procedures.jsp

2017-06-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061948#comment-16061948
 ] 

Ted Yu commented on HBASE-18263:


It just means that the test aborted, resulting in missing test output.
Not related to your patch.

+1 from me.

> Resolve NPE in backup Master UI when access to procedures.jsp
> -
>
> Key: HBASE-18263
> URL: https://issues.apache.org/jira/browse/HBASE-18263
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 1.2.0, 2.0.0-alpha-1
>Reporter: Shibin Zhang
>Priority: Trivial
> Attachments: HBASE-18263.patch
>
>
> When accessing procedures.jsp, an NPE occurs in the backup master UI:
> HTTP ERROR 500
> Problem accessing /procedures.jsp. Reason:
> INTERNAL_SERVER_ERROR
> Caused by:
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.generated.master.procedures_jsp._jspService(procedures_jsp.java:67)
>   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
>   at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1354)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> On the server side, only the active master initializes the procedureStore in 
> HMaster, so I think it would be better to remove the procedures.jsp link from 
> the backup Master UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18164) Much faster locality cost function and candidate generator

2017-06-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061945#comment-16061945
 ] 

Ted Yu commented on HBASE-18164:


Chia-Ping:
Do you mind pushing an addendum (to all related branches)?

Thanks

> Much faster locality cost function and candidate generator
> --
>
> Key: HBASE-18164
> URL: https://issues.apache.org/jira/browse/HBASE-18164
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: Kahlil Oppenheimer
>Assignee: Kahlil Oppenheimer
>Priority: Critical
> Fix For: 3.0.0, 1.4.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18164-00.patch, HBASE-18164-01.patch, 
> HBASE-18164-02.patch, HBASE-18164-04.patch, HBASE-18164-05.patch
>
>
> We noticed that the stochastic load balancer was not scaling well with 
> cluster size. That is to say, on our smaller clusters (~17 tables, ~12 
> region servers, ~5k regions), the balancer considers ~100,000 cluster 
> configurations in 60s per balancer run, but only ~5,000 per 60s on our bigger 
> clusters (~82 tables, ~160 region servers, ~13k regions).
> Because of this, our bigger clusters are not able to converge on balance as 
> quickly for things like table skew, region load, etc. because the balancer 
> does not have enough time to "think".
> We have re-written the locality cost function to be incremental, meaning it 
> only recomputes cost based on the most recent region move proposed by the 
> balancer, rather than recomputing the cost across all regions/servers every 
> iteration.
> Further, we also cache the locality of every region on every server at the 
> beginning of the balancer's execution for both the LocalityBasedCostFunction 
> and the LocalityCandidateGenerator to reference. This way, they need not 
> collect all HDFS blocks of every region at each iteration of the balancer.
> The changes have been running in all 6 of our production clusters and all 4 
> QA clusters without issue. The speed improvements we noticed are massive. Our 
> big clusters now consider 20x more cluster configurations.
> One design decision I made is to consider locality cost as the difference 
> between the best locality that is possible given the current cluster state, 
> and the currently measured locality. The old locality computation would 
> measure the locality cost as the difference between the current locality and 
> 100% locality, but this new computation instead takes the difference between 
> the current locality for a given region and the best locality for that region 
> in the cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-18164) Much faster locality cost function and candidate generator

2017-06-24 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061917#comment-16061917
 ] 

Chia-Ping Tsai edited comment on HBASE-18164 at 6/24/17 12:08 PM:
--

I guess that the error is due to the NaN.
{noformat}
balancer.StochasticLoadBalancer(433): Could not find a better load balance 
plan.  Tried 43200 different configurations in 185ms, and did not find anything 
with a computed cost less than NaN
{noformat}
TestRegionRebalancing passes if I skip the normalization when the value of 
bestLocality is zero.
{code}
  // We normalize locality to be a score between 0 and 1.0 representing how 
good it
  // is compared to how good it could be
  locality /= bestLocality;
{code}
{code}
  double localityDelta = getWeightedLocality(region, newEntity) - 
getWeightedLocality(region, oldEntity);
  double normalizedDelta = localityDelta / bestLocality;
  locality += normalizedDelta;
{code}
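
Purely as a sketch of that skip (this is not the committed addendum), the two 
fragments above could route through a guarded helper so a zero bestLocality 
never reaches the division:
{code}
// Toy helper only: mirrors the two fragments above with a zero guard so the
// cost never becomes NaN when there is no locality information at all.
public class GuardedNormalizationSketch {

  static double normalize(double locality, double bestLocality) {
    return bestLocality == 0 ? 0 : locality / bestLocality;
  }

  static double applyDelta(double locality, double localityDelta, double bestLocality) {
    return bestLocality == 0 ? locality : locality + localityDelta / bestLocality;
  }

  public static void main(String[] args) {
    System.out.println(normalize(0.7, 0.0));          // 0.0, not NaN
    System.out.println(applyDelta(0.4, 0.1, 0.0));    // 0.4, unchanged, not NaN
    System.out.println(applyDelta(0.4, 0.1, 2.0));    // ~0.45
  }
}
{code}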



was (Author: chia7712):
I guess that the error is due to the NaN.
{noformat}
balancer.StochasticLoadBalancer(433): Could not find a better load balance 
plan.  Tried 43200 different configurations in 185ms, and did not find anything 
with a computed cost less than NaN
{noformat}
If I skip the normalization when bestLocality is zero, TestRegionRebalancing 
passes.
{code}
  // We normalize locality to be a score between 0 and 1.0 representing how 
good it
  // is compared to how good it could be
  locality /= bestLocality;
{code}
{code}
  double localityDelta = getWeightedLocality(region, newEntity) - 
getWeightedLocality(region, oldEntity);
  double normalizedDelta = localityDelta / bestLocality;
  locality += normalizedDelta;
{code}


> Much faster locality cost function and candidate generator
> --
>
> Key: HBASE-18164
> URL: https://issues.apache.org/jira/browse/HBASE-18164
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: Kahlil Oppenheimer
>Assignee: Kahlil Oppenheimer
>Priority: Critical
> Fix For: 3.0.0, 1.4.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18164-00.patch, HBASE-18164-01.patch, 
> HBASE-18164-02.patch, HBASE-18164-04.patch, HBASE-18164-05.patch
>
>
> We noticed that the stochastic load balancer was not scaling well with 
> cluster size. That is to say, on our smaller clusters (~17 tables, ~12 
> region servers, ~5k regions), the balancer considers ~100,000 cluster 
> configurations in 60s per balancer run, but only ~5,000 per 60s on our bigger 
> clusters (~82 tables, ~160 region servers, ~13k regions).
> Because of this, our bigger clusters are not able to converge on balance as 
> quickly for things like table skew, region load, etc. because the balancer 
> does not have enough time to "think".
> We have re-written the locality cost function to be incremental, meaning it 
> only recomputes cost based on the most recent region move proposed by the 
> balancer, rather than recomputing the cost across all regions/servers every 
> iteration.
> Further, we also cache the locality of every region on every server at the 
> beginning of the balancer's execution for both the LocalityBasedCostFunction 
> and the LocalityCandidateGenerator to reference. This way, they need not 
> collect all HDFS blocks of every region at each iteration of the balancer.
> The changes have been running in all 6 of our production clusters and all 4 
> QA clusters without issue. The speed improvements we noticed are massive. Our 
> big clusters now consider 20x more cluster configurations.
> One design decision I made is to consider locality cost as the difference 
> between the best locality that is possible given the current cluster state, 
> and the currently measured locality. The old locality computation would 
> measure the locality cost as the difference between the current locality and 
> 100% locality, but this new computation instead takes the difference between 
> the current locality for a given region and the best locality for that region 
> in the cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18164) Much faster locality cost function and candidate generator

2017-06-24 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061917#comment-16061917
 ] 

Chia-Ping Tsai commented on HBASE-18164:


I guess that the error is due to the NaN.
{noformat}
balancer.StochasticLoadBalancer(433): Could not find a better load balance 
plan.  Tried 43200 different configurations in 185ms, and did not find anything 
with a computed cost less than NaN
{noformat}
If I skip the normalization when bestLocality is zero, TestRegionRebalancing 
passes.
{code}
  // We normalize locality to be a score between 0 and 1.0 representing how 
good it
  // is compared to how good it could be
  locality /= bestLocality;
{code}
{code}
  double localityDelta = getWeightedLocality(region, newEntity) - 
getWeightedLocality(region, oldEntity);
  double normalizedDelta = localityDelta / bestLocality;
  locality += normalizedDelta;
{code}


> Much faster locality cost function and candidate generator
> --
>
> Key: HBASE-18164
> URL: https://issues.apache.org/jira/browse/HBASE-18164
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: Kahlil Oppenheimer
>Assignee: Kahlil Oppenheimer
>Priority: Critical
> Fix For: 3.0.0, 1.4.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18164-00.patch, HBASE-18164-01.patch, 
> HBASE-18164-02.patch, HBASE-18164-04.patch, HBASE-18164-05.patch
>
>
> We noticed that the stochastic load balancer was not scaling well with 
> cluster size. That is to say, on our smaller clusters (~17 tables, ~12 
> region servers, ~5k regions), the balancer considers ~100,000 cluster 
> configurations in 60s per balancer run, but only ~5,000 per 60s on our bigger 
> clusters (~82 tables, ~160 region servers, ~13k regions).
> Because of this, our bigger clusters are not able to converge on balance as 
> quickly for things like table skew, region load, etc. because the balancer 
> does not have enough time to "think".
> We have re-written the locality cost function to be incremental, meaning it 
> only recomputes cost based on the most recent region move proposed by the 
> balancer, rather than recomputing the cost across all regions/servers every 
> iteration.
> Further, we also cache the locality of every region on every server at the 
> beginning of the balancer's execution for both the LocalityBasedCostFunction 
> and the LocalityCandidateGenerator to reference. This way, they need not 
> collect all HDFS blocks of every region at each iteration of the balancer.
> The changes have been running in all 6 of our production clusters and all 4 
> QA clusters without issue. The speed improvements we noticed are massive. Our 
> big clusters now consider 20x more cluster configurations.
> One design decision I made is to consider locality cost as the difference 
> between the best locality that is possible given the current cluster state, 
> and the currently measured locality. The old locality computation would 
> measure the locality cost as the difference between the current locality and 
> 100% locality, but this new computation instead takes the difference between 
> the current locality for a given region and the best locality for that region 
> in the cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-18263) Resolve NPE in backup Master UI when access to procedures.jsp

2017-06-24 Thread Shibin Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061865#comment-16061865
 ] 

Shibin Zhang edited comment on HBASE-18263 at 6/24/17 8:46 AM:
---

[~te...@apache.org], it may be unrelated to my patch. Could you help me see 
why:

https://builds.apache.org/job/PreCommit-HBASE-Build/7311/testReport/ 
Failed to read test report file 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/hbase-server/target/surefire-reports/TEST-org.apache.hadoop.hbase.replication.TestReplicationKillMasterRSCompressed.xml
org.dom4j.DocumentException: Error on line 167 of document 
file:///home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/hbase-server/target/surefire-reports/TEST-org.apache.hadoop.hbase.replication.TestReplicationKillMasterRSCompressed.xml
 : XML document structures must start and end within the same entity. Nested 
exception: XML document structures must start and end within the same entity.




was (Author: zhangshibin):
Ted Yu, it may be unrelated to my patch. Could you help me see why:

https://builds.apache.org/job/PreCommit-HBASE-Build/7311/testReport/ 
Failed to read test report file 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/hbase-server/target/surefire-reports/TEST-org.apache.hadoop.hbase.replication.TestReplicationKillMasterRSCompressed.xml
org.dom4j.DocumentException: Error on line 167 of document 
file:///home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/hbase-server/target/surefire-reports/TEST-org.apache.hadoop.hbase.replication.TestReplicationKillMasterRSCompressed.xml
 : XML document structures must start and end within the same entity. Nested 
exception: XML document structures must start and end within the same entity.



> Resolve NPE in backup Master UI when access to procedures.jsp
> -
>
> Key: HBASE-18263
> URL: https://issues.apache.org/jira/browse/HBASE-18263
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 1.2.0, 2.0.0-alpha-1
>Reporter: Shibin Zhang
>Priority: Trivial
> Attachments: HBASE-18263.patch
>
>
> When accessing procedures.jsp, an NPE occurs in the backup master UI:
> HTTP ERROR 500
> Problem accessing /procedures.jsp. Reason:
> INTERNAL_SERVER_ERROR
> Caused by:
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.generated.master.procedures_jsp._jspService(procedures_jsp.java:67)
>   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
>   at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1354)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> On the server side, only the active master initializes the procedureStore in 
> HMaster, so I think it would be better to remove the procedures.jsp link from 
> the backup Master UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18263) Resolve NPE in backup Master UI when access to procedures.jsp

2017-06-24 Thread Shibin Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061865#comment-16061865
 ] 

Shibin Zhang commented on HBASE-18263:
--

Ted Yu, it may be unrelated to my patch. Could you help me see why:

https://builds.apache.org/job/PreCommit-HBASE-Build/7311/testReport/ 
Failed to read test report file 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/hbase-server/target/surefire-reports/TEST-org.apache.hadoop.hbase.replication.TestReplicationKillMasterRSCompressed.xml
org.dom4j.DocumentException: Error on line 167 of document 
file:///home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/hbase-server/target/surefire-reports/TEST-org.apache.hadoop.hbase.replication.TestReplicationKillMasterRSCompressed.xml
 : XML document structures must start and end within the same entity. Nested 
exception: XML document structures must start and end within the same entity.



> Resolve NPE in backup Master UI when access to procedures.jsp
> -
>
> Key: HBASE-18263
> URL: https://issues.apache.org/jira/browse/HBASE-18263
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 1.2.0, 2.0.0-alpha-1
>Reporter: Shibin Zhang
>Priority: Trivial
> Attachments: HBASE-18263.patch
>
>
> When accessing procedures.jsp, an NPE occurs in the backup master UI:
> HTTP ERROR 500
> Problem accessing /procedures.jsp. Reason:
> INTERNAL_SERVER_ERROR
> Caused by:
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.generated.master.procedures_jsp._jspService(procedures_jsp.java:67)
>   at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
>   at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1354)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> On the server side, only the active master initializes the procedureStore in 
> HMaster, so I think it would be better to remove the procedures.jsp link from 
> the backup Master UI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18259) HBase book link to "beginner" issues includes resolved issues

2017-06-24 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061858#comment-16061858
 ] 

Chia-Ping Tsai commented on HBASE-18259:


Test failures unrelated. +1

> HBase book link to "beginner" issues includes resolved issues
> -
>
> Key: HBASE-18259
> URL: https://issues.apache.org/jira/browse/HBASE-18259
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Mike Drob
>Assignee: Peter Somogyi
>  Labels: beginner
> Attachments: HBASE-18259.master.001.patch
>
>
> The link at http://hbase.apache.org/book.html#getting.involved for beginner 
> issues is 
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20HBASE%20AND%20labels%20in%20(beginner)
> but this includes resolved issues as well, which is not useful to folks 
> looking for new issues to cut their teeth on.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-12794) Guidelines for filing JIRA issues

2017-06-24 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061833#comment-16061833
 ] 

Chia-Ping Tsai commented on HBASE-12794:


{quote}
+** What happens or doesn't happen
+** How does it impact you
+** How can someone else reproduce it
+** What would "fixed" look like?
{quote}
Consider adding a question mark to each line.

> Guidelines for filing JIRA issues
> -
>
> Key: HBASE-12794
> URL: https://issues.apache.org/jira/browse/HBASE-12794
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 2.0.0-alpha-1
>Reporter: stack
>Assignee: Misty Stanley-Jones
> Fix For: 2.0.0
>
> Attachments: HBASE-12794.patch
>
>
> Following on from Andrew's JIRA year-end cleaning spree, lets get some 
> guidelines on filing issues e.g. fill out all pertinent fields, add context 
> and provenance, add value (i.e. triage), don't file issues that are nought 
> but repeat of info available elsewhere (build box or mailing list), be 
> reluctant filing issues that don't have a resource behind them, don't file 
> issues on behalf of others, don't split fixes across multiple issues (because 
> there are poor folks coming behind us trying to backport our mess and 
> piecemeal makes their jobs harder), and so on.
> The guidelines are not meant to put a chill on the opening of issues when 
> problems are found, especially not for new contributors. They are more meant 
> for quoting to veteran contributors who continue to file issues in violation 
> of what was thought a common understanding; rather than explain each time why 
> an issue has been marked invalid, it would be better if we can quote chapter 
> and verse from the refguide.
> Dump any suggestion in here and I'll wind them up as a patch that I'll run by 
> dev mailing list to get consensus before committing.
> Here is a running google doc if you'd like to add comment: 
> https://docs.google.com/document/d/1p3ArVLcnQnifk6ZsF635qWBhMmfTUJsISyK15DXnam0/edit?usp=sharing



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18175) Add hbase-spark integration test into hbase-spark-it

2017-06-24 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061824#comment-16061824
 ] 

Yi Liang commented on HBASE-18175:
--

Hi Mike and Sean
  In the new patch, I add hbase-spark-it as a top-level module directly under 
the hbase module, but I put the hbase-spark-it folder under hbase-spark.

> Add hbase-spark integration test into hbase-spark-it
> 
>
> Key: HBASE-18175
> URL: https://issues.apache.org/jira/browse/HBASE-18175
> Project: HBase
>  Issue Type: Test
>  Components: spark
>Reporter: Yi Liang
>Assignee: Yi Liang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: hbase-18175-master-v2.patch, hbase-18175-v1.patch
>
>
> After HBASE-17574, all tests under hbase-spark are regarded as unit tests, and 
> this jira adds an integration test for hbase-spark into hbase-it.  This patch 
> runs the same tests as mapreduce.IntegrationTestBulkLoad, just changing 
> mapreduce to spark.
> test in Maven:
> mvn verify -Dit.test=IntegrationTestSparkBulkLoad
> test on cluster:
> spark-submit --class 
> org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad 
> HBASE_HOME/lib/hbase-it-2.0.0-SNAPSHOT-tests.jar 
> -Dhbase.spark.bulkload.chainlength=50 -m slowDeterministic



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18175) Add hbase-spark integration test into hbase-spark-it

2017-06-24 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-18175:
-
Attachment: hbase-18175-master-v2.patch

> Add hbase-spark integration test into hbase-spark-it
> 
>
> Key: HBASE-18175
> URL: https://issues.apache.org/jira/browse/HBASE-18175
> Project: HBase
>  Issue Type: Test
>  Components: spark
>Reporter: Yi Liang
>Assignee: Yi Liang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: hbase-18175-master-v2.patch, hbase-18175-v1.patch
>
>
> After HBASE-17574, all tests under hbase-spark are regarded as unit tests, and 
> this jira adds an integration test for hbase-spark into hbase-it.  This patch 
> runs the same tests as mapreduce.IntegrationTestBulkLoad, just changing 
> mapreduce to spark.
> test in Maven:
> mvn verify -Dit.test=IntegrationTestSparkBulkLoad
> test on cluster:
> spark-submit --class 
> org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad 
> HBASE_HOME/lib/hbase-it-2.0.0-SNAPSHOT-tests.jar 
> -Dhbase.spark.bulkload.chainlength=50 -m slowDeterministic



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18175) Add hbase-spark integration test into hbase-spark-it

2017-06-24 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-18175:
-
Summary: Add hbase-spark integration test into hbase-spark-it  (was: Add 
hbase-spark integration test into hbase-it)

> Add hbase-spark integration test into hbase-spark-it
> 
>
> Key: HBASE-18175
> URL: https://issues.apache.org/jira/browse/HBASE-18175
> Project: HBase
>  Issue Type: Test
>  Components: spark
>Reporter: Yi Liang
>Assignee: Yi Liang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: hbase-18175-v1.patch
>
>
> After HBASE-17574, all tests under hbase-spark are regarded as unit tests, and 
> this jira adds an integration test for hbase-spark into hbase-it.  This patch 
> runs the same tests as mapreduce.IntegrationTestBulkLoad, just changing 
> mapreduce to spark.
> test in Maven:
> mvn verify -Dit.test=IntegrationTestSparkBulkLoad
> test on cluster:
> spark-submit --class 
> org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad 
> HBASE_HOME/lib/hbase-it-2.0.0-SNAPSHOT-tests.jar 
> -Dhbase.spark.bulkload.chainlength=50 -m slowDeterministic



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18234) Revisit the async admin api

2017-06-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061820#comment-16061820
 ] 

Hadoop QA commented on HBASE-18234:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
29s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
11s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 50s 
{color} | {color:red} hbase-client in master has 4 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 49s 
{color} | {color:red} hbase-client in master has 4 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 37s 
{color} | {color:red} hbase-server in master has 10 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 45s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha3. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
42s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 16s 
{color} | {color:red} hbase-client generated 7 new + 1 unchanged - 1 fixed = 8 
total (was 2) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 24s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 113m 5s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
39s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 165m 55s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.TestWarmupRegion |
|   | hadoop.hbase.client.TestFromClientSideWithCoprocessor |
|   | hadoop.hbase.client.TestMobSnapshotCloneIndependence |
|   | hadoop.hbase.regionserver.TestEndToEndSplitTransaction |
|   | hadoop.hbase.regionserver.TestScannerWithBulkload |
|   | hadoop.hbase.regionserver.TestSplitTransactionOnCluster |
|   | hadoop.hbase.util.TestFromClientSide3WoUnsafe |
|   | hadoop.hbase.regionserver.compactions.TestFIFOCompactionPolicy |
|   | hadoop.hbase.coprocessor.TestRegionObserverInterface |
|   | hadoop.hbase.master.TestAssignmentListener |
|   | hadoop.hbase.regionserver.TestTags |
|   |