[jira] [Commented] (HBASE-20886) [Auth] Support keytab login in hbase client

2018-07-14 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16544344#comment-16544344
 ] 

Sean Busbey commented on HBASE-20886:
-

I thought we did this via AuthUtil and documented it in the ref guide?

> [Auth] Support keytab login in hbase client
> ---
>
> Key: HBASE-20886
> URL: https://issues.apache.org/jira/browse/HBASE-20886
> Project: HBase
>  Issue Type: Improvement
>  Components: asyncclient, Client, security
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Critical
> Attachments: HBASE-20886.master.001.patch
>
>
> There are lots of questions on the user mailing list and Slack channel about 
> how to connect to a kerberized HBase cluster through the hbase-client API.
> {{hbase.client.keytab.file}} and {{hbase.client.keytab.principal}} already 
> exist in the code base, but they are only used in {{Canary}}.
> This issue makes use of these two configs to support client-side keytab-based 
> login. Once it is resolved, hbase-client should connect directly to a 
> kerberized cluster without any code changes, as long as 
> {{hbase.client.keytab.file}} and {{hbase.client.keytab.principal}} are 
> specified.
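If this lands as described, pointing a client at a keytab should take nothing more than two entries in the client-side configuration. A minimal hbase-site.xml sketch (the keytab path and principal below are placeholders, not values from this issue):

```xml
<!-- Client-side hbase-site.xml sketch; path and principal are examples. -->
<property>
  <name>hbase.client.keytab.file</name>
  <value>/etc/security/keytabs/hbase-client.keytab</value>
</property>
<property>
  <name>hbase.client.keytab.principal</name>
  <value>appuser@EXAMPLE.COM</value>
</property>
```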



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20649) Validate HFiles do not have PREFIX_TREE DataBlockEncoding

2018-07-13 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543987#comment-16543987
 ] 

Sean Busbey commented on HBASE-20649:
-

I've got this ready to go locally. FYI [~balazs.meszaros] I've got this staged 
with [~psomogyi] as author and you as amending-author.

[~zyork] / [~Apache9] let me know if y'all would like to be listed as 
signed-off-by on this in addition to me. I'm not sure if your above supportive 
statements should be taken as specific reviews.

> Validate HFiles do not have PREFIX_TREE DataBlockEncoding
> -
>
> Key: HBASE-20649
> URL: https://issues.apache.org/jira/browse/HBASE-20649
> Project: HBase
>  Issue Type: New Feature
>Reporter: Peter Somogyi
>Assignee: Balazs Meszaros
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HBASE-20649.master.001.patch, 
> HBASE-20649.master.002.patch, HBASE-20649.master.003.patch, 
> HBASE-20649.master.004.patch, HBASE-20649.master.005.patch, 
> HBASE-20649.master.006.patch
>
>
> HBASE-20592 adds a tool to check that column families on the cluster do not 
> have PREFIX_TREE encoding.
> Since it is possible that the DataBlockEncoding was already changed but the 
> HFiles have not been rewritten yet, we need a tool that can verify the 
> content of the HFiles in the cluster.
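For operators, checks of this kind are exposed through the pre-upgrade validator. A hedged usage sketch (the subcommand names are assumptions based on this issue and HBASE-20592, not confirmed here):

```shell
# Sketch: validate a cluster before upgrading off PREFIX_TREE.
# Subcommand names are assumptions from the issue descriptions.
hbase pre-upgrade validate-dbe     # check column family configuration (HBASE-20592)
hbase pre-upgrade validate-hfile   # verify the HFiles themselves (this issue)
```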





[jira] [Commented] (HBASE-20649) Validate HFiles do not have PREFIX_TREE DataBlockEncoding

2018-07-13 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543356#comment-16543356
 ] 

Sean Busbey commented on HBASE-20649:
-

All the failed tests are timeouts. I'll try rerunning precommit, since it's not 
clear to me how this patchset could impact those jobs.

The docs change in v6 works well enough for me. If anyone else would like to 
see more, please give a shout.

> Validate HFiles do not have PREFIX_TREE DataBlockEncoding
> -
>
> Key: HBASE-20649
> URL: https://issues.apache.org/jira/browse/HBASE-20649
> Project: HBase
>  Issue Type: New Feature
>Reporter: Peter Somogyi
>Assignee: Balazs Meszaros
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HBASE-20649.master.001.patch, 
> HBASE-20649.master.002.patch, HBASE-20649.master.003.patch, 
> HBASE-20649.master.004.patch, HBASE-20649.master.005.patch, 
> HBASE-20649.master.006.patch
>
>
> HBASE-20592 adds a tool to check that column families on the cluster do not 
> have PREFIX_TREE encoding.
> Since it is possible that the DataBlockEncoding was already changed but the 
> HFiles have not been rewritten yet, we need a tool that can verify the 
> content of the HFiles in the cluster.





[jira] [Updated] (HBASE-20305) Add option to SyncTable that skip deletes on target cluster

2018-07-12 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20305:

Fix Version/s: 2.2.0
   1.5.0

> Add option to SyncTable that skip deletes on target cluster
> ---
>
> Key: HBASE-20305
> URL: https://issues.apache.org/jira/browse/HBASE-20305
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 2.0.0-alpha-4
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.2.0
>
> Attachments: 0001-HBASE-20305.master.001.patch, 
> HBASE-20305.master.002.patch
>
>
> We had a situation where two clusters with active-active replication got out 
> of sync, but both had data that should be kept. The tables in question never 
> have data deleted, but ingestion had happened on the two different clusters, 
> and some rows had even been updated.
> In this scenario, a cell that is present on only one of the clusters should 
> not be deleted, but replayed on the other. Also, for cells with the same 
> identifier but different values, the most recent value should be kept.
> The current version of SyncTable is not applicable here, because it would 
> simply copy the whole state from source to target, losing any additional 
> rows that exist only on the target, as well as cell values that received a 
> more recent update. This could be solved by adding an option to skip deletes 
> in SyncTable. That way, the additional cells not present on the source would 
> still be kept. For cells with the same identifier but different values, it 
> would just perform a Put with the cell version from the source, and client 
> scans would still fetch the value with the most recent timestamp.
> I'm attaching a patch with this additional option shortly. Please share your 
> thoughts.
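The workflow above can be sketched with HashTable/SyncTable. The --doDeletes switch is the new option this issue proposes; all paths, quorum strings, and table names are illustrative:

```shell
# 1) On the source cluster, hash the source table.
hbase org.apache.hadoop.hbase.mapreduce.HashTable my_table /hashes/my_table

# 2) On the target cluster, sync from the source but skip Deletes, so cells
#    that exist only on the target survive. Differing cells get a Put with the
#    source version; scans still return whichever value has the newest timestamp.
hbase org.apache.hadoop.hbase.mapreduce.SyncTable \
  --doDeletes=false \
  --sourcezkcluster=src-zk1,src-zk2,src-zk3:2181:/hbase \
  hdfs://src-nn:8020/hashes/my_table my_table my_table
```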





[jira] [Commented] (HBASE-20305) Add option to SyncTable that skip deletes on target cluster

2018-07-12 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541892#comment-16541892
 ] 

Sean Busbey commented on HBASE-20305:
-

Tentatively adding to the scope of the next minor releases. If I don't hear an 
objection, I'll backport this later this week.

> Add option to SyncTable that skip deletes on target cluster
> ---
>
> Key: HBASE-20305
> URL: https://issues.apache.org/jira/browse/HBASE-20305
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 2.0.0-alpha-4
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.2.0
>
> Attachments: 0001-HBASE-20305.master.001.patch, 
> HBASE-20305.master.002.patch
>
>
> We had a situation where two clusters with active-active replication got out 
> of sync, but both had data that should be kept. The tables in question never 
> have data deleted, but ingestion had happened on the two different clusters, 
> and some rows had even been updated.
> In this scenario, a cell that is present on only one of the clusters should 
> not be deleted, but replayed on the other. Also, for cells with the same 
> identifier but different values, the most recent value should be kept.
> The current version of SyncTable is not applicable here, because it would 
> simply copy the whole state from source to target, losing any additional 
> rows that exist only on the target, as well as cell values that received a 
> more recent update. This could be solved by adding an option to skip deletes 
> in SyncTable. That way, the additional cells not present on the source would 
> still be kept. For cells with the same identifier but different values, it 
> would just perform a Put with the cell version from the source, and client 
> scans would still fetch the value with the most recent timestamp.
> I'm attaching a patch with this additional option shortly. Please share your 
> thoughts.





[jira] [Reopened] (HBASE-20305) Add option to SyncTable that skip deletes on target cluster

2018-07-12 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reopened HBASE-20305:
-

Anyone opposed to this getting pulled back into earlier release lines? It seems 
like a solid, low-risk addition.

> Add option to SyncTable that skip deletes on target cluster
> ---
>
> Key: HBASE-20305
> URL: https://issues.apache.org/jira/browse/HBASE-20305
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 2.0.0-alpha-4
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: 0001-HBASE-20305.master.001.patch, 
> HBASE-20305.master.002.patch
>
>
> We had a situation where two clusters with active-active replication got out 
> of sync, but both had data that should be kept. The tables in question never 
> have data deleted, but ingestion had happened on the two different clusters, 
> and some rows had even been updated.
> In this scenario, a cell that is present on only one of the clusters should 
> not be deleted, but replayed on the other. Also, for cells with the same 
> identifier but different values, the most recent value should be kept.
> The current version of SyncTable is not applicable here, because it would 
> simply copy the whole state from source to target, losing any additional 
> rows that exist only on the target, as well as cell values that received a 
> more recent update. This could be solved by adding an option to skip deletes 
> in SyncTable. That way, the additional cells not present on the source would 
> still be kept. For cells with the same identifier but different values, it 
> would just perform a Put with the cell version from the source, and client 
> scans would still fetch the value with the most recent timestamp.
> I'm attaching a patch with this additional option shortly. Please share your 
> thoughts.





[jira] [Commented] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters

2018-07-12 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541887#comment-16541887
 ] 

Sean Busbey commented on HBASE-20586:
-

I'm not sure it's possible to get a unit test going for this, especially if 
MiniKDC can't do it. If we did this as an integration test outside of Maven 
that manually runs two different MiniKDCs, could we make it work?

> SyncTable tool: Add support for cross-realm remote clusters
> ---
>
> Key: HBASE-20586
> URL: https://issues.apache.org/jira/browse/HBASE-20586
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce, Replication
>Affects Versions: 1.2.0, 2.0.0
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HBASE-20586.master.001.patch
>
>
> One possible scenario for HashTable/SyncTable is synchronizing different 
> clusters, for instance when replication has been enabled but data already 
> existed, or when replication issues have caused long lags.
> For secured clusters under different Kerberos realms (with cross-realm trust 
> properly set up), though, the current SyncTable version fails to 
> authenticate with the remote cluster when trying to read HashTable outputs 
> (when *sourcehashdir* is remote) and also when trying to read table data on 
> the remote cluster (when *sourcezkcluster* is remote).
> The HDFS error looks like this:
> {noformat}
> INFO mapreduce.Job: Task Id : attempt_1524358175778_105392_m_00_0, Status 
> : FAILED
> Error: java.io.IOException: Failed on local exception: java.io.IOException: 
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[TOKEN, KERBEROS]; Host Details : local host is: "local-host/1.1.1.1"; 
> destination host is: "remote-nn":8020;
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1506)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1439)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>         at com.sun.proxy.$Proxy13.getBlockLocations(Unknown Source)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:256)
> ...
>         at 
> org.apache.hadoop.hbase.mapreduce.HashTable$TableHash.readPropertiesFile(HashTable.java:144)
>         at 
> org.apache.hadoop.hbase.mapreduce.HashTable$TableHash.read(HashTable.java:105)
>         at 
> org.apache.hadoop.hbase.mapreduce.SyncTable$SyncMapper.setup(SyncTable.java:188)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
> ...
> Caused by: java.io.IOException: 
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[TOKEN, KERBEROS]{noformat}
> The above can be sorted out if the SyncTable job acquires a delegation token 
> for the remote NameNode. Once HDFS authentication is done, it is also 
> necessary to authenticate against the remote HBase cluster, as the error 
> below shows:
> {noformat}
> INFO mapreduce.Job: Task Id : attempt_1524358175778_172414_m_00_0, Status 
> : FAILED
> Error: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get 
> the location
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:326)
> ...
> at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:867)
> at 
> org.apache.hadoop.hbase.mapreduce.SyncTable$SyncMapper.syncRange(SyncTable.java:331)
> ...
> Caused by: java.io.IOException: Could not set up IO Streams to 
> remote-rs-host/1.1.1.2:60020
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:786)
> ...
> Caused by: java.lang.RuntimeException: SASL authentication failed. The most 
> likely cause is missing or invalid credentials. Consider 'kinit'.
> ...
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
> ...{noformat}
> Fixing the above requires additional authentication logic against the remote 
> HBase cluster.
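For context on "cross-realm properly set": the trust itself is configured outside HBase, typically in krb5.conf on the hosts running the job. A minimal sketch, with realm and KDC names as placeholders:

```
# krb5.conf sketch: allow principals from REALM-A to reach services in REALM-B.
# Also requires matching krbtgt/REALM-B.EXAMPLE.COM@REALM-A.EXAMPLE.COM
# principals created on both KDCs.
[realms]
  REALM-A.EXAMPLE.COM = { kdc = kdc-a.example.com }
  REALM-B.EXAMPLE.COM = { kdc = kdc-b.example.com }

[capaths]
  REALM-A.EXAMPLE.COM = {
    REALM-B.EXAMPLE.COM = .
  }
```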





[jira] [Updated] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters

2018-07-12 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20586:

Affects Version/s: 1.2.0
   2.0.0

> SyncTable tool: Add support for cross-realm remote clusters
> ---
>
> Key: HBASE-20586
> URL: https://issues.apache.org/jira/browse/HBASE-20586
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce, Replication
>Affects Versions: 1.2.0, 2.0.0
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HBASE-20586.master.001.patch
>
>
> One possible scenario for HashTable/SyncTable is synchronizing different 
> clusters, for instance when replication has been enabled but data already 
> existed, or when replication issues have caused long lags.
> For secured clusters under different Kerberos realms (with cross-realm trust 
> properly set up), though, the current SyncTable version fails to 
> authenticate with the remote cluster when trying to read HashTable outputs 
> (when *sourcehashdir* is remote) and also when trying to read table data on 
> the remote cluster (when *sourcezkcluster* is remote).
> The HDFS error looks like this:
> {noformat}
> INFO mapreduce.Job: Task Id : attempt_1524358175778_105392_m_00_0, Status 
> : FAILED
> Error: java.io.IOException: Failed on local exception: java.io.IOException: 
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[TOKEN, KERBEROS]; Host Details : local host is: "local-host/1.1.1.1"; 
> destination host is: "remote-nn":8020;
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1506)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1439)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>         at com.sun.proxy.$Proxy13.getBlockLocations(Unknown Source)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:256)
> ...
>         at 
> org.apache.hadoop.hbase.mapreduce.HashTable$TableHash.readPropertiesFile(HashTable.java:144)
>         at 
> org.apache.hadoop.hbase.mapreduce.HashTable$TableHash.read(HashTable.java:105)
>         at 
> org.apache.hadoop.hbase.mapreduce.SyncTable$SyncMapper.setup(SyncTable.java:188)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
> ...
> Caused by: java.io.IOException: 
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[TOKEN, KERBEROS]{noformat}
> The above can be sorted out if the SyncTable job acquires a delegation token 
> for the remote NameNode. Once HDFS authentication is done, it is also 
> necessary to authenticate against the remote HBase cluster, as the error 
> below shows:
> {noformat}
> INFO mapreduce.Job: Task Id : attempt_1524358175778_172414_m_00_0, Status 
> : FAILED
> Error: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get 
> the location
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:326)
> ...
> at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:867)
> at 
> org.apache.hadoop.hbase.mapreduce.SyncTable$SyncMapper.syncRange(SyncTable.java:331)
> ...
> Caused by: java.io.IOException: Could not set up IO Streams to 
> remote-rs-host/1.1.1.2:60020
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:786)
> ...
> Caused by: java.lang.RuntimeException: SASL authentication failed. The most 
> likely cause is missing or invalid credentials. Consider 'kinit'.
> ...
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
> ...{noformat}
> Fixing the above requires additional authentication logic against the remote 
> HBase cluster.





[jira] [Updated] (HBASE-20586) SyncTable tool: Add support for cross-realm remote clusters

2018-07-12 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20586:

Status: Patch Available  (was: Open)

> SyncTable tool: Add support for cross-realm remote clusters
> ---
>
> Key: HBASE-20586
> URL: https://issues.apache.org/jira/browse/HBASE-20586
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce, Replication
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HBASE-20586.master.001.patch
>
>
> One possible scenario for HashTable/SyncTable is synchronizing different 
> clusters, for instance when replication has been enabled but data already 
> existed, or when replication issues have caused long lags.
> For secured clusters under different Kerberos realms (with cross-realm trust 
> properly set up), though, the current SyncTable version fails to 
> authenticate with the remote cluster when trying to read HashTable outputs 
> (when *sourcehashdir* is remote) and also when trying to read table data on 
> the remote cluster (when *sourcezkcluster* is remote).
> The HDFS error looks like this:
> {noformat}
> INFO mapreduce.Job: Task Id : attempt_1524358175778_105392_m_00_0, Status 
> : FAILED
> Error: java.io.IOException: Failed on local exception: java.io.IOException: 
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[TOKEN, KERBEROS]; Host Details : local host is: "local-host/1.1.1.1"; 
> destination host is: "remote-nn":8020;
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1506)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1439)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>         at com.sun.proxy.$Proxy13.getBlockLocations(Unknown Source)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:256)
> ...
>         at 
> org.apache.hadoop.hbase.mapreduce.HashTable$TableHash.readPropertiesFile(HashTable.java:144)
>         at 
> org.apache.hadoop.hbase.mapreduce.HashTable$TableHash.read(HashTable.java:105)
>         at 
> org.apache.hadoop.hbase.mapreduce.SyncTable$SyncMapper.setup(SyncTable.java:188)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
> ...
> Caused by: java.io.IOException: 
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[TOKEN, KERBEROS]{noformat}
> The above can be sorted out if the SyncTable job acquires a delegation token 
> for the remote NameNode. Once HDFS authentication is done, it is also 
> necessary to authenticate against the remote HBase cluster, as the error 
> below shows:
> {noformat}
> INFO mapreduce.Job: Task Id : attempt_1524358175778_172414_m_00_0, Status 
> : FAILED
> Error: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get 
> the location
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:326)
> ...
> at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:867)
> at 
> org.apache.hadoop.hbase.mapreduce.SyncTable$SyncMapper.syncRange(SyncTable.java:331)
> ...
> Caused by: java.io.IOException: Could not set up IO Streams to 
> remote-rs-host/1.1.1.2:60020
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:786)
> ...
> Caused by: java.lang.RuntimeException: SASL authentication failed. The most 
> likely cause is missing or invalid credentials. Consider 'kinit'.
> ...
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
> ...{noformat}
> Fixing the above requires additional authentication logic against the remote 
> HBase cluster.





[jira] [Updated] (HBASE-20878) Data loss if merging regions while ServerCrashProcedure executing

2018-07-12 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20878:

Fix Version/s: 2.1.1
   2.0.2
   3.0.0

> Data loss if merging regions while ServerCrashProcedure executing
> -
>
> Key: HBASE-20878
> URL: https://issues.apache.org/jira/browse/HBASE-20878
> Project: HBase
>  Issue Type: Bug
>  Components: amv2
>Affects Versions: 3.0.0, 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Critical
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-20878.branch-2.0.001.patch
>
>
> In MergeTableRegionsProcedure, we close the regions to merge using 
> UnassignProcedure. But if the RS these regions are on crashes, a 
> ServerCrashProcedure will execute at the same time. The UnassignProcedures 
> will be blocked until all logs are split. But since these regions are closed 
> for merging, they won't be opened again, so the recovered.edits in the 
> region dirs won't be replayed; thus, data will be lost.
> I provided a test to reproduce this case. I seriously suspect the split 
> region procedure also has this kind of problem. I will check later.





[jira] [Commented] (HBASE-20877) Hbase-1.2.0 OldWals age getting filled and not purged by Hmaster

2018-07-12 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541785#comment-16541785
 ] 

Sean Busbey commented on HBASE-20877:
-

Please don't rm WALs without understanding why they're present. The most likely 
reason they're still around is that replication has not yet managed to 
successfully send them to your indexer. Please follow the steps from my email 
to user@hbase:

https://lists.apache.org/thread.html/221e18a4a861ff6736cb17036ce17f410027046fd7f00fb80bfd11f1@%3Cuser.hbase.apache.org%3E
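A hedged checklist for confirming whether replication is pinning the oldWALs before deleting anything (paths assume default configuration; the commands are a sketch, not from the referenced email):

```shell
# How big is the backlog?
hdfs dfs -du -h /hbase/oldWALs

# Are there replication peers (e.g. an indexer) configured?
echo "list_peers" | hbase shell -n

# Inspect queued WALs per region server under the replication znode.
echo "ls /hbase/replication/rs" | hbase zkcli
```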

> Hbase-1.2.0 OldWals age getting filled and not purged by Hmaster
> 
>
> Key: HBASE-20877
> URL: https://issues.apache.org/jira/browse/HBASE-20877
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.2.0
>Reporter: Manjeet Singh
>Priority: Major
>
> HBase version 1.2.0: oldWALs are filling up, as shown below:
> 7.2 K 21.5 K /hbase/.hbase-snapshot
> 0 0 /hbase/.tmp
> 0 0 /hbase/MasterProcWALs
> 18.3 G 60.2 G /hbase/WALs
> 28.7 G 86.1 G /hbase/archive
> 0 0 /hbase/corrupt
> 1.7 T 5.2 T /hbase/data
> 42 126 /hbase/hbase.id
> 7 21 /hbase/hbase.version
> 7.2 T 21.6 T /hbase/oldWALs
>  
> It's not getting purged by the HMaster. oldWALs are supposed to be cleaned 
> up by a master background chore; HBASE-20352 (for the 1.x line) was created 
> to speed up cleaning oldWALs, but in our case it's not happening.
> hbase.master.logcleaner.ttl is 1 minute.





[jira] [Commented] (HBASE-20838) Move all setStorage related UT cases from TestFSUtils to TestCommonFSUtils

2018-07-11 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540764#comment-16540764
 ] 

Sean Busbey commented on HBASE-20838:
-

+1

> Move all setStorage related UT cases from TestFSUtils to TestCommonFSUtils
> --
>
> Key: HBASE-20838
> URL: https://issues.apache.org/jira/browse/HBASE-20838
> Project: HBase
>  Issue Type: Test
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
> Attachments: HBASE-20838.patch, HBASE-20838.patch, 
> HBASE-20838.v2.patch
>
>
> As per 
> [discussed|https://issues.apache.org/jira/browse/HBASE-20691?focusedCommentId=16517662=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16517662]
>  in HBASE-20691, since the setStoragePolicy code is in CommonFSUtils, the 
> test should be in TestCommonFSUtils





[jira] [Commented] (HBASE-18477) Umbrella JIRA for HBase Read Replica clusters

2018-07-11 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540615#comment-16540615
 ] 

Sean Busbey commented on HBASE-18477:
-

FYI, I haven't forgotten about this. Just a bit underwater; I should be back 
with bandwidth next week.

> Umbrella JIRA for HBase Read Replica clusters
> -
>
> Key: HBASE-18477
> URL: https://issues.apache.org/jira/browse/HBASE-18477
> Project: HBase
>  Issue Type: New Feature
>Reporter: Zach York
>Assignee: Zach York
>Priority: Major
> Attachments: HBase Read-Replica Clusters Scope doc.docx, HBase 
> Read-Replica Clusters Scope doc.pdf, HBase Read-Replica Clusters Scope 
> doc_v2.docx, HBase Read-Replica Clusters Scope doc_v2.pdf
>
>
> Recently, changes (such as HBASE-17437) have unblocked HBase to run with a 
> root directory external to the cluster (such as in Amazon S3). This means 
> that the data is stored outside of the cluster and can be accessible after 
> the cluster has been terminated. One use case that is often asked about is 
> pointing multiple clusters to one root directory (sharing the data) to have 
> read resiliency in the case of a cluster failure.
>  
> This JIRA is an umbrella JIRA to contain all the tasks necessary to create a 
> read-replica HBase cluster that is pointed at the same root directory.
>  
> This requires making the Read-Replica cluster Read-Only (no metadata 
> operation or data operations).
> Separating the hbase:meta table for each cluster (Otherwise HBase gets 
> confused with multiple clusters trying to update the meta table with their ip 
> addresses)
> Adding refresh functionality for the meta table to ensure new metadata is 
> picked up on the read replica cluster.
> Adding refresh functionality for HFiles for a given table to ensure new data 
> is picked up on the read replica cluster.
>  
> This can be used with any existing cluster that is backed by an external 
> filesystem.
>  
> Please note that this feature is still quite manual (with the potential for 
> automation later).
>  
> More information on this particular feature can be found here: 
> https://aws.amazon.com/blogs/big-data/setting-up-read-replica-clusters-with-hbase-on-amazon-s3/





[jira] [Commented] (HBASE-20257) hbase-spark should not depend on com.google.code.findbugs.jsr305

2018-07-11 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540606#comment-16540606
 ] 

Sean Busbey commented on HBASE-20257:
-

Sweet. Checking again.

> hbase-spark should not depend on com.google.code.findbugs.jsr305
> 
>
> Key: HBASE-20257
> URL: https://issues.apache.org/jira/browse/HBASE-20257
> Project: HBase
>  Issue Type: Task
>  Components: build, spark
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Artem Ervits
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-20257.v01.patch, HBASE-20257.v02.patch, 
> HBASE-20257.v03.patch, HBASE-20257.v04.patch, HBASE-20257.v05.patch
>
>
> The following can be observed in the build output of master branch:
> {code}
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.BannedDependencies failed 
> with message:
> We don't allow the JSR305 jar from the Findbugs project, see HBASE-16321.
> Found Banned Dependency: com.google.code.findbugs:jsr305:jar:1.3.9
> Use 'mvn dependency:tree' to locate the source of the banned dependencies.
> {code}
> Here is the related snippet from hbase-spark/pom.xml:
> {code}
> <groupId>com.google.code.findbugs</groupId>
> <artifactId>jsr305</artifactId>
> {code}
> Dependency on jsr305 should be dropped.
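A common way to act on that, sketched as a Maven exclusion. Which dependency actually drags in jsr305 is not stated here, so the Spark artifact shown is an assumption:

```xml
<!-- pom.xml sketch: exclude the banned jsr305 jar from whichever
     dependency pulls it in transitively (artifact shown is illustrative). -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <exclusions>
    <exclusion>
      <groupId>com.google.code.findbugs</groupId>
      <artifactId>jsr305</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```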





[jira] [Commented] (HBASE-20649) Validate HFiles do not have PREFIX_TREE DataBlockEncoding

2018-07-11 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540112#comment-16540112
 ] 

Sean Busbey commented on HBASE-20649:
-

Yeah, the steps I listed are what I'd like to document for operators. Or maybe 
a summary like e.g. "when the files look like they're in the archive directory 
you should check for tables with references as a result of cloning and for 
snapshots" with a pointer back here for the specific step-by-step of commands 
to run.

I agree that automating more of determining what actions are needed to clean 
things up for upgrade would be useful. I'd like to have it wait for follow-on 
work since at the moment we're dealing with cleanup for what's long been an 
"experimental" data block encoding and I know [~balazs.meszaros]'s time is 
limited and this particular work has been going back and forth for ~a month.

> Validate HFiles do not have PREFIX_TREE DataBlockEncoding
> -
>
> Key: HBASE-20649
> URL: https://issues.apache.org/jira/browse/HBASE-20649
> Project: HBase
>  Issue Type: New Feature
>Reporter: Peter Somogyi
>Assignee: Balazs Meszaros
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HBASE-20649.master.001.patch, 
> HBASE-20649.master.002.patch, HBASE-20649.master.003.patch, 
> HBASE-20649.master.004.patch, HBASE-20649.master.005.patch
>
>
> HBASE-20592 adds a tool to check that column families on the cluster do not 
> have PREFIX_TREE encoding.
> Since it is possible that DataBlockEncoding was already changed but HFiles 
> are not rewritten yet, we need a tool that can verify the contents of 
> hfiles in the cluster.





[jira] [Commented] (HBASE-20838) Move all setStorage related UT cases from TestFSUtils to TestCommonFSUtils

2018-07-11 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16540102#comment-16540102
 ] 

Sean Busbey commented on HBASE-20838:
-

change looks good. let me test it out locally here. we'll need an update to the 
jira subject / description, I think?

> Move all setStorage related UT cases from TestFSUtils to TestCommonFSUtils
> --
>
> Key: HBASE-20838
> URL: https://issues.apache.org/jira/browse/HBASE-20838
> Project: HBase
>  Issue Type: Test
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
> Attachments: HBASE-20838.patch, HBASE-20838.patch, 
> HBASE-20838.v2.patch
>
>
> As per 
> [discussed|https://issues.apache.org/jira/browse/HBASE-20691?focusedCommentId=16517662&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16517662]
>  in HBASE-20691, since the setStoragePolicy code is in CommonFSUtils, the 
> test should be in TestCommonFSUtils





[jira] [Commented] (HBASE-20649) Validate HFiles do not have PREFIX_TREE DataBlockEncoding

2018-07-10 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16538704#comment-16538704
 ] 

Sean Busbey commented on HBASE-20649:
-

what do y'all think about the outlined steps [~zyork] or [~Apache9]?

> Validate HFiles do not have PREFIX_TREE DataBlockEncoding
> -
>
> Key: HBASE-20649
> URL: https://issues.apache.org/jira/browse/HBASE-20649
> Project: HBase
>  Issue Type: New Feature
>Reporter: Peter Somogyi
>Assignee: Balazs Meszaros
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HBASE-20649.master.001.patch, 
> HBASE-20649.master.002.patch, HBASE-20649.master.003.patch, 
> HBASE-20649.master.004.patch, HBASE-20649.master.005.patch
>
>
> HBASE-20592 adds a tool to check that column families on the cluster do not 
> have PREFIX_TREE encoding.
> Since it is possible that DataBlockEncoding was already changed but HFiles 
> are not rewritten yet, we need a tool that can verify the contents of 
> hfiles in the cluster.





[jira] [Commented] (HBASE-20838) Move all setStorage related UT cases from TestFSUtils to TestCommonFSUtils

2018-07-10 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16538669#comment-16538669
 ] 

Sean Busbey commented on HBASE-20838:
-

{quote}

+1  unit      202m 31s      root in the patch passed.
+1  unit      127m 41s      hbase-server in the patch passed.
{quote}

we shouldn't add hbase-server if we're going to run the tests in root.

> Move all setStorage related UT cases from TestFSUtils to TestCommonFSUtils
> --
>
> Key: HBASE-20838
> URL: https://issues.apache.org/jira/browse/HBASE-20838
> Project: HBase
>  Issue Type: Test
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
> Attachments: HBASE-20838.patch, HBASE-20838.patch
>
>
> As per 
> [discussed|https://issues.apache.org/jira/browse/HBASE-20691?focusedCommentId=16517662&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16517662]
>  in HBASE-20691, since the setStoragePolicy code is in CommonFSUtils, the 
> test should be in TestCommonFSUtils





[jira] [Commented] (HBASE-20838) Move all setStorage related UT cases from TestFSUtils to TestCommonFSUtils

2018-07-10 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16538485#comment-16538485
 ] 

Sean Busbey commented on HBASE-20838:
-

Sure, testing locally works, as does attaching a patch here that has multiple 
commits: one for your change and then others to alter files.

I test locally by using the "Mac Homebrew" instructions on the [bottom of the 
Yetus download page|http://yetus.apache.org/downloads/]. Then I test with a 
local patch file that changes things the hbase personality will respond to, 
either on a branch with my changes or by pointing at a copy of the personality 
with the changes.

e.g. If I want to know if the unit tests change:

{code}
$ test-patch --personality=dev-support/hbase-personality.sh 
--plugins=maven,java,compile,mvninstall,unit 
/some/path/to/a/test/FOOBAR-1234.patch
{code}



> Move all setStorage related UT cases from TestFSUtils to TestCommonFSUtils
> --
>
> Key: HBASE-20838
> URL: https://issues.apache.org/jira/browse/HBASE-20838
> Project: HBase
>  Issue Type: Test
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
> Attachments: HBASE-20838.patch
>
>
> As per 
> [discussed|https://issues.apache.org/jira/browse/HBASE-20691?focusedCommentId=16517662&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16517662]
>  in HBASE-20691, since the setStoragePolicy code is in CommonFSUtils, the 
> test should be in TestCommonFSUtils





[jira] [Commented] (HBASE-20723) Custom hbase.wal.dir results in data loss because we write recovered edits into a different place than where the recovering region server looks for them

2018-07-09 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16537537#comment-16537537
 ] 

Sean Busbey commented on HBASE-20723:
-

ping [~stack] this looks like another good candidate for 2.0.2

> Custom hbase.wal.dir results in data loss because we write recovered edits 
> into a different place than where the recovering region server looks for them
> 
>
> Key: HBASE-20723
> URL: https://issues.apache.org/jira/browse/HBASE-20723
> Project: HBase
>  Issue Type: Bug
>  Components: Recovery, wal
>Affects Versions: 1.4.0, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 2.0.0
>Reporter: Rohan Pednekar
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 2.1.0, 1.5.0, 1.4.6
>
> Attachments: 20723.branch-1.txt, 20723.branch-2.txt, 20723.v1.txt, 
> 20723.v10.txt, 20723.v2.txt, 20723.v3.txt, 20723.v4.txt, 20723.v5.txt, 
> 20723.v5.txt, 20723.v6.txt, 20723.v7.txt, 20723.v8.txt, 20723.v9.txt, logs.zip
>
>
> Description:
> When custom hbase.wal.dir is configured the recovery system uses it in place 
> of the HBase root dir and thus constructs an incorrect path for recovered 
> edits when splitting WALs. This causes the recovery code in Region Servers to 
> believe there are no recovered edits to replay, which causes a loss of writes 
> that had not flushed prior to loss of a server.
>  
> Reproduction:
> This is an Azure HDInsight HBase cluster with HDP 2.6. and HBase 
> 1.1.2.2.6.3.2-14 
> By default the underlying data is going to wasb://x@y/hbase 
>  I tried to move WAL folders to HDFS, which is the SSD mounted on each VM at 
> /mnt.
> hbase.wal.dir= hdfs://mycluster/walontest
> hbase.wal.dir.perms=700
> hbase.rootdir.perms=700
> hbase.rootdir= 
> wasb://XYZ[@hbaseperf.core.net|mailto:duohbase5ds...@duohbaseperf.blob.core.windows.net]/hbase
> Procedure to reproduce this issue:
> 1. create a table in hbase shell
> 2. insert a row in hbase shell
> 3. reboot the VM which hosts that region
> 4. scan the table in hbase shell and it is empty
> Looking at the region server logs:
> {code:java}
> 2018-06-12 22:08:40,455 INFO  [RS_LOG_REPLAY_OPS-wn2-duohba:16020-0-Writer-1] 
> wal.WALSplitter: This region's directory doesn't exist: 
> hdfs://mycluster/walontest/data/default/tb1/b7fd7db5694eb71190955292b3ff7648. 
> It is very likely that it was already split so it's safe to discard those 
> edits.
> {code}
> The log split/replay ignored actual WAL due to WALSplitter is looking for the 
> region directory in the hbase.wal.dir we specified rather than the 
> hbase.rootdir.
> Looking at the source code,
>  
> [https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java]
>  it uses the rootDir, which is walDir, as the tableDir root path.
> So if we use HBASE-17437 and the WAL dir and the hbase root dir are different 
> paths, or even on different filesystems, then #5, which uses walDir as 
> tableDir, is apparently wrong.
> CC: [~zyork], [~yuzhih...@gmail.com] Attached the logs for quick review.





[jira] [Commented] (HBASE-20257) hbase-spark should not depend on com.google.code.findbugs.jsr305

2018-07-09 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16537183#comment-16537183
 ] 

Sean Busbey commented on HBASE-20257:
-

can someone please clean up the failures in precommit? they look related.

> hbase-spark should not depend on com.google.code.findbugs.jsr305
> 
>
> Key: HBASE-20257
> URL: https://issues.apache.org/jira/browse/HBASE-20257
> Project: HBase
>  Issue Type: Task
>  Components: build, spark
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Artem Ervits
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-20257.v01.patch, HBASE-20257.v02.patch, 
> HBASE-20257.v03.patch, HBASE-20257.v04.patch
>
>
> The following can be observed in the build output of master branch:
> {code}
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.BannedDependencies failed 
> with message:
> We don't allow the JSR305 jar from the Findbugs project, see HBASE-16321.
> Found Banned Dependency: com.google.code.findbugs:jsr305:jar:1.3.9
> Use 'mvn dependency:tree' to locate the source of the banned dependencies.
> {code}
> Here is related snippet from hbase-spark/pom.xml:
> {code}
> <dependency>
>   <groupId>com.google.code.findbugs</groupId>
>   <artifactId>jsr305</artifactId>
> </dependency>
> {code}
> Dependency on jsr305 should be dropped.





[jira] [Resolved] (HBASE-20336) org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad fails

2018-07-09 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-20336.
-
   Resolution: Cannot Reproduce
Fix Version/s: (was: 3.0.0)

This has been fixed at some point as of ref 59867b. I still get the warning 
about stdout, but the test no longer fails.

{code}
10:43:31,173 [WARNING] Corrupted STDOUT by directly writing to native stream in 
forked JVM 1. See FAQ web page and the dump file 
/Users/busbey/tmp_projects/hbase/hbase-spark-it/target/failsafe-reports/2018-07-09T10-43-30_826-jvmRun1.dumpstream
10:43:32,202 [INFO] Running 
org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad
10:46:21,090 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time 
elapsed: 168.835 s - in 
org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad
10:46:21,746 [INFO] 
10:46:21,746 [INFO] Results:
10:46:21,746 [INFO] 
10:46:21,746 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

{code}

> org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad fails
> 
>
> Key: HBASE-20336
> URL: https://issues.apache.org/jira/browse/HBASE-20336
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Priority: Blocker
>
> running {{mvn verify}} for the spark integration tests against current master 
> fails
> {code}
> $ mvn -DskipTests install
> 11:37:26,815 [INFO] Scanning for projects...
> ...
> 11:43:36,711 [INFO] 
> 
> 11:43:36,711 [INFO] Reactor Summary:
> 11:43:36,711 [INFO] 
> 11:43:36,712 [INFO] Apache HBase ... 
> SUCCESS [  6.294 s]
> 11:43:36,712 [INFO] Apache HBase - Checkstyle .. 
> SUCCESS [  1.742 s]
> 11:43:36,712 [INFO] Apache HBase - Build Support ... 
> SUCCESS [  0.070 s]
> 11:43:36,712 [INFO] Apache HBase - Error Prone Rules ... 
> SUCCESS [  2.206 s]
> 11:43:36,712 [INFO] Apache HBase - Annotations . 
> SUCCESS [  1.413 s]
> 11:43:36,712 [INFO] Apache HBase - Build Configuration . 
> SUCCESS [  0.254 s]
> 11:43:36,712 [INFO] Apache HBase - Shaded Protocol . 
> SUCCESS [ 37.870 s]
> 11:43:36,712 [INFO] Apache HBase - Common .. 
> SUCCESS [ 12.526 s]
> 11:43:36,712 [INFO] Apache HBase - Metrics API . 
> SUCCESS [  2.412 s]
> 11:43:36,712 [INFO] Apache HBase - Hadoop Compatibility  
> SUCCESS [  3.260 s]
> 11:43:36,712 [INFO] Apache HBase - Metrics Implementation .. 
> SUCCESS [  2.756 s]
> 11:43:36,713 [INFO] Apache HBase - Hadoop Two Compatibility  
> SUCCESS [  3.959 s]
> 11:43:36,713 [INFO] Apache HBase - Protocol  
> SUCCESS [ 11.295 s]
> 11:43:36,713 [INFO] Apache HBase - Client .. 
> SUCCESS [ 15.360 s]
> 11:43:36,713 [INFO] Apache HBase - Zookeeper ... 
> SUCCESS [  4.389 s]
> 11:43:36,713 [INFO] Apache HBase - Replication . 
> SUCCESS [  4.202 s]
> 11:43:36,713 [INFO] Apache HBase - Resource Bundle . 
> SUCCESS [  0.206 s]
> 11:43:36,713 [INFO] Apache HBase - HTTP  
> SUCCESS [  8.530 s]
> 11:43:36,713 [INFO] Apache HBase - Procedure ... 
> SUCCESS [  4.196 s]
> 11:43:36,713 [INFO] Apache HBase - Server .. 
> SUCCESS [ 44.604 s]
> 11:43:36,713 [INFO] Apache HBase - MapReduce ... 
> SUCCESS [ 11.122 s]
> 11:43:36,713 [INFO] Apache HBase - Testing Util  
> SUCCESS [  6.633 s]
> 11:43:36,713 [INFO] Apache HBase - Thrift .. 
> SUCCESS [  9.771 s]
> 11:43:36,713 [INFO] Apache HBase - RSGroup . 
> SUCCESS [  6.703 s]
> 11:43:36,713 [INFO] Apache HBase - Shell ... 
> SUCCESS [  7.094 s]
> 11:43:36,713 [INFO] Apache HBase - Coprocessor Endpoint  
> SUCCESS [  7.542 s]
> 11:43:36,713 [INFO] Apache HBase - Backup .. 
> SUCCESS [  6.246 s]
> 11:43:36,713 [INFO] Apache HBase - Integration Tests ... 
> SUCCESS [  7.461 s]
> 11:43:36,713 [INFO] Apache HBase - Examples  
> SUCCESS [  9.054 s]
> 11:43:36,713 [INFO] Apache HBase - Rest  
> SUCCESS [  8.972 s]
> 11:43:36,713 [INFO] Apache HBase - External Block Cache  
> SUCCESS [  5.180 s]
> 11:43:36,713 [INFO] Apache HBase - Spark ... 
> SUCCESS [01:10 min]
> 11:43:36,713 [INFO] Apache HBase - Spark 

[jira] [Commented] (HBASE-20257) hbase-spark should not depend on com.google.code.findbugs.jsr305

2018-07-09 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16537055#comment-16537055
 ] 

Sean Busbey commented on HBASE-20257:
-

it's started now:

https://builds.apache.org/job/PreCommit-HBASE-Build/13550/

> hbase-spark should not depend on com.google.code.findbugs.jsr305
> 
>
> Key: HBASE-20257
> URL: https://issues.apache.org/jira/browse/HBASE-20257
> Project: HBase
>  Issue Type: Task
>  Components: build, spark
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Artem Ervits
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-20257.v01.patch, HBASE-20257.v02.patch, 
> HBASE-20257.v03.patch, HBASE-20257.v04.patch
>
>
> The following can be observed in the build output of master branch:
> {code}
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.BannedDependencies failed 
> with message:
> We don't allow the JSR305 jar from the Findbugs project, see HBASE-16321.
> Found Banned Dependency: com.google.code.findbugs:jsr305:jar:1.3.9
> Use 'mvn dependency:tree' to locate the source of the banned dependencies.
> {code}
> Here is related snippet from hbase-spark/pom.xml:
> {code}
> <dependency>
>   <groupId>com.google.code.findbugs</groupId>
>   <artifactId>jsr305</artifactId>
> </dependency>
> {code}
> Dependency on jsr305 should be dropped.





[jira] [Commented] (HBASE-20257) hbase-spark should not depend on com.google.code.findbugs.jsr305

2018-07-09 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16537054#comment-16537054
 ] 

Sean Busbey commented on HBASE-20257:
-

sure. lemme start by getting a current precommit run.

> hbase-spark should not depend on com.google.code.findbugs.jsr305
> 
>
> Key: HBASE-20257
> URL: https://issues.apache.org/jira/browse/HBASE-20257
> Project: HBase
>  Issue Type: Task
>  Components: build, spark
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Artem Ervits
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-20257.v01.patch, HBASE-20257.v02.patch, 
> HBASE-20257.v03.patch, HBASE-20257.v04.patch
>
>
> The following can be observed in the build output of master branch:
> {code}
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.BannedDependencies failed 
> with message:
> We don't allow the JSR305 jar from the Findbugs project, see HBASE-16321.
> Found Banned Dependency: com.google.code.findbugs:jsr305:jar:1.3.9
> Use 'mvn dependency:tree' to locate the source of the banned dependencies.
> {code}
> Here is related snippet from hbase-spark/pom.xml:
> {code}
> <dependency>
>   <groupId>com.google.code.findbugs</groupId>
>   <artifactId>jsr305</artifactId>
> </dependency>
> {code}
> Dependency on jsr305 should be dropped.





[jira] [Commented] (HBASE-20838) Move all setStorage related UT cases from TestFSUtils to TestCommonFSUtils

2018-07-09 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16537053#comment-16537053
 ] 

Sean Busbey commented on HBASE-20838:
-

The file that needs to change is at {{dev-support/hbase-personality.sh}}. 
Specifically, the method that needs to be updated is this one:

{code}
## @description  Queue up modules for this personality
## @audience     private
## @stability    evolving
## @param        repostatus
## @param        testtype
function personality_modules
{code}

This is the method that tells Apache Yetus which modules need to run for a 
given test.

Within the method we keep track of which modules to run with this variable:

{code}
local MODULES=("${CHANGED_MODULES[@]}")
{code}

Here, in the declaration, we start it off equivalent to the list of changed 
modules that Yetus provides us.

In the rest of the method we have various checks for specific kinds of tests 
that need different things. e.g. "If you're doing findbugs don't bother running 
against the hbase-shell module" or "if the hbase-checkstyle module has changed 
then run the checkstyle test at the top of the project instead of for any 
particular module."

The method currently doesn't make use of it, but Yetus also provides a list of 
all the files that changed in a global array variable named {{CHANGED_FILES}}. 
I believe the correct thing to do is to start with the block for "if we're 
running unit tests do this prep work"

{code}
  # If EXCLUDE_TESTS_URL/INCLUDE_TESTS_URL is set, fetches the url
  # and sets -Dtest.exclude.pattern/-Dtest to exclude/include the
  # tests respectively.
  if [[ ${testtype} == unit ]]; then
local tests_arg=""
get_include_exclude_tests_arg tests_arg
extra="${extra} -PrunAllTests ${tests_arg}"

# Inject the jenkins build-id for our surefire invocations
# Used by zombie detection stuff, even though we're not including that yet.
if [ -n "${BUILD_ID}" ]; then
  extra="${extra} -Dbuild.id=${BUILD_ID}"
fi
  fi
{code}

And add an additional stanza after the BUILD_ID stuff that basically says "if 
the set of changed files includes CommonFSUtils then add the hbase-server 
module to the set of modules to be tested". We'll also need to check that we're 
not doing a full build and that neither hbase-server nor "top of project" is 
already in the set of modules to test.
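A minimal sketch of that stanza, pulled out as a standalone bash function so the logic is easy to check in isolation. The function name here is made up for illustration, and exactly where it would hook into {{personality_modules}} is an assumption; {{CHANGED_FILES}} and {{MODULES}} are the Yetus-provided variables described above.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: when CommonFSUtils changed and we're running unit
# tests, queue the hbase-server module too -- unless a full build (".")
# or hbase-server itself is already queued. CHANGED_FILES and MODULES are
# the arrays Yetus maintains; the function name is invented for this sketch.
add_server_module_for_commonfsutils() {
  local testtype=$1
  [[ "${testtype}" != "unit" ]] && return 0
  local fn m
  for fn in "${CHANGED_FILES[@]}"; do
    if [[ "${fn}" == *CommonFSUtils* ]]; then
      for m in "${MODULES[@]}"; do
        # "." means top-of-project, so hbase-server would run anyway
        [[ "${m}" == "." || "${m}" == "hbase-server" ]] && return 0
      done
      MODULES+=("hbase-server")
      return 0
    fi
  done
}
```

The dedup check is the point of the last requirement above: queuing both root and hbase-server would run the hbase-server tests twice.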

> Move all setStorage related UT cases from TestFSUtils to TestCommonFSUtils
> --
>
> Key: HBASE-20838
> URL: https://issues.apache.org/jira/browse/HBASE-20838
> Project: HBase
>  Issue Type: Test
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
>
> As per 
> [discussed|https://issues.apache.org/jira/browse/HBASE-20691?focusedCommentId=16517662&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16517662]
>  in HBASE-20691, since the setStoragePolicy code is in CommonFSUtils, the 
> test should be in TestCommonFSUtils





[jira] [Commented] (HBASE-20649) Validate HFiles do not have PREFIX_TREE DataBlockEncoding

2018-07-06 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16535476#comment-16535476
 ] 

Sean Busbey commented on HBASE-20649:
-

okay, I think this can work. We just need to add some more info to the section 
explaining how to interpret the output. Before we push forward on this, folks 
should read through and see if we're asking too much of operators.

On my test cluster I made a PREFIX_TREE table, inserted data, flushed it, 
snapshotted it, cloned the snapshot, then altered both tables to change the 
data block encoding to something other than PREFIX_TREE. Then I started from 
the assumption of not knowing that had happened and relied on the pre-upgrade 
tool to figure out how to make things work.

Each iteration I ran the same command: {{hbase --config /etc/hbase/conf 
pre-upgrade validate-hfile}}

h3. first run

Tool complains about the file in {{example}} table, the first flush. Here's the 
output
{code}

18/07/06 15:46:33 WARN hbck.HFileCorruptionChecker: Found corrupt HFile 
hdfs://busbey-hbase-20649-1.example.com:8020/hbase/data/default/example/624357cffd1fae4422663c98155de45b/f1/bfc569db5fa543f5ba69bab594a85cea
org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile 
Trailer from file 
hdfs://busbey-hbase-20649-1.example.com:8020/hbase/data/default/example/624357cffd1fae4422663c98155de45b/f1/bfc569db5fa543f5ba69bab594a85cea
at org.apache.hadoop.hbase.io.hfile.HFile.openReader(HFile.java:545)
at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:611)
at 
org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkHFile(HFileCorruptionChecker.java:101)
at 
org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkColFamDir(HFileCorruptionChecker.java:185)
at 
org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker.checkRegionDir(HFileCorruptionChecker.java:323)
at 
org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:408)
at 
org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker$RegionDirChecker.call(HFileCorruptionChecker.java:399)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Invalid data block encoding type in file info: 
PREFIX_TREE
at 
org.apache.hadoop.hbase.io.hfile.HFileDataBlockEncoderImpl.createFromFileInfo(HFileDataBlockEncoderImpl.java:58)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.<init>(HFileReaderImpl.java:246)
at org.apache.hadoop.hbase.io.hfile.HFile.openReader(HFile.java:538)
... 14 more
Caused by: java.lang.IllegalArgumentException: No enum constant 
org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.PREFIX_TREE
at java.lang.Enum.valueOf(Enum.java:238)
at 
org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.valueOf(DataBlockEncoding.java:31)
at 
org.apache.hadoop.hbase.io.hfile.HFileDataBlockEncoderImpl.createFromFileInfo(HFileDataBlockEncoderImpl.java:56)
... 16 more
18/07/06 15:46:33 INFO tool.HFileContentValidator: Validating HFile contents 
under hdfs://busbey-hbase-20649-1.example.com:8020/hbase/archive
18/07/06 15:46:33 WARN tool.HFileContentValidator: Corrupt file: 
hdfs://busbey-hbase-20649-1.example.com:8020/hbase/data/default/example/624357cffd1fae4422663c98155de45b/f1/bfc569db5fa543f5ba69bab594a85cea
18/07/06 15:46:33 WARN tool.HFileContentValidator: There are 1 corrupted 
HFiles. Change data block encodings before upgrading. Check 
https://s.apache.org/prefixtree for instructions.
{code}

I think given the path {{/hbase/data/default/example/}} it's straightforward 
to reason "I need to do a major compaction of the example table". So I did that.
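That "read the table off the flagged path" step can be sketched as a tiny helper. This is a hypothetical illustration, not part of the pre-upgrade tool; it assumes the standard {{.../data/<namespace>/<table>/<region>/<family>/<hfile>}} layout visible in the output above.

```shell
#!/usr/bin/env bash
# Hypothetical helper: pull "namespace:table" out of a flagged hfile path
# such as .../data/<namespace>/<table>/<region>/<family>/<hfile>, so the
# operator knows which table to major-compact. Not part of the tool itself.
table_from_hfile_path() {
  local rest=${1#*/data/}    # drop everything through the first "/data/"
  local ns=${rest%%/*}       # next component is the namespace
  rest=${rest#*/}
  printf '%s:%s\n' "${ns}" "${rest%%/*}"
}
```

For the corrupt file reported in the first run this prints {{default:example}}; it also handles the archive copy, since {{/data/}} there just sits under {{/hbase/archive}}.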

h3. second run

The tool complains about the same file, but this time it's in the archive 
directory.

{code}

18/07/06 15:50:42 INFO tool.HFileContentValidator: Validating HFile contents 
under hdfs://busbey-hbase-20649-1.example.com:8020/hbase/archive
18/07/06 15:50:42 WARN hbck.HFileCorruptionChecker: Found corrupt HFile 
hdfs://busbey-hbase-20649-1.example.com:8020/hbase/archive/data/default/example/624357cffd1fae4422663c98155de45b/f1/bfc569db5fa543f5ba69bab594a85cea
org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem 

[jira] [Updated] (HBASE-20856) PITA having to set WAL provider in two places

2018-07-06 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20856:

Priority: Minor  (was: Major)

> PITA having to set WAL provider in two places
> -
>
> Key: HBASE-20856
> URL: https://issues.apache.org/jira/browse/HBASE-20856
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability, wal
>Reporter: stack
>Priority: Minor
> Fix For: 2.0.2, 2.2.0, 2.1.1
>
>
> Courtesy of [~elserj], I learn that changing WAL we need to set two places... 
> both hbase.wal.meta_provider and hbase.wal.provider. Operator should only 
> have to set it in one place; hbase.wal.meta_provider should pick up general 
> setting unless hbase.wal.meta_provider is explicitly set.





[jira] [Updated] (HBASE-20856) PITA having to set WAL provider in two places

2018-07-06 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20856:

Fix Version/s: 2.1.1
   2.2.0
   2.0.2

> PITA having to set WAL provider in two places
> -
>
> Key: HBASE-20856
> URL: https://issues.apache.org/jira/browse/HBASE-20856
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability, wal
>Reporter: stack
>Priority: Major
> Fix For: 2.0.2, 2.2.0, 2.1.1
>
>
> Courtesy of [~elserj], I learn that changing WAL we need to set two places... 
> both hbase.wal.meta_provider and hbase.wal.provider. Operator should only 
> have to set it in one place; hbase.wal.meta_provider should pick up general 
> setting unless hbase.wal.meta_provider is explicitly set.





[jira] [Commented] (HBASE-20856) PITA having to set WAL provider in two places

2018-07-06 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16535211#comment-16535211
 ] 

Sean Busbey commented on HBASE-20856:
-

yeah that's a good idea.

> PITA having to set WAL provider in two places
> -
>
> Key: HBASE-20856
> URL: https://issues.apache.org/jira/browse/HBASE-20856
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability, wal
>Reporter: stack
>Priority: Major
> Fix For: 2.0.2, 2.2.0, 2.1.1
>
>
> Courtesy of [~elserj], I learn that changing WAL we need to set two places... 
> both hbase.wal.meta_provider and hbase.wal.provider. Operator should only 
> have to set it in one place; hbase.wal.meta_provider should pick up general 
> setting unless hbase.wal.meta_provider is explicitly set.





[jira] [Updated] (HBASE-20856) PITA having to set WAL provider in two places

2018-07-06 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20856:

Issue Type: Improvement  (was: Bug)

> PITA having to set WAL provider in two places
> -
>
> Key: HBASE-20856
> URL: https://issues.apache.org/jira/browse/HBASE-20856
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability, wal
>Reporter: stack
>Priority: Major
> Fix For: 2.0.2, 2.2.0, 2.1.1
>
>
> Courtesy of [~elserj], I learn that changing WAL we need to set two places... 
> both hbase.wal.meta_provider and hbase.wal.provider. Operator should only 
> have to set it in one place; hbase.wal.meta_provider should pick up general 
> setting unless hbase.wal.meta_provider is explicitly set.





[jira] [Updated] (HBASE-20851) Change rubocop config for max line length of 100

2018-07-06 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20851:

Component/s: community

> Change rubocop config for max line length of 100
> 
>
> Key: HBASE-20851
> URL: https://issues.apache.org/jira/browse/HBASE-20851
> Project: HBase
>  Issue Type: Bug
>  Components: community, shell
>Affects Versions: 2.0.1
>Reporter: Umesh Agashe
>Priority: Minor
>  Labels: beginner, beginners
>
> Existing ruby and Java code uses max line length of 100 characters. Change 
> rubocop config with:
> {code:java}
> Metrics/LineLength:
>   Max: 100
> {code}





[jira] [Commented] (HBASE-6028) Implement a cancel for in-progress compactions

2018-07-06 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-6028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16534835#comment-16534835
 ] 

Sean Busbey commented on HBASE-6028:


thanks!

> Implement a cancel for in-progress compactions
> --
>
> Key: HBASE-6028
> URL: https://issues.apache.org/jira/browse/HBASE-6028
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Derek Wollenstein
>Assignee: Mohit Goel
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-6028.master.007.patch, 
> HBASE-6028.master.008.patch, HBASE-6028.master.008.patch, 
> HBASE-6028.master.009.patch
>
>
> Depending on current server load, it can be extremely expensive to run 
> periodic minor / major compactions.  It would be helpful to have a feature 
> where a user could use the shell or a client tool to explicitly cancel an 
> in-progress compactions.  This would allow a system to recover when too many 
> regions became eligible for compactions at once





[jira] [Commented] (HBASE-20838) Move all setStorage related UT cases from TestFSUtils to TestCommonFSUtils

2018-07-06 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16534795#comment-16534795
 ] 

Sean Busbey commented on HBASE-20838:
-

Let me know if you want help on figuring out the precommit changes to do this, 
btw. I know the test rig code can be intimidating.

> Move all setStorage related UT cases from TestFSUtils to TestCommonFSUtils
> --
>
> Key: HBASE-20838
> URL: https://issues.apache.org/jira/browse/HBASE-20838
> Project: HBase
>  Issue Type: Test
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
>
> As per 
> [discussed|https://issues.apache.org/jira/browse/HBASE-20691?focusedCommentId=16517662=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16517662]
>  in HBASE-20691, since the setStoragePolicy code is in CommonFSUtils, the 
> test should be in TestCommonFSUtils





[jira] [Comment Edited] (HBASE-20838) Move all setStorage related UT cases from TestFSUtils to TestCommonFSUtils

2018-07-06 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534786#comment-16534786
 ] 

Sean Busbey edited comment on HBASE-20838 at 7/6/18 12:54 PM:
--

we should also have a note in CommonFSUtils that its functionality is tested in 
hbase-server's TestFSUtils and that precommit is going to run the hbase-server 
tests if the file changes, since we already know hbase-server tests are a large 
time hit and it'll be surprising.


was (Author: busbey):
we should also have a note in CommonFSUtils that its functionality is tested in 
hbase-server's TestFSUtils and that precommit is going to run the hbase-server 
tests if the file changes, since we already know hbase-server tests is a large 
time hit and it'll be surprising.

> Move all setStorage related UT cases from TestFSUtils to TestCommonFSUtils
> --
>
> Key: HBASE-20838
> URL: https://issues.apache.org/jira/browse/HBASE-20838
> Project: HBase
>  Issue Type: Test
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
>
> As per 
> [discussed|https://issues.apache.org/jira/browse/HBASE-20691?focusedCommentId=16517662=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16517662]
>  in HBASE-20691, since the setStoragePolicy code is in CommonFSUtils, the 
> test should be in TestCommonFSUtils





[jira] [Commented] (HBASE-20838) Move all setStorage related UT cases from TestFSUtils to TestCommonFSUtils

2018-07-06 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534786#comment-16534786
 ] 

Sean Busbey commented on HBASE-20838:
-

we should also have a note in CommonFSUtils that its functionality is tested in 
hbase-server's TestFSUtils and that precommit is going to run the hbase-server 
tests if the file changes, since we already know hbase-server tests is a large 
time hit and it'll be surprising.

> Move all setStorage related UT cases from TestFSUtils to TestCommonFSUtils
> --
>
> Key: HBASE-20838
> URL: https://issues.apache.org/jira/browse/HBASE-20838
> Project: HBase
>  Issue Type: Test
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
>
> As per 
> [discussed|https://issues.apache.org/jira/browse/HBASE-20691?focusedCommentId=16517662=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16517662]
>  in HBASE-20691, since the setStoragePolicy code is in CommonFSUtils, the 
> test should be in TestCommonFSUtils





[jira] [Commented] (HBASE-20649) Validate HFiles do not have PREFIX_TREE DataBlockEncoding

2018-07-05 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534145#comment-16534145
 ] 

Sean Busbey commented on HBASE-20649:
-

Let me stand up my test cluster and work through it again. Running across the 
archive sounds like a viable solution.

> Validate HFiles do not have PREFIX_TREE DataBlockEncoding
> -
>
> Key: HBASE-20649
> URL: https://issues.apache.org/jira/browse/HBASE-20649
> Project: HBase
>  Issue Type: New Feature
>Reporter: Peter Somogyi
>Assignee: Balazs Meszaros
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HBASE-20649.master.001.patch, 
> HBASE-20649.master.002.patch, HBASE-20649.master.003.patch, 
> HBASE-20649.master.004.patch
>
>
> HBASE-20592 adds a tool to check column families on the cluster do not have 
> PREFIX_TREE encoding.
> Since it is possible that DataBlockEncoding was already changed but HFiles 
> are not rewritten yet we would need a tool that can verify the content of 
> hfiles in the cluster.





[jira] [Commented] (HBASE-20749) Upgrade our use of checkstyle to 8.6+

2018-07-05 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16533748#comment-16533748
 ] 

Sean Busbey commented on HBASE-20749:
-

Checking this requires:

1. Get our current project error count.
2. Update to the proposed fix (may require changing our configs as well).
3. Note our error count after.

Presuming the error count changed, make sure none of the new errors are related 
to separate import groups and separate static imports.
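For steps 1 and 3, one rough way to get a comparable number is counting the 
error elements in the report that `mvn checkstyle:checkstyle` writes to 
target/checkstyle-result.xml. The sketch below uses a tiny stand-in report 
file, so the path and contents are illustrative only:

```shell
# Stand-in for target/checkstyle-result.xml as produced by the
# maven-checkstyle-plugin; the real file lists every flagged file.
cat > /tmp/checkstyle-result.xml <<'EOF'
<checkstyle version="8.6">
  <file name="Foo.java">
    <error severity="error" message="Wrong order for import"/>
    <error severity="error" message="Line is longer than 100 characters"/>
  </file>
</checkstyle>
EOF
# Count violations; run this once before and once after the upgrade and diff.
grep -c '<error ' /tmp/checkstyle-result.xml   # -> 2
```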

> Upgrade our use of checkstyle to 8.6+
> -
>
> Key: HBASE-20749
> URL: https://issues.apache.org/jira/browse/HBASE-20749
> Project: HBase
>  Issue Type: Improvement
>  Components: build, community
>Reporter: Sean Busbey
>Priority: Minor
>
> We should upgrade our checkstyle version to 8.6 or later so we can use the 
> "match violation message to this regex" feature for suppression. That will 
> allow us to make sure we don't regress on HTrace v3 vs v4 APIs (came up in 
> HBASE-20332).
> We're currently blocked on upgrading to 8.3+ by [checkstyle 
> #5279|https://github.com/checkstyle/checkstyle/issues/5279], a regression 
> that flags our use of both the "separate import groups" and "put static 
> imports over here" configs as an error.





[jira] [Commented] (HBASE-20749) Upgrade our use of checkstyle to 8.6+

2018-07-05 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16533722#comment-16533722
 ] 

Sean Busbey commented on HBASE-20749:
-

the checkstyle team has a proposed fix up for test/comment and would like our 
opinion on it:

https://github.com/checkstyle/checkstyle/issues/5279#issuecomment-402635861

> Upgrade our use of checkstyle to 8.6+
> -
>
> Key: HBASE-20749
> URL: https://issues.apache.org/jira/browse/HBASE-20749
> Project: HBase
>  Issue Type: Improvement
>  Components: build, community
>Reporter: Sean Busbey
>Priority: Minor
>
> We should upgrade our checkstyle version to 8.6 or later so we can use the 
> "match violation message to this regex" feature for suppression. That will 
> allow us to make sure we don't regress on HTrace v3 vs v4 APIs (came up in 
> HBASE-20332).
> We're currently blocked on upgrading to 8.3+ by [checkstyle 
> #5279|https://github.com/checkstyle/checkstyle/issues/5279], a regression 
> that flags our use of both the "separate import groups" and "put static 
> imports over here" configs as an error.





[jira] [Commented] (HBASE-20838) Move all setStorage related UT cases from TestFSUtils to TestCommonFSUtils

2018-07-05 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16533632#comment-16533632
 ] 

Sean Busbey commented on HBASE-20838:
-

I definitely would not like to add an HDFS dependency to hbase-common, even in 
test scope.

I presume there's a reason we can't move the setStoragePolicy stuff from 
CommonFSUtils to FSUtils? What if in precommit we add hbase-server to the set 
of "needs a test run" modules if CommonFSUtils is in the list of modified files?
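A minimal sketch of that precommit idea; the variable names and file paths here 
are hypothetical, not the actual test rig code:

```shell
# Hypothetical precommit fragment: force hbase-server onto the list of
# modules to test whenever CommonFSUtils is among the changed files.
CHANGED_FILES="hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java
hbase-common/pom.xml"
MODULES="hbase-common"
if printf '%s\n' "${CHANGED_FILES}" | grep -q 'CommonFSUtils\.java'; then
  MODULES="${MODULES} hbase-server"
fi
echo "${MODULES}"   # -> hbase-common hbase-server
```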

> Move all setStorage related UT cases from TestFSUtils to TestCommonFSUtils
> --
>
> Key: HBASE-20838
> URL: https://issues.apache.org/jira/browse/HBASE-20838
> Project: HBase
>  Issue Type: Test
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Major
>
> As per 
> [discussed|https://issues.apache.org/jira/browse/HBASE-20691?focusedCommentId=16517662=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16517662]
>  in HBASE-20691, since the setStoragePolicy code is in CommonFSUtils, the 
> test should be in TestCommonFSUtils





[jira] [Commented] (HBASE-20841) Add note about the limitations when running WAL against the recent versions of hadoop

2018-07-03 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16532261#comment-16532261
 ] 

Sean Busbey commented on HBASE-20841:
-

Good message. I'd guess folks who run into problems are unlikely to find it by 
searching, though. Would adding a blurb to the troubleshooting section that has 
some existing known failures help increase their likelihood of finding help?

> Add note about the limitations when running WAL against the recent versions 
> of hadoop
> -
>
> Key: HBASE-20841
> URL: https://issues.apache.org/jira/browse/HBASE-20841
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20841.patch
>
>
> AsyncFSWAL may easily be broken when upgrading the DFSClient, so we 
> introduced a fallback logic in HBASE-20839. And also, WAL can not be written 
> into a directory with EC enabled, but the API for creating a non-EC file in 
> EC directory is not available in hadoop-2.8-, see HBASE-19369 for more 
> details.





[jira] [Updated] (HBASE-20502) Document HBase incompatible with Yarn 2.9.0 and 3.0.x due to YARN-7190

2018-07-03 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20502:

Status: Patch Available  (was: Open)

I've updated the patch:

v1
  - updated for current master branch
  - calls out Hadoop 3.1.0 as X due to Hadoop PMC saying it isn't production 
ready
  - tweak message about presence of YARN-7190

Let's get this documented and then we can push on the Hadoop folks to get 
YARN-7190 into a Hadoop 3.0.z release.

> Document HBase incompatible with Yarn 2.9.0 and 3.0.x due to YARN-7190
> --
>
> Key: HBASE-20502
> URL: https://issues.apache.org/jira/browse/HBASE-20502
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies, documentation
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HBASE-20502.1.patch, HBASE-20502.patch
>
>
> We need to call out hadoop-yarn 2.9.0 and the entire 3.0.x line as explicitly 
> unsupported due to needing YARN-7190 fixed in versions that have ATS 
> available.





[jira] [Updated] (HBASE-20502) Document HBase incompatible with Yarn 2.9.0 and 3.0.x due to YARN-7190

2018-07-03 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20502:

Attachment: HBASE-20502.1.patch

> Document HBase incompatible with Yarn 2.9.0 and 3.0.x due to YARN-7190
> --
>
> Key: HBASE-20502
> URL: https://issues.apache.org/jira/browse/HBASE-20502
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies, documentation
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HBASE-20502.1.patch, HBASE-20502.patch
>
>
> We need to call out hadoop-yarn 2.9.0 and the entire 3.0.x line as explicitly 
> unsupported due to needing YARN-7190 fixed in versions that have ATS 
> available.





[jira] [Commented] (HBASE-20244) NoSuchMethodException when retrieving private method decryptEncryptedDataEncryptionKey from DFSClient

2018-07-03 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16532219#comment-16532219
 ] 

Sean Busbey commented on HBASE-20244:
-

{quote}
Sean Busbey Please see HBASE-20839.
{quote}

this and HBASE-20841 are great, thanks for filing them.

> NoSuchMethodException when retrieving private method 
> decryptEncryptedDataEncryptionKey from DFSClient
> -
>
> Key: HBASE-20244
> URL: https://issues.apache.org/jira/browse/HBASE-20244
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0, 2.0.1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Fix For: 3.0.0, 2.1.0, 2.0.2, 2.2.0
>
> Attachments: 20244.v1.txt, 20244.v1.txt, 20244.v1.txt, 
> HBASE-20244-v1.patch, HBASE-20244.patch
>
>
> I was running unit test against hadoop 3.0.1 RC and saw the following in test 
> output:
> {code}
> ERROR [RS-EventLoopGroup-3-3] 
> asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(267): Couldn't properly 
> initialize access to HDFS internals. Please update  your WAL Provider to not 
> make use of the 'asyncfs' provider. See HBASE-16110 for more information.
> java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
>   at java.lang.Class.getDeclaredMethod(Class.java:2130)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:232)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:262)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:661)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:118)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:720)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:715)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:306)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:341)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
> {code}
> The private method was moved by HDFS-12574 to HdfsKMSUtil with different 
> signature.
> To accommodate the above method movement, it seems we need to call the 
> following method of DFSClient :
> {code}
>   public KeyProvider getKeyProvider() throws IOException {
> {code}
> Since the new decryptEncryptedDataEncryptionKey method has this signature:
> {code}
>   static KeyVersion decryptEncryptedDataEncryptionKey(FileEncryptionInfo
> feInfo, KeyProvider keyProvider) throws IOException {
> {code}
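For illustration only, the reflection-based lookup described above follows a 
pattern like the sketch below. FakeKmsUtil and all names in it are stand-ins 
(the real HdfsKMSUtil needs a full HDFS classpath); this is not the actual 
HBASE-20244 patch:

```java
import java.lang.reflect.Method;

// Illustrative only: FakeKmsUtil stands in for Hadoop's HdfsKMSUtil. The
// point is the reflection pattern a client can use when a method moves to a
// different class and changes signature between Hadoop versions.
public class ReflectionSketch {
  static class FakeKmsUtil {
    private static String decryptEncryptedDataEncryptionKey(String feInfo,
        String keyProvider) {
      return "decrypted:" + feInfo;
    }
  }

  static String decryptViaReflection(String feInfo, String keyProvider) {
    try {
      // Look the relocated method up by name and parameter types, then make
      // it callable even though it is not public.
      Method m = FakeKmsUtil.class.getDeclaredMethod(
          "decryptEncryptedDataEncryptionKey", String.class, String.class);
      m.setAccessible(true);
      // Static method, so the receiver argument is null.
      return (String) m.invoke(null, feInfo, keyProvider);
    } catch (Exception e) {
      throw new RuntimeException("HDFS internals changed; see HBASE-16110", e);
    }
  }

  public static void main(String[] args) {
    System.out.println(decryptViaReflection("feInfo", "keyProvider"));
  }
}
```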





[jira] [Commented] (HBASE-20244) NoSuchMethodException when retrieving private method decryptEncryptedDataEncryptionKey from DFSClient

2018-07-03 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16532218#comment-16532218
 ] 

Sean Busbey commented on HBASE-20244:
-

{quote}
And I'm a little confused by the release lines of hadoop. In HBase we only 
consider 2.7.x as stable, but look at the release page of 
http://hadoop.apache.org/releases.html, they seem to remove the 'not production 
ready' words silently for 2.8.x, 2.9.x, and also 3.0.x. Does this mean this 
release lines are all production ready? Do we need to add them back in our pre 
commit test? And also upgrade the support matrix?
{quote}

It's a bit of a mess, I'm afraid. AFAICT they only mention the 'not ready for 
production' stuff in their release announcement. So I try to link to the 
appropriate email in the [Hadoop support section of the ref 
guide|http://hbase.apache.org/book.html#hadoop].

IIRC, Hadoop 2.8.2's release email said it was production ready, but 2.8.3 is 
what [~stack] actually used when doing HBase 2.0.0 testing, hence the current 
support matrix. (The 2.8.2 going to "NT" was part of HBASE-19983.)

HBASE-20502 started the process of updating our support matrix for 2.9.1 losing 
the "not production ready" note (as well as needing to call out 2.9.0 and 3.0.z 
as X due to classpath problems). IIRC it was delayed because of hope that some 
Hadoop 3.0.z version could be moved from X to NT. I pinged the ticket last week 
because that doesn't seem to be the case. 

Adding 2.8 and/or 2.9 to precommit is tracked in HBASE-19984. Since that ticket 
stalled out, the HBase 2.0.z release line added 2.8.3 as supported, so we should 
probably add 2.8 in.

> NoSuchMethodException when retrieving private method 
> decryptEncryptedDataEncryptionKey from DFSClient
> -
>
> Key: HBASE-20244
> URL: https://issues.apache.org/jira/browse/HBASE-20244
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0, 2.0.1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Fix For: 3.0.0, 2.1.0, 2.0.2, 2.2.0
>
> Attachments: 20244.v1.txt, 20244.v1.txt, 20244.v1.txt, 
> HBASE-20244-v1.patch, HBASE-20244.patch
>
>
> I was running unit test against hadoop 3.0.1 RC and saw the following in test 
> output:
> {code}
> ERROR [RS-EventLoopGroup-3-3] 
> asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(267): Couldn't properly 
> initialize access to HDFS internals. Please update  your WAL Provider to not 
> make use of the 'asyncfs' provider. See HBASE-16110 for more information.
> java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
>   at java.lang.Class.getDeclaredMethod(Class.java:2130)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:232)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:262)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:661)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:118)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:720)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:715)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:306)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:341)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633)
>   at 
> 

[jira] [Commented] (HBASE-20244) NoSuchMethodException when retrieving private method decryptEncryptedDataEncryptionKey from DFSClient

2018-07-03 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16531532#comment-16531532
 ] 

Sean Busbey commented on HBASE-20244:
-

Please link a follow-on jira that addresses my review feedback, repeated here:

{quote}
Please include a note in the Hadoop section of the ref guide calling out that 
users of Hadoop versions with the causes of these breakages will need to change 
their WAL provider to filesystem (for HBase 2.y.z versions prior to this fix).
{quote}

> NoSuchMethodException when retrieving private method 
> decryptEncryptedDataEncryptionKey from DFSClient
> -
>
> Key: HBASE-20244
> URL: https://issues.apache.org/jira/browse/HBASE-20244
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0, 2.0.1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Fix For: 3.0.0, 2.1.0, 2.0.2, 2.2.0
>
> Attachments: 20244.v1.txt, 20244.v1.txt, 20244.v1.txt, 
> HBASE-20244-v1.patch, HBASE-20244.patch
>
>
> I was running unit test against hadoop 3.0.1 RC and saw the following in test 
> output:
> {code}
> ERROR [RS-EventLoopGroup-3-3] 
> asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(267): Couldn't properly 
> initialize access to HDFS internals. Please update  your WAL Provider to not 
> make use of the 'asyncfs' provider. See HBASE-16110 for more information.
> java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
>   at java.lang.Class.getDeclaredMethod(Class.java:2130)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:232)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:262)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:661)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:118)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:720)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:715)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:306)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:341)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
> {code}
> The private method was moved by HDFS-12574 to HdfsKMSUtil with different 
> signature.
> To accommodate the above method movement, it seems we need to call the 
> following method of DFSClient :
> {code}
>   public KeyProvider getKeyProvider() throws IOException {
> {code}
> Since the new decryptEncryptedDataEncryptionKey method has this signature:
> {code}
>   static KeyVersion decryptEncryptedDataEncryptionKey(FileEncryptionInfo
> feInfo, KeyProvider keyProvider) throws IOException {
> {code}





[jira] [Commented] (HBASE-6028) Implement a cancel for in-progress compactions

2018-07-03 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-6028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16530889#comment-16530889
 ] 

Sean Busbey commented on HBASE-6028:


bq. 2 of the rubocop messages are about 'Line too long'. Rest of the ruby code 
has 100 chars wide lines, rubocop expects 80. These messages can be ignored. 

Please make sure there is a jira to correct our rubocop configs if they have the 
incorrect line length. Plenty of the code base doesn't conform to our style 
guidelines; we try to fix them incrementally.

> Implement a cancel for in-progress compactions
> --
>
> Key: HBASE-6028
> URL: https://issues.apache.org/jira/browse/HBASE-6028
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Derek Wollenstein
>Assignee: Mohit Goel
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-6028.master.007.patch, 
> HBASE-6028.master.008.patch, HBASE-6028.master.008.patch
>
>
> Depending on current server load, it can be extremely expensive to run 
> periodic minor / major compactions.  It would be helpful to have a feature 
> where a user could use the shell or a client tool to explicitly cancel an 
> in-progress compactions.  This would allow a system to recover when too many 
> regions became eligible for compactions at once





[jira] [Updated] (HBASE-20837) Make IDE configuration for import order match that in our checkstyle module

2018-07-02 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20837:

Summary: Make IDE configuration for import order match that in our 
checkstyle module  (was: Sync import orders between IDE and checkstyle module)

> Make IDE configuration for import order match that in our checkstyle module
> ---
>
> Key: HBASE-20837
> URL: https://issues.apache.org/jira/browse/HBASE-20837
> Project: HBase
>  Issue Type: Improvement
>  Components: community
>Affects Versions: 3.0.0, 2.0.1, 1.4.5
>Reporter: Tak Lon (Stephen) Wu
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.2.0
>
> Attachments: image-2018-07-02-15-27-41-140.png, 
> image-2018-07-02-16-33-18-604.png
>
>
> While working on HBASE-20557 contribution, we figured out that the checkstyle 
> build target (ImportOrder's `groups` 
> [http://checkstyle.sourceforge.net/config_imports.html] ) was different from 
> the development supported IDE (e.g. IntelliJ and Eclipse) formatter, we would 
> provide a fix here to sync between 
> [dev-support/hbase_eclipse_formatter.xml|https://github.com/apache/hbase/blob/master/dev-support/hbase_eclipse_formatter.xml]
>  and 
> [hbase/checkstyle.xml|https://github.com/apache/hbase/blob/master/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml]
> This might need to backport the changes of master to branch-1 and branch-2 as 
> well.
> Before this change, this is what checkstyle is expecting for import order
>  
> {code:java}
> import com.google.common.annotations.VisibleForTesting;
> import java.io.IOException;
> import java.util.ArrayList;
> import java.util.List;
> import java.util.Map;
> import org.apache.commons.logging.Log;
> import org.apache.commons.logging.LogFactory;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.classification.InterfaceAudience;
> import org.apache.hadoop.hbase.conf.ConfigurationObserver;{code}
>  
> And the proposed import order with the respect to HBASE-19262 and HBASE-19552 
> should be
>  
> !image-2018-07-02-16-33-18-604.png!  





[jira] [Updated] (HBASE-20837) Sync import orders between IDE and checkstyle module

2018-07-02 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20837:

Component/s: community

> Sync import orders between IDE and checkstyle module
> 
>
> Key: HBASE-20837
> URL: https://issues.apache.org/jira/browse/HBASE-20837
> Project: HBase
>  Issue Type: Improvement
>  Components: community
>Affects Versions: 3.0.0, 2.0.1, 1.4.5
>Reporter: Tak Lon (Stephen) Wu
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.2.0
>
> Attachments: image-2018-07-02-15-27-41-140.png, 
> image-2018-07-02-16-33-18-604.png
>
>
> While working on HBASE-20557 contribution, we figured out that the checkstyle 
> build target (ImportOrder's `groups` 
> [http://checkstyle.sourceforge.net/config_imports.html] ) was different from 
> the development supported IDE (e.g. IntelliJ and Eclipse) formatter, we would 
> provide a fix here to sync between 
> [dev-support/hbase_eclipse_formatter.xml|https://github.com/apache/hbase/blob/master/dev-support/hbase_eclipse_formatter.xml]
>  and 
> [hbase/checkstyle.xml|https://github.com/apache/hbase/blob/master/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml]
> This might need to backport the changes of master to branch-1 and branch-2 as 
> well.
> Before this change, this is what checkstyle is expecting for import order
>  
> {code:java}
> import com.google.common.annotations.VisibleForTesting;
> import java.io.IOException;
> import java.util.ArrayList;
> import java.util.List;
> import java.util.Map;
> import org.apache.commons.logging.Log;
> import org.apache.commons.logging.LogFactory;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.classification.InterfaceAudience;
> import org.apache.hadoop.hbase.conf.ConfigurationObserver;{code}
>  
> And the proposed import order with the respect to HBASE-19262 and HBASE-19552 
> should be
>  
> !image-2018-07-02-16-33-18-604.png!  





[jira] [Updated] (HBASE-20837) Sync import orders between IDE and checkstyle module

2018-07-02 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20837:

Fix Version/s: 2.2.0
   1.5.0
   3.0.0

> Sync import orders between IDE and checkstyle module
> 
>
> Key: HBASE-20837
> URL: https://issues.apache.org/jira/browse/HBASE-20837
> Project: HBase
>  Issue Type: Improvement
>  Components: community
>Affects Versions: 3.0.0, 2.0.1, 1.4.5
>Reporter: Tak Lon (Stephen) Wu
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 2.2.0
>
> Attachments: image-2018-07-02-15-27-41-140.png, 
> image-2018-07-02-16-33-18-604.png
>
>
> While working on HBASE-20557 contribution, we figured out that the checkstyle 
> build target (ImportOrder's `groups` 
> [http://checkstyle.sourceforge.net/config_imports.html] ) was different from 
> the formatters of the supported development IDEs (e.g. IntelliJ and Eclipse);
> we provide a fix here to sync between 
> [dev-support/hbase_eclipse_formatter.xml|https://github.com/apache/hbase/blob/master/dev-support/hbase_eclipse_formatter.xml]
>  and 
> [hbase/checkstyle.xml|https://github.com/apache/hbase/blob/master/hbase-checkstyle/src/main/resources/hbase/checkstyle.xml]
> We might need to backport the changes from master to branch-1 and branch-2 as 
> well.
> Before this change, this is what checkstyle is expecting for import order
>  
> {code:java}
> import com.google.common.annotations.VisibleForTesting;
> import java.io.IOException;
> import java.util.ArrayList;
> import java.util.List;
> import java.util.Map;
> import org.apache.commons.logging.Log;
> import org.apache.commons.logging.LogFactory;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.classification.InterfaceAudience;
> import org.apache.hadoop.hbase.conf.ConfigurationObserver;{code}
>  
> And the proposed import order with respect to HBASE-19262 and HBASE-19552 
> should be
>  
> !image-2018-07-02-16-33-18-604.png!  





[jira] [Commented] (HBASE-20244) NoSuchMethodException when retrieving private method decryptEncryptedDataEncryptionKey from DFSClient

2018-07-02 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16530756#comment-16530756
 ] 

Sean Busbey commented on HBASE-20244:
-

Please include a note in the Hadoop section of the ref guide calling out that 
users of Hadoop versions with the causes of these breakages will need to change 
their WAL provider to {{filesystem}}.
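For readers who need to make that change, the WAL provider is controlled from hbase-site.xml; a minimal sketch using the {{filesystem}} value named above (property name as documented in the HBase ref guide):

```xml
<!-- hbase-site.xml: switch the WAL implementation away from 'asyncfs' -->
<property>
  <name>hbase.wal.provider</name>
  <value>filesystem</value>
</property>
```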

> NoSuchMethodException when retrieving private method 
> decryptEncryptedDataEncryptionKey from DFSClient
> -
>
> Key: HBASE-20244
> URL: https://issues.apache.org/jira/browse/HBASE-20244
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0, 2.0.1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Fix For: 3.0.0, 2.1.0, 2.0.2, 2.2.0
>
> Attachments: 20244.v1.txt, 20244.v1.txt, 20244.v1.txt
>
>
> I was running unit test against hadoop 3.0.1 RC and saw the following in test 
> output:
> {code}
> ERROR [RS-EventLoopGroup-3-3] 
> asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(267): Couldn't properly 
> initialize access to HDFS internals. Please update  your WAL Provider to not 
> make use of the 'asyncfs' provider. See HBASE-16110 for more information.
> java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
>   at java.lang.Class.getDeclaredMethod(Class.java:2130)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:232)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:262)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:661)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:118)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:720)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:715)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:306)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:341)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
> {code}
> The private method was moved by HDFS-12574 to HdfsKMSUtil with different 
> signature.
> To accommodate the above method movement, it seems we need to call the 
> following method of DFSClient :
> {code}
>   public KeyProvider getKeyProvider() throws IOException {
> {code}
> Since the new decryptEncryptedDataEncryptionKey method has this signature:
> {code}
>   static KeyVersion decryptEncryptedDataEncryptionKey(FileEncryptionInfo
> feInfo, KeyProvider keyProvider) throws IOException {
> {code}
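The fix amounts to a reflective lookup with a fallback: try the method's old home, and if it is gone, look it up on the class it moved to. A self-contained sketch of that pattern (OldHome/NewHome and the method name are illustrative stand-ins, not the actual HDFS classes):

```java
import java.lang.reflect.Method;

public class ReflectionFallbackDemo {
  // Stand-in for DFSClient after the method was moved away (HDFS-12574).
  static class OldHome { }

  // Stand-in for HdfsKMSUtil, the method's new holder class.
  static class NewHome {
    static String decrypt(String key) { return "decrypted:" + key; }
  }

  static String decryptViaReflection(String key) {
    try {
      Method m;
      try {
        // Try the old private location first.
        m = OldHome.class.getDeclaredMethod("decrypt", String.class);
      } catch (NoSuchMethodException e) {
        // Old location removed -> fall back to the new holder class.
        m = NewHome.class.getDeclaredMethod("decrypt", String.class);
      }
      m.setAccessible(true);
      return (String) m.invoke(null, key);
    } catch (ReflectiveOperationException e) {
      throw new IllegalStateException("method found in neither location", e);
    }
  }

  public static void main(String[] args) {
    String out = decryptViaReflection("k1");
    if (!"decrypted:k1".equals(out)) throw new AssertionError(out);
    System.out.println(out); // prints "decrypted:k1"
  }
}
```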





[jira] [Comment Edited] (HBASE-20244) NoSuchMethodException when retrieving private method decryptEncryptedDataEncryptionKey from DFSClient

2018-07-02 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16530756#comment-16530756
 ] 

Sean Busbey edited comment on HBASE-20244 at 7/3/18 3:02 AM:
-

Please include a note in the Hadoop section of the ref guide calling out that 
users of Hadoop versions with the causes of these breakages will need to change 
their WAL provider to {{filesystem}} (for HBase 2.y.z versions prior to this 
fix).


was (Author: busbey):
Please include a note in the Hadoop section of the ref guide calling out that 
users of Hadoop versions with the causes of these breakages will need to change 
their WAL provider to {{filesystem}}.

> NoSuchMethodException when retrieving private method 
> decryptEncryptedDataEncryptionKey from DFSClient
> -
>
> Key: HBASE-20244
> URL: https://issues.apache.org/jira/browse/HBASE-20244
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0, 2.0.1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Fix For: 3.0.0, 2.1.0, 2.0.2, 2.2.0
>
> Attachments: 20244.v1.txt, 20244.v1.txt, 20244.v1.txt
>
>
> I was running unit test against hadoop 3.0.1 RC and saw the following in test 
> output:
> {code}
> ERROR [RS-EventLoopGroup-3-3] 
> asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(267): Couldn't properly 
> initialize access to HDFS internals. Please update  your WAL Provider to not 
> make use of the 'asyncfs' provider. See HBASE-16110 for more information.
> java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
>   at java.lang.Class.getDeclaredMethod(Class.java:2130)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:232)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:262)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:661)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:118)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:720)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:715)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:306)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:341)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
> {code}
> The private method was moved by HDFS-12574 to HdfsKMSUtil with different 
> signature.
> To accommodate the above method movement, it seems we need to call the 
> following method of DFSClient :
> {code}
>   public KeyProvider getKeyProvider() throws IOException {
> {code}
> Since the new decryptEncryptedDataEncryptionKey method has this signature:
> {code}
>   static KeyVersion decryptEncryptedDataEncryptionKey(FileEncryptionInfo
> feInfo, KeyProvider keyProvider) throws IOException {
> {code}





[jira] [Updated] (HBASE-20244) NoSuchMethodException when retrieving private method decryptEncryptedDataEncryptionKey from DFSClient

2018-07-02 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20244:

Affects Version/s: 2.0.0

> NoSuchMethodException when retrieving private method 
> decryptEncryptedDataEncryptionKey from DFSClient
> -
>
> Key: HBASE-20244
> URL: https://issues.apache.org/jira/browse/HBASE-20244
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0, 2.0.1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Fix For: 3.0.0, 2.1.0, 2.0.2, 2.2.0
>
> Attachments: 20244.v1.txt, 20244.v1.txt, 20244.v1.txt
>
>
> I was running unit test against hadoop 3.0.1 RC and saw the following in test 
> output:
> {code}
> ERROR [RS-EventLoopGroup-3-3] 
> asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(267): Couldn't properly 
> initialize access to HDFS internals. Please update  your WAL Provider to not 
> make use of the 'asyncfs' provider. See HBASE-16110 for more information.
> java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
>   at java.lang.Class.getDeclaredMethod(Class.java:2130)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:232)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:262)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:661)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:118)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:720)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:715)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:306)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:341)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
> {code}
> The private method was moved by HDFS-12574 to HdfsKMSUtil with different 
> signature.
> To accommodate the above method movement, it seems we need to call the 
> following method of DFSClient :
> {code}
>   public KeyProvider getKeyProvider() throws IOException {
> {code}
> Since the new decryptEncryptedDataEncryptionKey method has this signature:
> {code}
>   static KeyVersion decryptEncryptedDataEncryptionKey(FileEncryptionInfo
> feInfo, KeyProvider keyProvider) throws IOException {
> {code}





[jira] [Updated] (HBASE-20244) NoSuchMethodException when retrieving private method decryptEncryptedDataEncryptionKey from DFSClient

2018-07-02 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20244:

Affects Version/s: 2.0.1

> NoSuchMethodException when retrieving private method 
> decryptEncryptedDataEncryptionKey from DFSClient
> -
>
> Key: HBASE-20244
> URL: https://issues.apache.org/jira/browse/HBASE-20244
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0, 2.0.1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Fix For: 3.0.0, 2.1.0, 2.0.2, 2.2.0
>
> Attachments: 20244.v1.txt, 20244.v1.txt, 20244.v1.txt
>
>
> I was running unit test against hadoop 3.0.1 RC and saw the following in test 
> output:
> {code}
> ERROR [RS-EventLoopGroup-3-3] 
> asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(267): Couldn't properly 
> initialize access to HDFS internals. Please update  your WAL Provider to not 
> make use of the 'asyncfs' provider. See HBASE-16110 for more information.
> java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
>   at java.lang.Class.getDeclaredMethod(Class.java:2130)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:232)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:262)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:661)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:118)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:720)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:715)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:306)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:341)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
> {code}
> The private method was moved by HDFS-12574 to HdfsKMSUtil with different 
> signature.
> To accommodate the above method movement, it seems we need to call the 
> following method of DFSClient :
> {code}
>   public KeyProvider getKeyProvider() throws IOException {
> {code}
> Since the new decryptEncryptedDataEncryptionKey method has this signature:
> {code}
>   static KeyVersion decryptEncryptedDataEncryptionKey(FileEncryptionInfo
> feInfo, KeyProvider keyProvider) throws IOException {
> {code}





[jira] [Updated] (HBASE-20244) NoSuchMethodException when retrieving private method decryptEncryptedDataEncryptionKey from DFSClient

2018-07-02 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20244:

Component/s: wal

> NoSuchMethodException when retrieving private method 
> decryptEncryptedDataEncryptionKey from DFSClient
> -
>
> Key: HBASE-20244
> URL: https://issues.apache.org/jira/browse/HBASE-20244
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0, 2.0.1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Fix For: 3.0.0, 2.1.0, 2.0.2, 2.2.0
>
> Attachments: 20244.v1.txt, 20244.v1.txt, 20244.v1.txt
>
>
> I was running unit test against hadoop 3.0.1 RC and saw the following in test 
> output:
> {code}
> ERROR [RS-EventLoopGroup-3-3] 
> asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(267): Couldn't properly 
> initialize access to HDFS internals. Please update  your WAL Provider to not 
> make use of the 'asyncfs' provider. See HBASE-16110 for more information.
> java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
>   at java.lang.Class.getDeclaredMethod(Class.java:2130)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:232)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:262)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:661)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:118)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:720)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:715)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:306)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:341)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
> {code}
> The private method was moved by HDFS-12574 to HdfsKMSUtil with different 
> signature.
> To accommodate the above method movement, it seems we need to call the 
> following method of DFSClient :
> {code}
>   public KeyProvider getKeyProvider() throws IOException {
> {code}
> Since the new decryptEncryptedDataEncryptionKey method has this signature:
> {code}
>   static KeyVersion decryptEncryptedDataEncryptionKey(FileEncryptionInfo
> feInfo, KeyProvider keyProvider) throws IOException {
> {code}





[jira] [Commented] (HBASE-16110) AsyncFS WAL doesn't work with Hadoop 2.8+

2018-07-02 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-16110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16530352#comment-16530352
 ] 

Sean Busbey commented on HBASE-16110:
-

Hi [~timoha]!

Please bring issues running HBase to the user@hbase mailing list. JIRA is for 
tracking specific work tasks and this one is unlikely to get responses since it 
was resolved a couple of years ago.

> AsyncFS WAL doesn't work with Hadoop 2.8+
> -
>
> Key: HBASE-16110
> URL: https://issues.apache.org/jira/browse/HBASE-16110
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-16110-v1.patch, HBASE-16110.patch
>
>
> The async wal implementation doesn't work with Hadoop 2.8+. Fails compilation 
> and will fail running.





[jira] [Comment Edited] (HBASE-16110) AsyncFS WAL doesn't work with Hadoop 2.8+

2018-07-02 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-16110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16530323#comment-16530323
 ] 

Sean Busbey edited comment on HBASE-16110 at 7/2/18 7:39 PM:
-

Hello, we are running HBase 2.0.1 with official Hadoop 2.8.4 jars and hadoop 
2.8.4 client 
([http://central.maven.org/maven2/org/apache/hadoop/hadoop-client/2.8.4/]). Got 
the following exception on regionserver which brings it down:

{code}
18/07/02 18:51:06 WARN concurrent.DefaultPromise: An exception was thrown by 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete()
java.lang.Error: Couldn't properly initialize access to HDFS internals. Please 
update your WAL Provider to not make use of the 'asyncfs' provider. See 
HBASE-16110 for more information.
     at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:268)
     at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:661)
     at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:118)
     at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:720)
     at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:715)
     at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
     at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
     at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
     at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
     at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
     at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
     at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:638)
     at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:676)
     at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:552)
     at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:394)
     at 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304)
     at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
     at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
     at java.lang.Thread.run(Thread.java:748)
 Caused by: java.lang.NoSuchMethodException: 
org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
     at java.lang.Class.getDeclaredMethod(Class.java:2130)
     at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:232)
     at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:262)
     ... 18 more
{code}

 

FYI, we don't have encryption enabled. Let me know if you need more info about 
our setup.


was (Author: timoha):
Hello, we are running HBase 2.0.1 with official Hadoop 2.8.4 jars and hadoop 
2.8.4 client 
([http://central.maven.org/maven2/org/apache/hadoop/hadoop-client/2.8.4/]). Got 
the following exception on regionserver which brings it down:


{{ 18/07/02 18:51:06 WARN concurrent.DefaultPromise: An exception was thrown by 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete()}}
{{ java.lang.Error: Couldn't properly initialize access to HDFS internals. 
Please update your WAL Provider to not make use of the 'asyncfs' provider. See 
HBASE-16110 for more information.}}
{{     at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:268)}}
{{     at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:661)}}
{{     at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:118)}}
{{     at 

[jira] [Updated] (HBASE-20834) The jenkins on http://104.198.223.121:8080/job/HBASE-Flaky-Tests/ is broken

2018-07-02 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20834:

Component/s: test
 community

> The jenkins on http://104.198.223.121:8080/job/HBASE-Flaky-Tests/ is broken
> ---
>
> Key: HBASE-20834
> URL: https://issues.apache.org/jira/browse/HBASE-20834
> Project: HBase
>  Issue Type: Bug
>  Components: community, test
>Reporter: Duo Zhang
>Priority: Major
>
> It is used by our flakey test finder to collect flakey tests.





[jira] [Commented] (HBASE-20331) clean up shaded packaging for 2.1

2018-07-02 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529996#comment-16529996
 ] 

Sean Busbey commented on HBASE-20331:
-

only the docs left to do, but I'm caught up in a $dayjob thing at the moment. 
fine to bump out the resolution, IMO. easy enough to add docs in a 2.1.1 
release.

> clean up shaded packaging for 2.1
> -
>
> Key: HBASE-20331
> URL: https://issues.apache.org/jira/browse/HBASE-20331
> Project: HBase
>  Issue Type: Umbrella
>  Components: Client, mapreduce, shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 2.2.0
>
>
> polishing pass on shaded modules for 2.0 based on trying to use them in more 
> contexts.





[jira] [Commented] (HBASE-20691) Storage policy should allow deferring to HDFS

2018-07-02 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16529939#comment-16529939
 ] 

Sean Busbey commented on HBASE-20691:
-

{code}
482 } catch (IOException e) {
483   // should never arrive here
484 }
{code}

This needs a log message. Since we expect it to never happen, probably at WARN 
or ERROR.

{code}
try {
366   FSUtils.setStoragePolicy(testFs, new Path("non-exist"), "HOT", true);
367   Assert.fail("Should have invoked the FS API but haven't");
368 } catch (IOException e) {
369   // expected given an invalid path
370 }
{code}

Aren't we expecting an IOException because the storage policy setting is a 
legit one and so it goes to the test FS implementation? If we can also get an 
IOException due to a bad path, then we need some way to differentiate which is 
happening.
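To make the two review points concrete, a self-contained sketch (the setStoragePolicy stand-in below is hypothetical, not FSUtils' real signature): log the "should never happen" IOException instead of swallowing it, and inspect the exception to tell an invalid-path failure apart from other IO failures.

```java
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class StoragePolicyReviewDemo {
  private static final Logger LOG =
      Logger.getLogger(StoragePolicyReviewDemo.class.getName());

  // Hypothetical stand-in for FSUtils.setStoragePolicy: rejects bad paths.
  static void setStoragePolicy(String path, String policy) throws IOException {
    if (path == null || path.startsWith("non-exist")) {
      throw new IOException("invalid path: " + path);
    }
  }

  public static void main(String[] args) {
    try {
      setStoragePolicy("non-exist", "HOT");
      throw new AssertionError("Should have invoked the FS API but haven't");
    } catch (IOException e) {
      // Review point 1: never swallow silently -- log at WARN since this
      // branch is expected to be unreachable in production.
      LOG.log(Level.WARNING, "failed to set storage policy", e);
      // Review point 2: differentiate the failure mode instead of treating
      // every IOException the same.
      if (!e.getMessage().startsWith("invalid path")) {
        throw new AssertionError("unexpected failure: " + e.getMessage());
      }
      System.out.println("caught expected: " + e.getMessage());
    }
  }
}
```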

> Storage policy should allow deferring to HDFS
> -
>
> Key: HBASE-20691
> URL: https://issues.apache.org/jira/browse/HBASE-20691
> Project: HBase
>  Issue Type: Bug
>  Components: Filesystem Integration, wal
>Affects Versions: 1.5.0, 2.0.0
>Reporter: Sean Busbey
>Assignee: Yu Li
>Priority: Minor
> Fix For: 2.1.0, 1.5.0
>
> Attachments: HBASE-20691.patch, HBASE-20691.v2.patch, 
> HBASE-20691.v3.patch, HBASE-20691.v4.patch, HBASE-20691.v5.patch, 
> HBASE-20691.v6.patch
>
>
> In HBase 1.1 - 1.4 we can defer storage policy decisions to HDFS by using 
> "NONE" as the storage policy in hbase configs.
> As described on this [dev@hbase thread "WAL storage policies and interactions 
> with Hadoop admin 
> tools."|https://lists.apache.org/thread.html/d220726fab4bb4c9e117ecc8f44246402dd97bfc986a57eb2237@%3Cdev.hbase.apache.org%3E]
>  we no longer have that option in 2.0.0 and 1.5.0 (as the branch is now). 
> Additionally, we can't set the policy to HOT in the event that HDFS has 
> changed the policy for a parent directory of our WALs.
> We should put back that ability. Presuming this is done by re-adopting the 
> "NONE" placeholder variable, we need to ensure that value doesn't get passed 
> to HDFS APIs. Since it isn't a valid storage policy attempting to use it will 
> cause a bunch of logging churn (which will be a regression of the problem 
> HBASE-18118 sought to fix).





[jira] [Commented] (HBASE-20502) Document HBase incompatible with Yarn 2.9.0 and 3.0.x due to YARN-7190

2018-06-27 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16525568#comment-16525568
 ] 

Sean Busbey commented on HBASE-20502:
-

I'd like to get this update in place. 3.0.3 is now out. The release notes claim 
that YARN-7190 is not included. [~jojochuang] or [~yzhangal] can y'all confirm 
one way or another and if we're not going to blacklist all of 3.0.z make sure 
there's a blocker on the next point release that gets it included?

> Document HBase incompatible with Yarn 2.9.0 and 3.0.x due to YARN-7190
> --
>
> Key: HBASE-20502
> URL: https://issues.apache.org/jira/browse/HBASE-20502
> Project: HBase
>  Issue Type: Sub-task
>  Components: dependencies, documentation
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HBASE-20502.patch
>
>
> We need to call out hadoop-yarn 2.9.0 and the entire 3.0.x line as explicitly 
> unsupported due to needing YARN-7190 fixed in versions that have ATS 
> available.





[jira] [Commented] (HBASE-6028) Implement a cancel for in-progress compactions

2018-06-26 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-6028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524372#comment-16524372
 ] 

Sean Busbey commented on HBASE-6028:


possible places:

http://hbase.apache.org/book.html#ops.regionmgt.majorcompact

http://hbase.apache.org/book.html#compaction

> Implement a cancel for in-progress compactions
> --
>
> Key: HBASE-6028
> URL: https://issues.apache.org/jira/browse/HBASE-6028
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: Derek Wollenstein
>Assignee: Mohit Goel
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-6028.master.006.patch
>
>
> Depending on current server load, it can be extremely expensive to run 
> periodic minor / major compactions.  It would be helpful to have a feature 
> where a user could use the shell or a client tool to explicitly cancel an 
> in-progress compaction.  This would allow a system to recover when too many 
> regions become eligible for compaction at once.





[jira] [Commented] (HBASE-6028) Implement a cancel for in-progress compactions

2018-06-26 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-6028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524364#comment-16524364
 ] 

Sean Busbey commented on HBASE-6028:


please add docs.






[jira] [Commented] (HBASE-20553) Add dependency CVE checking to nightly tests

2018-06-25 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522413#comment-16522413
 ] 

Sean Busbey commented on HBASE-20553:
-

sure. if you've got it working locally, please post a patch and I'll make a 
branch based on this issue. the patch will need to include the jira name and 
the branch name, so it'll have something terrible like 
"HBASE-20553-HBASE-20553.v0.patch" where the "v0" part increments as you make 
changes.

> Add dependency CVE checking to nightly tests
> 
>
> Key: HBASE-20553
> URL: https://issues.apache.org/jira/browse/HBASE-20553
> Project: HBase
>  Issue Type: Umbrella
>  Components: dependencies
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Assignee: Sakthi
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
>
> We should proactively work to flag dependencies with known CVEs so that we 
> can then update them early in our development instead of near a release.
> YETUS-441 is working to add a plugin for this, we should grab a copy early to 
> make sure it works for us.
> Rough outline:
> 1. [install yetus locally|http://yetus.apache.org/downloads/]
> 2. [install the dependency-check 
> cli|https://www.owasp.org/index.php/OWASP_Dependency_Check] (homebrew 
> instructions on right hand margin)
> 3. Get a local copy of the OWASP datafile ({{dependency-check --updateonly 
> --data /some/local/path/to/dir}})
> 4. Run {{hbase_nightly_yetus.sh}} using matching environment variables from 
> the “yetus general check” (currently [line #126 in our nightly 
> Jenkinsfile|https://github.com/apache/hbase/blob/master/dev-support/Jenkinsfile#L126])
> 5. Grab the plugin definition and suppression file from from YETUS-441
> 6. put the plugin definition either in a directory of dev-support or into the 
> hbase-personality.sh directly
> 7. Re-run {{hbase_nightly_yetus.sh}} to verify that the plugin results show 
> up. (Probably this will involve adding new pointers for “where is the 
> suppression file”, “where is the OWASP datafile” and pointing them somewhere 
> locally.)
> Once all of that is in place we’ll get the changes needed into a branch that 
> we can test out. Over in YETUS-441 I’ll need to add a jenkins job that’ll 
> handle periodically updating a copy of the datafile for the OWASP dependency 
> checker. Presuming I have that in place by the time we have a nightly branch 
> to check this out, then we’ll also need to update our nightly Jenkinsfile to 
> fetch the data file from that job.
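Steps 2-3 of the outline above might look like the following sketch; the data directory is an assumed location (the real one is whatever the nightly job points `--data` at), and the invocation is guarded in case the CLI isn't installed.

```shell
#!/usr/bin/env bash
# Sketch of steps 2-3: keep a local copy of the OWASP datafile up to date.
# DATA_DIR is an assumption, not a project convention.
DATA_DIR="${HOME}/.owasp-dependency-check-data"
mkdir -p "${DATA_DIR}"

# Refresh the datafile only if the dependency-check CLI is on the PATH.
if command -v dependency-check >/dev/null 2>&1; then
  dependency-check --updateonly --data "${DATA_DIR}"
fi
```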





[jira] [Commented] (HBASE-20691) Storage policy should allow deferring to HDFS

2018-06-25 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16522348#comment-16522348
 ] 

Sean Busbey commented on HBASE-20691:
-

{quote}
Is it too tight coupling to do something like...

{code}
static void setStoragePolicy(final FileSystem fs, final Path path, boolean 
throwException) throws IOException {
final String storagePolicy = fs.getConf().get(HConstants.WAL_STORAGE_POLICY, 
HConstants.DEFAULT_WAL_STORAGE_POLICY);
{code}
Can see argument going either way, probably personal preference at that point.
{quote}

Too tight. Doesn't the per-CF storage policy code use this same method?






[jira] [Commented] (HBASE-20691) Storage policy should allow deferring to HDFS

2018-06-21 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519691#comment-16519691
 ] 

Sean Busbey commented on HBASE-20691:
-

{code}
486 if (storagePolicy.equals(HConstants.DEFAULT_WAL_STORAGE_POLICY)) {
487   if (LOG.isTraceEnabled()) {
488 LOG.trace("default policy of " + storagePolicy + " requested, exiting early.");
489   }
490   return;
491 }
{code}

This check should be against DEFER_TO_HDFS_STORAGE_POLICY instead.
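A minimal sketch of the corrected early-return, assuming the placeholder constant's value is "NONE" per the issue description (the class and method names here are illustrations, not the CommonFSUtils API):

```java
public class StoragePolicyCheck {
    // Assumed value mirroring the proposed HConstants placeholder:
    // "NONE" means "defer to whatever HDFS already has configured".
    static final String DEFER_TO_HDFS_STORAGE_POLICY = "NONE";

    /**
     * True when the requested policy should be passed through to the
     * FileSystem. The comparison is against the defer placeholder, not the
     * configured default, so changing the default WAL policy never changes
     * which values reach HDFS.
     */
    static boolean shouldInvokeFileSystem(String storagePolicy) {
        return !DEFER_TO_HDFS_STORAGE_POLICY.equals(storagePolicy);
    }
}
```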






[jira] [Commented] (HBASE-20691) Storage policy should allow deferring to HDFS

2018-06-21 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519687#comment-16519687
 ] 

Sean Busbey commented on HBASE-20691:
-

{quote}
Ah I see, the test case here simply tries to prove the HDFS api won't be called 
if we try to set the storage policy to default, and vice versa. Please check 
the new patch and I think it will be much more clear. Please note that the 
IOException thrown will be caught and logged as a warning like below (I guess 
you ignored the UT result I pasted above sir, so allow me to repeat):

{code}
2018-06-08 22:59:39,063 WARN  [Time-limited test] util.CommonFSUtils(572): 
Unable to set storagePolicy=HOT for path=non-exist. DEBUG log level might have 
more details.
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.hbase.util.CommonFSUtils.invokeSetStoragePolicy(CommonFSUtils.java:563)
at 
org.apache.hadoop.hbase.util.CommonFSUtils.setStoragePolicy(CommonFSUtils.java:524)
at 
org.apache.hadoop.hbase.util.CommonFSUtils.setStoragePolicy(CommonFSUtils.java:484)
at 
org.apache.hadoop.hbase.util.TestFSUtils.verifyNoHDFSApiInvocationForDefaultPolicy(TestFSUtils.java:356)
at 
org.apache.hadoop.hbase.util.TestFSUtils.testSetStoragePolicyDefault(TestFSUtils.java:341)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
...
Caused by: java.io.IOException: The setStoragePolicy method is invoked 
unexpectedly
at 
org.apache.hadoop.hbase.util.TestFSUtils$AlwaysFailSetStoragePolicyFileSystem.setStoragePolicy(TestFSUtils.java:364)
... 30 more
{code}
{quote}

Oh I see. We need the unit test to fail if the call goes through. Could we 
refactor CommonFSUtils to have a package-private method that allows 
IOExceptions out, have the public access method wrap the new method to do the 
catch/logging, and then have the test use the one that throws?

If the unit test can't fail it will have very limited utility; the vast 
majority of folks aren't going to examine log output.
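A rough sketch of that refactor with hypothetical names (FsUtilsSketch and FakeFs are stand-ins, not the real CommonFSUtils or FileSystem API): the package-private variant lets the IOException escape so a unit test can fail hard, while the public wrapper keeps the existing catch-and-log behaviour.

```java
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class FsUtilsSketch {
    private static final Logger LOG = Logger.getLogger(FsUtilsSketch.class.getName());

    /** Hypothetical stand-in for a FileSystem whose setStoragePolicy always fails. */
    static class FakeFs {
        void setStoragePolicy(String path, String policy) throws IOException {
            throw new IOException("setStoragePolicy invoked unexpectedly");
        }
    }

    // Package-private variant: lets the IOException out, so tests can assert
    // directly on whether the FS API was invoked.
    static void setStoragePolicyOrThrow(FakeFs fs, String path, String policy)
            throws IOException {
        fs.setStoragePolicy(path, policy);
    }

    // Public entry point: wraps the throwing variant and keeps the existing
    // catch-and-log behaviour for production callers.
    public static boolean setStoragePolicy(FakeFs fs, String path, String policy) {
        try {
            setStoragePolicyOrThrow(fs, path, policy);
            return true;
        } catch (IOException e) {
            LOG.log(Level.WARNING,
                "Unable to set storagePolicy=" + policy + " for path=" + path, e);
            return false;
        }
    }
}
```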






[jira] [Comment Edited] (HBASE-20691) Storage policy should allow deferring to HDFS

2018-06-21 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518968#comment-16518968
 ] 

Sean Busbey edited comment on HBASE-20691 at 6/21/18 6:15 AM:
--

{quote}
bq. CommonFSUtils should be updated to check against 
DEFER_TO_HDFS_STORAGE_POLICY

It should check against the default storage policy and let through if so. 
Currently the default policy is set to "NONE", and we give it a comprehensive 
constant name DEFER_TO_HDFS_STORAGE_POLICY
{quote}

Right now the code in CommonFSUtils just checks against the default passed into 
the call to {{setStoragePolicy}}, not against any constant. That's incorrect. 
E.g. if someone calls {{CommonFSUtils.setStoragePolicy(fs, conf, 
"get.this.storage.policy.key", "HOT")}} then if "get.this.storage.policy.key" 
isn't set it should a) use the value "HOT" and b) actually pass that value to 
the FileSystem implementation. When checking against a constant, the one it 
should check is DEFER_TO_HDFS_STORAGE_POLICY because changing our default 
policy for WALs shouldn't change which values are passed through to the 
FileSystem instance.

{quote}
bq. the test should be in TestCommonFSUtils

Agree that they should be. Checking commit history, the set-storage related 
test cases were added in TestFSUtils by HBASE-13498 and somehow left there 
during the code refactor in HBASE-18784... How about we open a follow-on JIRA 
to move all set-storage test cases into TestCommonFSUtils after this one?
{quote}

Sure. please link it here.

{quote}
bq. Shouldn't this second invocation have thrown an IOException?

Personally I think it's OK to let it fail silently with some warning log if the 
given policy is invalid or the set policy attempt failed, as the current 
implementation does. Throwing an IOE and cause region fail to open is too much 
IMHO.
{quote}

I don't mean in the region server, I mean just here in this test. the second 
call effectively uses "HOT" as the storage policy. That's a policy that we 
should give to the underlying FileSystem. The call in the test passes 
{{testFs}} as the FileSystem instance, which is an instance of the newly added 
"throw an IOException if anyone calls setStoragePolicy" FileSystem. If the call 
doesn't throw an exception then either a) CommonFSUtils isn't passing values to 
the FileSystem it should or b) the newly added FileSystem doesn't actually 
throw an exception when called.

if the problem is a) then we haven't solved part of the problem this jira 
addresses. if the problem is b) then we also don't have confirmation that 
CommonFSUtils didn't pass the "NONE" value along to the FileSystem 
implementation.





[jira] [Commented] (HBASE-20764) build broken when latest commit is gpg signed

2018-06-20 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518675#comment-16518675
 ] 

Sean Busbey commented on HBASE-20764:
-

so to test this I'll need the change and a signed commit, anything else?

> build broken when latest commit is gpg signed
> -
>
> Key: HBASE-20764
> URL: https://issues.apache.org/jira/browse/HBASE-20764
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HBASE-20764.patch
>
>
> I broke the build by digitally signing a commit:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.6.1:compile 
> (default-compile) on project hbase-common: Compilation failure: Compilation 
> failure:
> [ERROR] 
> /Users/mdrob/IdeaProjects/hbase/hbase-common/target/generated-sources/java/org/apache/hadoop/hbase/Version.java:[11,41]
>  unclosed string literal
> [ERROR] 
> /Users/mdrob/IdeaProjects/hbase/hbase-common/target/generated-sources/java/org/apache/hadoop/hbase/Version.java:[12,4]
>   expected
> [ERROR] 
> /Users/mdrob/IdeaProjects/hbase/hbase-common/target/generated-sources/java/org/apache/hadoop/hbase/Version.java:[12,30]
>  ';' expected
> [ERROR] 
> /Users/mdrob/IdeaProjects/hbase/hbase-common/target/generated-sources/java/org/apache/hadoop/hbase/Version.java:[12,35]
>  malformed floating point literal
> [ERROR] 
> /Users/mdrob/IdeaProjects/hbase/hbase-common/target/generated-sources/java/org/apache/hadoop/hbase/Version.java:[13,4]
>  ';' expected
> [ERROR] 
> /Users/mdrob/IdeaProjects/hbase/hbase-common/target/generated-sources/java/org/apache/hadoop/hbase/Version.java:[13,20]
>  ';' expected
> [ERROR] 
> /Users/mdrob/IdeaProjects/hbase/hbase-common/target/generated-sources/java/org/apache/hadoop/hbase/Version.java:[13,25]
>   expected
> [ERROR] 
> /Users/mdrob/IdeaProjects/hbase/hbase-common/target/generated-sources/java/org/apache/hadoop/hbase/Version.java:[13,76]
>  illegal start of type
> [ERROR] 
> /Users/mdrob/IdeaProjects/hbase/hbase-common/target/generated-sources/java/org/apache/hadoop/hbase/Version.java:[13,85]
>  ';' expected
> [ERROR] 
> /Users/mdrob/IdeaProjects/hbase/hbase-common/target/generated-sources/java/org/apache/hadoop/hbase/Version.java:[14,41]
>  unclosed string literal
> {noformat}
> Which complains because:
> {code}
>   public static final String revision = "gpg: Signature made Wed Jun 20 
> 09:42:38 2018 PDT
> gpg:using RSA key 86EDB9C33B8517228E88A8F93E48C0C6EF362B9E
> gpg: Good signature from "Mike Drob (CODE SIGNING KEY) " 
> [ultimate]
> d1cad1a25432ffcd75cd654e9bf68233ca7e1957";
> {code}
> And this comes from {{src/saveVersion.sh}} where it does:
> {noformat}
>   revision=`git log -1 --pretty=format:"%H"`
> {noformat}
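A hedged sketch of a fix, assuming the root cause is `log.showSignature=true` injecting gpg output into `git log`: read the hash through a command that never prints signature data. Both variants below are assumptions rather than the committed patch, and `--no-show-signature` needs a reasonably recent git.

```shell
#!/usr/bin/env bash
# Read the current commit hash in a way that is unaffected by
# log.showSignature=true; fall back to "Unknown" outside a git checkout.
revision=$(git rev-parse HEAD 2>/dev/null || echo "Unknown")
# alternative: revision=$(git log -1 --no-show-signature --pretty=format:"%H")
echo "revision=${revision}"
```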





[jira] [Commented] (HBASE-15887) Report Log Additions and Removals in Builds

2018-06-20 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518517#comment-16518517
 ] 

Sean Busbey commented on HBASE-15887:
-

(for context, Mac OS X users are stuck with Bash v3)

> Report Log Additions and Removals in Builds
> ---
>
> Key: HBASE-15887
> URL: https://issues.apache.org/jira/browse/HBASE-15887
> Project: HBase
>  Issue Type: New Feature
>  Components: build
>Reporter: Clay B.
>Assignee: Clay B.
>Priority: Trivial
> Attachments: HBASE-15887-v1.txt
>
>
> It would be very nice for the Apache Yetus verifications of HBase patches to 
> report log item addition and deletions.
> This is not my idea but [~mbm] asked if we could modify the personality for 
> reporting log additions and removals yesterday at an [HBase meetup at Splice 
> machine|http://www.meetup.com/hbaseusergroup/events/230547750/] as [~aw] 
> presented Apache Yetus for building HBase.





[jira] [Commented] (HBASE-15887) Report Log Additions and Removals in Builds

2018-06-20 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518513#comment-16518513
 ] 

Sean Busbey commented on HBASE-15887:
-

{code}
363   # create a data structure akin to:
364   # entries='( [error]="0" [error_remove]="0" [error_add]="0"
365   #[debug]="3" [debug_remove]="1" [debug_add]="2" )'
366   entries+=( [${level}_${action% *}]=$(${GREP} -e "^${action#* }.*LOG.${level}" "${patchfile}" | wc -l)
367  [${level}]+=${entries[${level}_${action% *}]} )
{code}

Use of associative arrays means we'll need Bash 4. IIRC everything we have to 
date works with Bash 3. Can we do this without associative arrays? we'll need 
to figure out a way to only include it when bash is v4 if not.
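A Bash 3-compatible sketch of the same bookkeeping without associative arrays, building flat variable names with eval instead; the sample patch content is invented for illustration.

```shell
#!/usr/bin/env bash
# Count added/removed LOG.<level> lines in a patch without Bash 4
# associative arrays, by eval-building flat variables per level/action.
patchfile=$(mktemp)
printf '%s\n' \
  '+    LOG.debug("added");' \
  '-    LOG.debug("removed");' \
  '+    LOG.error("oops");' > "${patchfile}"

count_log_lines() {
  # $1 = diff prefix (+ or -), $2 = log level (e.g. debug, error)
  grep -c -e "^[$1].*LOG\.$2" "${patchfile}" || true
}

for level in error warn info debug trace; do
  added=$(count_log_lines '+' "${level}")
  removed=$(count_log_lines '-' "${level}")
  # eval builds names like error_add, error_remove, error_total
  eval "${level}_add=${added}"
  eval "${level}_remove=${removed}"
  eval "${level}_total=$((added + removed))"
done

echo "debug: total=${debug_total} add=${debug_add} remove=${debug_remove}"
echo "error: total=${error_total} add=${error_add} remove=${error_remove}"
rm -f "${patchfile}"
```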






[jira] [Commented] (HBASE-15887) Report Log Additions and Removals in Builds

2018-06-20 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518510#comment-16518510
 ] 

Sean Busbey commented on HBASE-15887:
-

oh no it doesn't. I've got it backwards. The check here is "do we need to run 
this test?" so "0" means "yes" and "1" means "no". The contents of the if block 
is "we don't need the test, exit early."

so that block should be
{code}
if ! verify_needed_test hbaselogs; then
  return 0
fi
{code}






[jira] [Commented] (HBASE-15887) Report Log Additions and Removals in Builds

2018-06-20 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518507#comment-16518507
 ] 

Sean Busbey commented on HBASE-15887:
-

sorry, missed this bit:
{quote}
I needed to change line 353:
{code}
verify_needed_test hbaselogs
if [[ $? == 0 ]]; then
{code}
To:
{code}
if [[ $? == 1 ]]; then
{code}
In order for the hbaselogs check to run. I suspect I am missing how to tell 
Yetus to run things properly but this inequality seemingly set to not run a 
test if it is verified needed is how the other tests are implemented.
{quote}

The [docs for 
verify_needed_test|http://yetus.apache.org/documentation/0.7.0/precommit-apidocs/core/#verify_needed_test]
 definitely say 0 should be correct. Lemme dig in a bit.

> Report Log Additions and Removals in Builds
> ---
>
> Key: HBASE-15887
> URL: https://issues.apache.org/jira/browse/HBASE-15887
> Project: HBase
>  Issue Type: New Feature
>  Components: build
>Reporter: Clay B.
>Assignee: Clay B.
>Priority: Trivial
> Attachments: HBASE-15887-v1.txt
>
>
> It would be very nice for the Apache Yetus verifications of HBase patches to 
> report log item addition and deletions.
> This is not my idea but [~mbm] asked if we could modify the personality for 
> reporting log additions and removals yesterday at an [HBase meetup at Splice 
> machine|http://www.meetup.com/hbaseusergroup/events/230547750/] as [~aw] 
> presented Apache Yetus for building HBase.





[jira] [Commented] (HBASE-15887) Report Log Additions and Removals in Builds

2018-06-20 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518506#comment-16518506
 ] 

Sean Busbey commented on HBASE-15887:
-

{quote}
Lastly, I do not print out the log lines as I am doing a rather crude grep for 
LOG. entries right now which look pretty gnarly. However, I would like 
to see this go to Yetus and use something like Eclipse's AST support to 
properly find log entry parameters and calls.
{quote}

We could also switch to pre/post checking of the files that are marked as 
changing in the patch. then the grep would result in getting file names and 
lines. I don't think that's needed for this to be incrementally useful though. 
Maybe though? I feel like every time we add a test we get to "this should log a 
file" and then "that file should be in the qabot footer" in short order. How 
much time are you up for putting into this up front vs later down the line as 
follow-ons?
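As a concrete illustration of the admittedly crude grep-based approach, a sketch like the following could count log-statement additions and removals in a unified diff. The `LOG\.` pattern and the function name here are assumptions for illustration, not the actual patch code:

```shell
#!/usr/bin/env bash
# Crude sketch: count log-statement lines added/removed by a patch.
# Matches any +/- diff line containing "LOG." (a real implementation
# would also want to skip the +++/--- file-header lines).
count_log_changes() {
  local patch="$1"
  local added removed
  # grep -c prints 0 and exits non-zero on no match, hence "|| true".
  added=$(grep -c '^+.*LOG\.' "${patch}") || true
  removed=$(grep -c '^-.*LOG\.' "${patch}") || true
  echo "log lines added: ${added}, removed: ${removed}"
}
```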

> Report Log Additions and Removals in Builds
> ---
>
> Key: HBASE-15887
> URL: https://issues.apache.org/jira/browse/HBASE-15887
> Project: HBase
>  Issue Type: New Feature
>  Components: build
>Reporter: Clay B.
>Assignee: Clay B.
>Priority: Trivial
> Attachments: HBASE-15887-v1.txt
>
>
> It would be very nice for the Apache Yetus verifications of HBase patches to 
> report log item addition and deletions.
> This is not my idea but [~mbm] asked if we could modify the personality for 
> reporting log additions and removals yesterday at an [HBase meetup at Splice 
> machine|http://www.meetup.com/hbaseusergroup/events/230547750/] as [~aw] 
> presented Apache Yetus for building HBase.





[jira] [Commented] (HBASE-15887) Report Log Additions and Removals in Builds

2018-06-20 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518504#comment-16518504
 ] 

Sean Busbey commented on HBASE-15887:
-

{code}
352   verify_needed_test hbaselogs
353   if [[ $? == 0 ]]; then
354 return 0
355   fi
{code}

this should just check the function directly, i.e.
{code}
  if verify_needed_test hbaselogs; then
{code}
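The difference matters beyond style: `$?` only reflects the most recent command, so any statement slipped in between the call and the `[[ $? == 0 ]]` test silently changes the result. A small sketch with a toy function (not the real Yetus one):

```shell
#!/usr/bin/env bash
# Toy stand-in that always answers "not needed" (non-zero status).
needed() { return 1; }

check_indirect() {
  needed "$1"
  echo "logging between call and test" >/dev/null  # clobbers $? with 0
  if [[ $? == 0 ]]; then
    echo "wrongly thinks the test is needed"
  fi
}

check_direct() {
  # Testing the function directly cannot be clobbered by other commands.
  if needed "$1"; then
    echo "needed"
  else
    echo "not needed"
  fi
}
```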

> Report Log Additions and Removals in Builds
> ---
>
> Key: HBASE-15887
> URL: https://issues.apache.org/jira/browse/HBASE-15887
> Project: HBase
>  Issue Type: New Feature
>  Components: build
>Reporter: Clay B.
>Assignee: Clay B.
>Priority: Trivial
> Attachments: HBASE-15887-v1.txt
>
>
> It would be very nice for the Apache Yetus verifications of HBase patches to 
> report log item addition and deletions.
> This is not my idea but [~mbm] asked if we could modify the personality for 
> reporting log additions and removals yesterday at an [HBase meetup at Splice 
> machine|http://www.meetup.com/hbaseusergroup/events/230547750/] as [~aw] 
> presented Apache Yetus for building HBase.





[jira] [Assigned] (HBASE-15887) Report Log Additions and Removals in Builds

2018-06-20 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-15887:
---

Assignee: Clay B.

> Report Log Additions and Removals in Builds
> ---
>
> Key: HBASE-15887
> URL: https://issues.apache.org/jira/browse/HBASE-15887
> Project: HBase
>  Issue Type: New Feature
>  Components: build
>Reporter: Clay B.
>Assignee: Clay B.
>Priority: Trivial
> Attachments: HBASE-15887-v1.txt
>
>
> It would be very nice for the Apache Yetus verifications of HBase patches to 
> report log item addition and deletions.
> This is not my idea but [~mbm] asked if we could modify the personality for 
> reporting log additions and removals yesterday at an [HBase meetup at Splice 
> machine|http://www.meetup.com/hbaseusergroup/events/230547750/] as [~aw] 
> presented Apache Yetus for building HBase.





[jira] [Commented] (HBASE-15887) Report Log Additions and Removals in Builds

2018-06-20 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518498#comment-16518498
 ] 

Sean Busbey commented on HBASE-15887:
-

Generally I like starting stuff here in HBase and then moving it to Yetus once 
we get to kick the tires a bit.

Let me make sure I understand the test rationale. The test will essentially be 
indifferent to "logs didn't appear to change" and will approve of "logs 
changed" whether they were additions or removals?

> Report Log Additions and Removals in Builds
> ---
>
> Key: HBASE-15887
> URL: https://issues.apache.org/jira/browse/HBASE-15887
> Project: HBase
>  Issue Type: New Feature
>  Components: build
>Reporter: Clay B.
>Priority: Trivial
> Attachments: HBASE-15887-v1.txt
>
>
> It would be very nice for the Apache Yetus verifications of HBase patches to 
> report log item addition and deletions.
> This is not my idea but [~mbm] asked if we could modify the personality for 
> reporting log additions and removals yesterday at an [HBase meetup at Splice 
> machine|http://www.meetup.com/hbaseusergroup/events/230547750/] as [~aw] 
> presented Apache Yetus for building HBase.





[jira] [Commented] (HBASE-20762) precommit should archive generated LICENSE file

2018-06-20 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518460#comment-16518460
 ] 

Sean Busbey commented on HBASE-20762:
-

could you point me at such a failed build? I just want to confirm we're not 
getting the file but in a confusing place.

> precommit should archive generated LICENSE file
> ---
>
> Key: HBASE-20762
> URL: https://issues.apache.org/jira/browse/HBASE-20762
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Mike Drob
>Priority: Major
>
> When a precommit run fails due to license issues, we get pointed to a file in 
> our maven logs:
> {noformat}
> /testptch/hbase/hbase-assembly/target/maven-shared-archive-resources/META-INF/LICENSE
> {noformat}
> But we don't have that file saved, so we don't know what the actual failure 
> was. So we should save that in our build artifacts. Or maybe we can print a 
> snippet from that file directly into the maven log. Both would be acceptable.





[jira] [Commented] (HBASE-20762) precommit should archive generated LICENSE file

2018-06-20 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518459#comment-16518459
 ] 

Sean Busbey commented on HBASE-20762:
-

our jenkins job tries to do this now, here's our Archive artifacts:

{code}
patchprocess/*, patchprocess/**/*,**/LICENSE,**/NOTICE
{code}

Probably we should change the list of things we tell Yetus to save. The YETUS_ARGS 
parameter has {{--archive-list=rat.txt}}; we should update that to include 
LICENSE and NOTICE.
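If the job is driven the usual way, the change might look like the sketch below. The exact YETUS_ARGS wiring depends on the Jenkins job definition, and this assumes `--archive-list` takes a comma-delimited list of patterns:

```shell
# Before (sketch of the current job configuration):
YETUS_ARGS+=("--archive-list=rat.txt")

# After: also keep the generated LICENSE/NOTICE files as artifacts.
YETUS_ARGS+=("--archive-list=rat.txt,LICENSE,NOTICE")
```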

> precommit should archive generated LICENSE file
> ---
>
> Key: HBASE-20762
> URL: https://issues.apache.org/jira/browse/HBASE-20762
> Project: HBase
>  Issue Type: Bug
>  Components: build, community
>Reporter: Mike Drob
>Priority: Major
>
> When a precommit run fails due to license issues, we get pointed to a file in 
> our maven logs:
> {noformat}
> /testptch/hbase/hbase-assembly/target/maven-shared-archive-resources/META-INF/LICENSE
> {noformat}
> But we don't have that file saved, so we don't know what the actual failure 
> was. So we should save that in our build artifacts. Or maybe we can print a 
> snippet from that file directly into the maven log. Both would be acceptable.





[jira] [Updated] (HBASE-20760) Make sure downloads page lists everything in our download area

2018-06-20 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20760:

Labels: beginner  (was: )

> Make sure downloads page lists everything in our download area
> --
>
> Key: HBASE-20760
> URL: https://issues.apache.org/jira/browse/HBASE-20760
> Project: HBase
>  Issue Type: Task
>  Components: community, website
>Reporter: Sean Busbey
>Priority: Major
>  Labels: beginner
>
> our website has a "downloads" page now, but it only lists 2.x releases. it 
> should have any non-EOM release lines.





[jira] [Created] (HBASE-20760) Make sure downloads page lists everything in our download area

2018-06-20 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-20760:
---

 Summary: Make sure downloads page lists everything in our download 
area
 Key: HBASE-20760
 URL: https://issues.apache.org/jira/browse/HBASE-20760
 Project: HBase
  Issue Type: Task
  Components: community, website
Reporter: Sean Busbey


our website has a "downloads" page now, but it only lists 2.x releases. it 
should have any non-EOM release lines.





[jira] [Commented] (HBASE-20759) Please use HTTPS for KEYS

2018-06-20 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518301#comment-16518301
 ] 

Sean Busbey commented on HBASE-20759:
-

+1

> Please use HTTPS for KEYS
> -
>
> Key: HBASE-20759
> URL: https://issues.apache.org/jira/browse/HBASE-20759
> Project: HBase
>  Issue Type: Bug
>  Components: community, security, website
>Reporter: Sebb
>Assignee: Mike Drob
>Priority: Major
> Attachments: HBASE-20759.master.001.patch
>
>
> Please use HTTPS for the link to KEYS on  download page(s)





[jira] [Updated] (HBASE-20720) Make 2.0.1 release

2018-06-20 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20720:

Summary: Make 2.0.1 release  (was: Make 2.0.1RC0)

> Make 2.0.1 release
> --
>
> Key: HBASE-20720
> URL: https://issues.apache.org/jira/browse/HBASE-20720
> Project: HBase
>  Issue Type: Task
>  Components: community
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.1
>
>
> Let me push out a release off branch-2.0, a 2.0.1. It has a bunch of fixes 
> and some perf improvements. A nightly run just passed clean: 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2.0/421/





[jira] [Updated] (HBASE-20720) Make 2.0.1RC0

2018-06-20 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20720:

Component/s: community

> Make 2.0.1RC0
> -
>
> Key: HBASE-20720
> URL: https://issues.apache.org/jira/browse/HBASE-20720
> Project: HBase
>  Issue Type: Task
>  Components: community
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 2.0.1
>
>
> Let me push out a release off branch-2.0, a 2.0.1. It has a bunch of fixes 
> and some perf improvements. A nightly run just passed clean: 
> https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-2.0/421/





[jira] [Updated] (HBASE-20759) Please use HTTPS for KEYS

2018-06-20 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20759:

Component/s: website
 security
 community

> Please use HTTPS for KEYS
> -
>
> Key: HBASE-20759
> URL: https://issues.apache.org/jira/browse/HBASE-20759
> Project: HBase
>  Issue Type: Bug
>  Components: community, security, website
>Reporter: Sebb
>Priority: Major
>
> Please use HTTPS for the link to KEYS on  download page(s)





[jira] [Created] (HBASE-20757) refactor ref guide text to remove EOM versions

2018-06-20 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-20757:
---

 Summary: refactor ref guide text to remove EOM versions
 Key: HBASE-20757
 URL: https://issues.apache.org/jira/browse/HBASE-20757
 Project: HBase
  Issue Type: Task
  Components: community, documentation
Reporter: Sean Busbey


we still have references in the guide going back as far as version 0.90. remove 
them, keep any points that are still relevant to current versions.

e.g. this bit should just be removed:

{code}
NOTE: online schema changes are supported in the 0.92.x codebase, but the 
0.90.x codebase requires the table to be disabled.
{code}

but this one needs to be rewritten:

{code}
The recommended approach is to let HBase add its dependency jars and use 
`HADOOP_CLASSPATH` or `-libjars`.

Since HBase `0.90.x`, HBase adds its dependency JARs to the job configuration 
itself.
The dependencies only need to be available on the local `CLASSPATH` and from 
here they'll be picked
up and bundled into the fat job jar deployed to the MapReduce cluster. A basic 
trick just passes
{code}

to essentially just say "HBase adds its dependency jars to the job 
configuration itself."





[jira] [Updated] (HBASE-20615) emphasize use of shaded client jars when they're present in an install

2018-06-19 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20615:

Release Note: 


HBase's built-in scripts now rely on the downstream-facing shaded artifacts 
where possible. Of particular interest to downstream users, the `hbase 
classpath` and `hbase mapredcp` commands now return the relevant shaded client 
artifact and only those third party jars needed to make use of them (e.g. 
slf4j-api, commons-logging, htrace, etc).

Downstream users should note that by default the `hbase classpath` command will 
treat having `hadoop` on the shell's PATH as an implicit request to include the 
output of the `hadoop classpath` command in the returned classpath. This 
long-existing behavior can be opted out of by setting the environment variable 
`HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP` to the value "true". For example: 
`HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP="true" bin/hbase classpath`.

  was:


HBase's built-in scripts now rely on the downstream-facing shaded artifacts 
where possible. Of particular interest to downstream users, the `hadoop 
classpath` and `hadoop mapredcp` commands now return the relevant shaded client 
artifact and only those third party jars needed to make use of them (e.g. 
slf4j-api, commons-logging, htrace, etc).

Downstream users should note that by default the `hbase classpath` command will 
treat having `hadoop` on the shell's PATH as an implicit request to include the 
output of the `hadoop classpath` command in the returned classpath. This 
long-existing behavior can be opted out of by setting the environment variable 
`HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP` to the value "true". For example: 
`HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP="true" bin/hbase classpath`.


> emphasize use of shaded client jars when they're present in an install
> --
>
> Key: HBASE-20615
> URL: https://issues.apache.org/jira/browse/HBASE-20615
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, Client, Usability
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20615.0.patch, HBASE-20615.1.patch, 
> HBASE-20615.2.patch
>
>
> Working through setting up an IT for our shaded artifacts in HBASE-20334 
> makes our lack of packaging seem like an oversight. While I could work around 
> by pulling the shaded clients out of whatever build process built the 
> convenience binary that we're trying to test, it seems v awkward.
> After reflecting on it more, it makes more sense to me for there to be a 
> common place in the install that folks running jobs against the cluster can 
> rely on. If they need to run without a full hbase install, that should still 
> work fine via e.g. grabbing from the maven repo.





[jira] [Created] (HBASE-20756) reference guide examples still contain references to EOM versions

2018-06-19 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-20756:
---

 Summary: reference guide examples still contain references to EOM 
versions
 Key: HBASE-20756
 URL: https://issues.apache.org/jira/browse/HBASE-20756
 Project: HBase
  Issue Type: Bug
  Components: community, documentation
Reporter: Sean Busbey


the reference guide still has examples that refer to EOM versions. e.g. this 
shell output that has 0.98 in it:

{code}
$ echo "describe 'test1'" | ./hbase shell -n

Version 0.98.3-hadoop2, rd5e65a9144e315bb0a964e7730871af32f5018d5, Sat May 31 
19:56:09 PDT 2014

describe 'test1'

DESCRIPTION  ENABLED
 'test1', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NON true
 E', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0',
  VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIO
 NS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS =>
 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false'
 , BLOCKCACHE => 'true'}
1 row(s) in 3.2410 seconds
{code}

these should be redone with a current release. Ideally a version in the minor 
release line the docs are for, but even just updating to the stable pointer 
would be a big improvement.





[jira] [Updated] (HBASE-20755) quickstart note about Web UI port changes in ref guide is rendered incorrectly

2018-06-19 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20755:

Attachment: Untitled.png

> quickstart note about Web UI port changes in ref guide is rendered incorrectly
> --
>
> Key: HBASE-20755
> URL: https://issues.apache.org/jira/browse/HBASE-20755
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Sean Busbey
>Priority: Minor
> Attachments: Untitled.png
>
>
> The note in the quickstart guide about how the web ui ports changed only 
> renders the title as a note. the text is just a normal paragraph afterwards.





[jira] [Created] (HBASE-20755) quickstart note about Web UI port changes in ref guide is rendered incorrectly

2018-06-19 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-20755:
---

 Summary: quickstart note about Web UI port changes in ref guide is 
rendered incorrectly
 Key: HBASE-20755
 URL: https://issues.apache.org/jira/browse/HBASE-20755
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Sean Busbey


The note in the quickstart guide about how the web ui ports changed only 
renders the title as a note. the text is just a normal paragraph afterwards.





[jira] [Updated] (HBASE-20753) reference guide should direct security related issues to priv...@hbase.apache.org

2018-06-19 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20753:

Component/s: security

> reference guide should direct security related issues to 
> priv...@hbase.apache.org
> -
>
> Key: HBASE-20753
> URL: https://issues.apache.org/jira/browse/HBASE-20753
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, security
>Reporter: Sean Busbey
>Priority: Critical
>  Labels: beginner
>
> the reference guide currently directs folks to send security issues to 
> priv...@apache.org:
> {quote}
> To protect existing HBase installations from new vulnerabilities, please do 
> not use JIRA to report security-related bugs. Instead, send your report to 
> the mailing list priv...@apache.org, which allows anyone to send messages, 
> but restricts who can read them. Someone on that list will contact you to 
> follow up on your report.
> {quote}
> This address does not exist. It should tell folks to send the email to 
> priv...@hbase.apache.org.





[jira] [Created] (HBASE-20754) quickstart guide should instruct folks to set JAVA_HOME to a JDK installation.

2018-06-19 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-20754:
---

 Summary: quickstart guide should instruct folks to set JAVA_HOME 
to a JDK installation.
 Key: HBASE-20754
 URL: https://issues.apache.org/jira/browse/HBASE-20754
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Sean Busbey


The quickstart guide currently instructs folks to set JAVA_HOME, but to the 
wrong place

{code}
The JAVA_HOME variable should be set to a directory which contains the 
executable file bin/java. Most modern Linux operating systems provide a 
mechanism, such as /usr/bin/alternatives on RHEL or CentOS, for transparently 
switching between versions of executables such as Java. In this case, you can 
set JAVA_HOME to the directory containing the symbolic link to bin/java, which 
is usually /usr.

JAVA_HOME=/usr
{code}

instead, it should tell folks to point it to a jdk installation and help them 
on how to find that.
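One hedged way the updated docs could "help them find that" is to resolve the alternatives symlink back to the real JDK. This sketch assumes the usual `<JAVA_HOME>/bin/javac` layout and GNU `readlink -f`:

```shell
#!/usr/bin/env bash
# Derive a JDK home from the path of a javac binary, resolving any
# /usr/bin/alternatives-style symlinks first. Assumes the standard
# <JAVA_HOME>/bin/javac layout.
jdk_home_from_javac() {
  local resolved
  resolved="$(readlink -f "$1")"
  dirname "$(dirname "${resolved}")"
}

# e.g.: export JAVA_HOME="$(jdk_home_from_javac "$(command -v javac)")"
```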





[jira] [Created] (HBASE-20753) reference guide should direct security related issues to priv...@hbase.apache.org

2018-06-19 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-20753:
---

 Summary: reference guide should direct security related issues to 
priv...@hbase.apache.org
 Key: HBASE-20753
 URL: https://issues.apache.org/jira/browse/HBASE-20753
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Sean Busbey


the reference guide currently directs folks to send security issues to 
priv...@apache.org:

{quote}
To protect existing HBase installations from new vulnerabilities, please do not 
use JIRA to report security-related bugs. Instead, send your report to the 
mailing list priv...@apache.org, which allows anyone to send messages, but 
restricts who can read them. Someone on that list will contact you to follow up 
on your report.
{quote}

This address does not exist. It should tell folks to send the email to 
priv...@hbase.apache.org.





[jira] [Commented] (HBASE-20691) Storage policy should allow deferring to HDFS

2018-06-19 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517662#comment-16517662
 ] 

Sean Busbey commented on HBASE-20691:
-

CommonFSUtils should be updated to check against 
{{DEFER_TO_HDFS_STORAGE_POLICY}} rather than against a passed in default of 
{{DEFAULT_WAL_STORAGE_POLICY}}. They're the same thing now, but they won't 
necessarily be.

Since the {{setStoragePolicy}} code is in CommonFSUtils, the test should be in 
TestCommonFSUtils.

{code}

354 LOG.debug("Before set storage policy to NONE");
355 FSUtils.setStoragePolicy(testFs, conf, new Path("non-exist"), 
HConstants.WAL_STORAGE_POLICY,
356 HConstants.DEFAULT_WAL_STORAGE_POLICY);
357 LOG.debug("After set storage policy to NONE");
358 conf.set(HConstants.WAL_STORAGE_POLICY, "HOT");
359 // warning log is expected when passing some valid policy
360 FSUtils.setStoragePolicy(testFs, conf, new Path("non-exist"), 
HConstants.WAL_STORAGE_POLICY,
361 HConstants.DEFAULT_WAL_STORAGE_POLICY);
{code}

Shouldn't this second invocation have thrown an IOException?

> Storage policy should allow deferring to HDFS
> -
>
> Key: HBASE-20691
> URL: https://issues.apache.org/jira/browse/HBASE-20691
> Project: HBase
>  Issue Type: Bug
>  Components: Filesystem Integration, wal
>Affects Versions: 1.5.0, 2.0.0
>Reporter: Sean Busbey
>Assignee: Yu Li
>Priority: Minor
> Fix For: 2.1.0, 1.5.0
>
> Attachments: HBASE-20691.patch, HBASE-20691.v2.patch, 
> HBASE-20691.v3.patch
>
>
> In HBase 1.1 - 1.4 we can defer storage policy decisions to HDFS by using 
> "NONE" as the storage policy in hbase configs.
> As described on this [dev@hbase thread "WAL storage policies and interactions 
> with Hadoop admin 
> tools."|https://lists.apache.org/thread.html/d220726fab4bb4c9e117ecc8f44246402dd97bfc986a57eb2237@%3Cdev.hbase.apache.org%3E]
>  we no longer have that option in 2.0.0 and 1.5.0 (as the branch is now). 
> Additionally, we can't set the policy to HOT in the event that HDFS has 
> changed the policy for a parent directory of our WALs.
> We should put back that ability. Presuming this is done by re-adopting the 
> "NONE" placeholder variable, we need to ensure that value doesn't get passed 
> to HDFS APIs. Since it isn't a valid storage policy attempting to use it will 
> cause a bunch of logging churn (which will be a regression of the problem 
> HBASE-18118 sought to fix).





[jira] [Work started] (HBASE-20448) update ref guide to expressly use shaded clients for examples

2018-06-19 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-20448 started by Sean Busbey.
---
> update ref guide to expressly use shaded clients for examples
> -
>
> Key: HBASE-20448
> URL: https://issues.apache.org/jira/browse/HBASE-20448
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, documentation, mapreduce
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 2.1.0
>
>
> the whole mapreduce section, especially, should be using the shaded version.





[jira] [Assigned] (HBASE-20448) update ref guide to expressly use shaded clients for examples

2018-06-19 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-20448:
---

Assignee: Sean Busbey

> update ref guide to expressly use shaded clients for examples
> -
>
> Key: HBASE-20448
> URL: https://issues.apache.org/jira/browse/HBASE-20448
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, documentation, mapreduce
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 2.1.0
>
>
> the whole mapreduce section, especially, should be using the shaded version.





[jira] [Updated] (HBASE-20448) update ref guide to expressly use shaded clients for examples

2018-06-19 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20448:

Fix Version/s: 2.1.0

> update ref guide to expressly use shaded clients for examples
> -
>
> Key: HBASE-20448
> URL: https://issues.apache.org/jira/browse/HBASE-20448
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client, documentation, mapreduce
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 2.1.0
>
>
> the whole mapreduce section, especially, should be using the shaded version.





[jira] [Work started] (HBASE-20331) clean up shaded packaging for 2.1

2018-06-19 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-20331 started by Sean Busbey.
---
> clean up shaded packaging for 2.1
> -
>
> Key: HBASE-20331
> URL: https://issues.apache.org/jira/browse/HBASE-20331
> Project: HBase
>  Issue Type: Umbrella
>  Components: Client, mapreduce, shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 2.1.0
>
>
> polishing pass on shaded modules for 2.0 based on trying to use them in more 
> contexts.





[jira] [Updated] (HBASE-20334) add a test that expressly uses both our shaded client and the one from hadoop 3

2018-06-18 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20334:

  Resolution: Fixed
Release Note: 


HBase now includes a helper script, located in `dev-support`, that can be used to 
run a basic functionality test for a given HBase installation. The test 
can optionally be given an HBase client artifact to rely on and can optionally 
be given specific Hadoop client artifacts to use.

For usage information see 
`./dev-support/hbase_nightly_pseudo-distributed-test.sh --help`.

The project nightly tests now make use of this test to check running on top of 
Hadoop 2, Hadoop 3, and Hadoop 3 with shaded client artifacts.
  Status: Resolved  (was: Patch Available)

> add a test that expressly uses both our shaded client and the one from hadoop 
> 3
> ---
>
> Key: HBASE-20334
> URL: https://issues.apache.org/jira/browse/HBASE-20334
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop3, shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20334.0.patch, HBASE-20334.1.patch
>
>
> Since we're making a shaded client that bleeds out of our namespace and into 
> Hadoop's, we should ensure that we can show our clients coexisting. Even if 
> it's just an IT that successfully talks to both us and HDFS via our 
> respective shaded clients, that'd be a big help in keeping us proactive.





[jira] [Updated] (HBASE-19735) Create a minimal "client" tarball installation

2018-06-18 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-19735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-19735:

  Resolution: Fixed
Release Note: 


The HBase convenience binary artifacts now include a client-focused tarball 
that a) includes more docs and b) does not include scripts or jars only needed 
for running HBase cluster services.

The new artifact is made as a normal part of the `assembly:single` maven 
command.
  Status: Resolved  (was: Patch Available)

> Create a minimal "client" tarball installation
> --
>
> Key: HBASE-19735
> URL: https://issues.apache.org/jira/browse/HBASE-19735
> Project: HBase
>  Issue Type: New Feature
>  Components: build, Client
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-19735.000.patch, HBASE-19735.001.branch-2.patch, 
> HBASE-19735.002.branch-2.patch, HBASE-19735.003.patch, HBASE-19735.004.patch
>
>
> We're moving ourselves towards more controlled dependencies. A logical next 
> step is to try to do the same for our "binary" artifacts that we create 
> during releases.
> There is code (ours and our dependencies') which the HMaster and RegionServer 
> require which, obviously, clients do not need.




