[jira] [Commented] (HBASE-28448) CompressionTest hangs when run over an Ozone ofs path

2024-03-21 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829708#comment-17829708
 ] 

Wei-Chiu Chuang commented on HBASE-28448:
-

Another bug is inside HBase CompressionTest: the test should close the 
FileSystem object properly.
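
For illustration, a minimal sketch of that kind of fix, assuming the test opens 
the FileSystem from the supplied path (the class and flow here are simplified, 
not the actual patch):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompressionTestSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Path path = new Path(args[0]);
    // try-with-resources guarantees fs.close(), which tears down the
    // underlying client (HDFS, Ozone, ...) and its non-daemon threads.
    try (FileSystem fs = path.getFileSystem(conf)) {
      // ... write a test file with the requested codec and read it back ...
    }
  }
}
{code}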

> CompressionTest hangs when run over an Ozone ofs path
> -----------------------------------------------------
>
> Key: HBASE-28448
> URL: https://issues.apache.org/jira/browse/HBASE-28448
> Project: HBase
>  Issue Type: Bug
>Reporter: Pratyush Bhatt
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: ozone
> Attachments: hbase_ozone_compression.jstack
>
>
> If we run the Compression test over HDFS path, it works fine:
> {code:java}
> hbase org.apache.hadoop.hbase.util.CompressionTest 
> hdfs://ns1/tmp/dir1/dir2/test_file.txt snappy
> 24/03/20 06:08:43 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-hbase.properties,hadoop-metrics2.properties
> 24/03/20 06:08:43 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot 
> period at 10 second(s).
> 24/03/20 06:08:43 INFO impl.MetricsSystemImpl: HBase metrics system started
> 24/03/20 06:08:43 INFO metrics.MetricRegistries: Loaded MetricRegistries 
> class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
> 24/03/20 06:08:43 INFO compress.CodecPool: Got brand-new compressor [.snappy]
> 24/03/20 06:08:43 INFO compress.CodecPool: Got brand-new compressor [.snappy]
> 24/03/20 06:08:44 INFO compress.CodecPool: Got brand-new decompressor 
> [.snappy]
> SUCCESS {code}
> The command exits, but when the same is tried over an ofs path, the command 
> hangs.
> {code:java}
> hbase org.apache.hadoop.hbase.util.CompressionTest 
> ofs://ozone1710862004/test-222compression-vol/compression-buck2/test_file.txt 
> snappy
> 24/03/20 06:05:19 INFO protocolPB.OmTransportFactory: Loading OM transport 
> implementation 
> org.apache.hadoop.ozone.om.protocolPB.Hadoop3OmTransportFactory as specified 
> by configuration.
> 24/03/20 06:05:20 INFO client.ClientTrustManager: Loading certificates for 
> client.
> 24/03/20 06:05:20 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-hbase.properties,hadoop-metrics2.properties
> 24/03/20 06:05:20 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot 
> period at 10 second(s).
> 24/03/20 06:05:20 INFO impl.MetricsSystemImpl: HBase metrics system started
> 24/03/20 06:05:20 INFO metrics.MetricRegistries: Loaded MetricRegistries 
> class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
> 24/03/20 06:05:20 INFO rpc.RpcClient: Creating Volume: 
> test-222compression-vol, with om as owner and space quota set to -1 bytes, 
> counts quota set to -1
> 24/03/20 06:05:20 INFO rpc.RpcClient: Creating Bucket: 
> test-222compression-vol/compression-buck2, with bucket layout 
> FILE_SYSTEM_OPTIMIZED, om as owner, Versioning false, Storage Type set to 
> DISK and Encryption set to false, Replication Type set to server-side default 
> replication type, Namespace Quota set to -1, Space Quota set to -1
> 24/03/20 06:05:21 INFO compress.CodecPool: Got brand-new compressor [.snappy]
> 24/03/20 06:05:21 INFO compress.CodecPool: Got brand-new compressor [.snappy]
> 24/03/20 06:05:21 WARN impl.MetricsSystemImpl: HBase metrics system already 
> initialized!
> 24/03/20 06:05:21 INFO metrics.MetricRegistries: Loaded MetricRegistries 
> class org.apache.ratis.metrics.dropwizard3.Dm3MetricRegistriesImpl
> 24/03/20 06:05:22 INFO compress.CodecPool: Got brand-new decompressor 
> [.snappy]
> SUCCESS 
> .
> .
> .{code}
> The command doesn't exit.
> Attaching the jstack of the process below:
> [^hbase_ozone_compression.jstack]
> cc: [~weichiu] 
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HBASE-28448) CompressionTest hangs when run over an Ozone ofs path

2024-03-21 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HBASE-28448:
---

Assignee: Wei-Chiu Chuang




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28454) Make Outputstream writeExecutor daemon threads

2024-03-21 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-28454:
---

 Summary: Make Outputstream writeExecutor daemon threads
 Key: HBASE-28454
 URL: https://issues.apache.org/jira/browse/HBASE-28454
 Project: HBase
  Issue Type: Bug
Reporter: Wei-Chiu Chuang


Found via the HBase CompressionTest (HBASE-28448).

We should consider making these threads daemon threads rather than user 
threads. Otherwise the process may hang.
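
A minimal sketch of the idea, assuming the executor is built with Executors 
(the thread name and pool size are illustrative, not the actual Ozone code):
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

ExecutorService writeExecutor = Executors.newFixedThreadPool(4, r -> {
  Thread t = new Thread(r, "outputstream-write-worker");
  // Daemon threads do not prevent JVM exit, so a client that forgets to
  // shut the executor down cannot hang the process on exit.
  t.setDaemon(true);
  return t;
});
{code}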



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-28448) CompressionTest hangs when run over an Ozone ofs path

2024-03-21 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17829707#comment-17829707
 ] 

Wei-Chiu Chuang commented on HBASE-28448:
-

Ok I found the problem. It's a bug in HBase CompressionTest but also a bug in 
Ozone. Will open PRs accordingly.




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-28446) Remove the reflection for ByteBufferPositionedReadable

2024-03-19 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17828530#comment-17828530
 ] 

Wei-Chiu Chuang commented on HBASE-28446:
-

That's a really good point. Thanks [~stoty]. I was just thinking that Hadoop 
3.1/3.2 was EOL.

> Remove the reflection for ByteBufferPositionedReadable
> --
>
> Key: HBASE-28446
> URL: https://issues.apache.org/jira/browse/HBASE-28446
> Project: HBase
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Minor
>
> HBASE-21946 used reflection to access the ByteBufferPositionedReadable API 
> that's only available in Hadoop 3.3.
> Now that HBase branch-2.6 and above updated the Hadoop 3 dependency to 3.3, 
> we can get rid of the reflection.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-28446) Remove the reflection for ByteBufferPositionedReadable

2024-03-19 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-28446:
---

 Summary: Remove the reflection for ByteBufferPositionedReadable
 Key: HBASE-28446
 URL: https://issues.apache.org/jira/browse/HBASE-28446
 Project: HBase
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang


HBASE-21946 used reflection to access the ByteBufferPositionedReadable API 
that's only available in Hadoop 3.3.

Now that HBase branch-2.6 and above updated the Hadoop 3 dependency to 3.3, we 
can get rid of the reflection.
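
A minimal sketch of the direct, reflection-free call, assuming Hadoop 3.3+ on 
the classpath (where FSDataInputStream implements ByteBufferPositionedReadable; 
the helper name here is made up):
{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.fs.ByteBufferPositionedReadable;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.StreamCapabilities;

static void readFullyAt(FSDataInputStream in, long pos, ByteBuffer buf)
    throws IOException {
  // Probe the stream capability instead of reflecting on the method.
  if (in.hasCapability(StreamCapabilities.PREADBYTEBUFFER)) {
    ((ByteBufferPositionedReadable) in).readFully(pos, buf);
  } else {
    throw new UnsupportedOperationException(
      "positioned ByteBuffer read not supported by this stream");
  }
}
{code}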



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-28419) Allow Action and Policies of ServerKillingMonkey to be configurable

2024-03-13 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HBASE-28419.
-
Fix Version/s: 4.0.0-alpha-1
   Resolution: Fixed

> Allow Action and Policies of ServerKillingMonkey to be configurable
> ---
>
> Key: HBASE-28419
> URL: https://issues.apache.org/jira/browse/HBASE-28419
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Reporter: Pratyush Bhatt
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>
> Currently for ServerKillingMonkeyFactory, actions and policies have hardcoded 
> timeouts.
> {code:java}
>     Action[] actions1 = new Action[] {
>       new RestartRandomRsExceptMetaAction(6),
>       new RestartActiveMasterAction(5000),
>       // only allow 2 servers to be dead
>       new RollingBatchRestartRsAction(5000, 1.0f, 2, true),
>       new ForceBalancerAction(),
>       new GracefulRollingRestartRsAction(gracefulRollingRestartTSSLeepTime),
>       new RollingBatchSuspendResumeRsAction(rollingBatchSuspendRSSleepTime,
>           rollingBatchSuspendtRSRatio)
>     }; {code}
> and
> {code:java}
>     return new PolicyBasedChaosMonkey(properties, util,
>       new CompositeSequentialPolicy(new DoActionsOncePolicy(60 * 1000, 
> actions1),
>         new PeriodicRandomActionPolicy(60 * 1000, actions1)),
>       new PeriodicRandomActionPolicy(60 * 1000, actions2));
>   } {code}
> We should allow these to be configurable too.
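
For illustration, one way the hardcoded values could be made configurable, 
reading from the same properties object the factory already passes to 
PolicyBasedChaosMonkey (the property names below are invented for this sketch, 
not existing HBase keys):
{code:java}
long restartActiveMasterSleepMs = Long.parseLong(
  properties.getProperty("serverkilling.action.restartactivemaster.sleep.ms", "5000"));
long rollingBatchRestartSleepMs = Long.parseLong(
  properties.getProperty("serverkilling.action.rollingbatchrestart.sleep.ms", "5000"));

Action[] actions1 = new Action[] {
  new RestartActiveMasterAction(restartActiveMasterSleepMs),
  // only allow 2 servers to be dead
  new RollingBatchRestartRsAction(rollingBatchRestartSleepMs, 1.0f, 2, true),
  // ... remaining actions unchanged ...
};
{code}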



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HBASE-27982) Synchronous replication should check if the file system supports truncate API

2024-03-06 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HBASE-27982:
---

Assignee: Wei-Chiu Chuang




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HBASE-28419) Allow Action and Policies of ServerKillingMonkey to be configurable

2024-03-04 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HBASE-28419:
---

Assignee: Wei-Chiu Chuang




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HBASE-28242) Move ProfileServlet to support async-profiler 2.x only

2024-02-20 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-28242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HBASE-28242:
---

Assignee: Lars Francke

> Move ProfileServlet to support async-profiler 2.x only
> --
>
> Key: HBASE-28242
> URL: https://issues.apache.org/jira/browse/HBASE-28242
> Project: HBase
>  Issue Type: Improvement
>Reporter: Lars Francke
>Assignee: Lars Francke
>Priority: Minor
>  Labels: pull-request-available
>
> async-profiler 2.0 has been out since March 2021 now.
> This issue/PR is about supporting its new features and removing the 1.x 
> functionality.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-28216) HDFS erasure coding support for table data dirs

2023-12-12 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17795871#comment-17795871
 ] 

Wei-Chiu Chuang commented on HBASE-28216:
-

I think simpler is always better.

In your example, configuring the EC policy at the family level requires 200k 
HDFS updates, which is going to cause a big disruption to your service for 
sure. I also don't see a use case for configuring the EC policy at the family 
level; that is too fine-grained.

Table-level EC policy seems like the way to go.
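
For reference, a minimal sketch (not HBase code) of syncing a table-level 
policy with the HDFS-specific API; the path, policy name, and surrounding 
setup are examples only:
{code:java}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// conf is the cluster Configuration.
FileSystem fs = FileSystem.get(conf);
if (fs instanceof DistributedFileSystem) {
  DistributedFileSystem dfs = (DistributedFileSystem) fs;
  // Applies to files created under the directory from now on;
  // existing files keep their current layout.
  dfs.setErasureCodingPolicy(new Path("/hbase/data/default/usertable"), "RS-6-3-1024k");
}
{code}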

> HDFS erasure coding support for table data dirs
> ---
>
> Key: HBASE-28216
> URL: https://issues.apache.org/jira/browse/HBASE-28216
> Project: HBase
>  Issue Type: New Feature
>Reporter: Bryan Beaudreault
>Priority: Major
>
> [Erasure coding|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html]
> (EC) is a hadoop-3 feature which can drastically reduce storage 
> requirements, at the expense of locality. At my company we have a few HBase 
> clusters which are extremely data dense and take mostly write traffic, with 
> fewer reads (cold data). We'd like to reduce the cost of these clusters, and 
> EC is a great way to do that since it can reduce replication-related storage 
> costs by 50%.
> It's possible to enable EC policies on subdirectories of HDFS. One can 
> manually set this with {{{}hdfs ec -setPolicy -path 
> /hbase/data/default/usertable -policy {}}}. This can work without any 
> HBase support.
> One problem with that is a lack of visibility for operators into which tables 
> might have EC enabled. I think this is where HBase can help. Here's my 
> proposal:
>  * Add a new TableDescriptor and ColumnDescriptor field ERASURE_CODING_POLICY
>  * In ModifyTableProcedure preflightChecks, if ERASURE_CODING_POLICY is set, 
> verify that the requested policy is available and enabled via 
> DistributedFileSystem.getErasureCodingPolicies().
>  * During ModifyTableProcedure, add a new state for 
> MODIFY_TABLE_SYNC_ERASURE_CODING_POLICY.
>  ** When adding or changing a policy, use 
> DistributedFileSystem.setErasureCodingPolicy to sync it for the data and 
> archive dir of that table (or column in table)
>  ** When removing the property or setting it to empty, use 
> DistributedFileSystem.unsetErasureCodingPolicy to remove it from the data 
> and archive dir.
> Since this new API is in hadoop-3 only, we'll need to add a reflection 
> wrapper class for managing the calls and verifying that the API is available. 
> We'll similarly do that API check in preflightChecks.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-27769) Use hasPathCapability to support recoverLease, setSafeMode, isFileClosed for non-HDFS file system

2023-11-22 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HBASE-27769.
-
Resolution: Fixed

Pushed to branch HBASE-27740. Thanks [~taklwu] and [~wchevreuil]!

> Use hasPathCapability to support recoverLease, setSafeMode, isFileClosed for 
> non-HDFS file system
> -
>
> Key: HBASE-27769
> URL: https://issues.apache.org/jira/browse/HBASE-27769
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha-3, 2.4.16, 2.5.3
>Reporter: Tak-Lon (Stephen) Wu
>Assignee: Tak-Lon (Stephen) Wu
>Priority: Major
> Fix For: HBASE-27740
>
>
> After HADOOP-18671, we will change hbase-asyncfs to use hasPathCapability to 
> support recoverLease, setSafeMode, and isFileClosed for non-HDFS file 
> systems, instead of casting directly to HDFS in RecoverLeaseFSUtils.
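
A sketch of what that probe could look like with the HADOOP-18671 API 
(assuming the LeaseRecoverable interface and the lease-recoverable path 
capability as shipped in Hadoop 3.3.6; the helper name is illustrative):
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.CommonPathCapabilities;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LeaseRecoverable;
import org.apache.hadoop.fs.Path;

static boolean recoverLeaseIfSupported(FileSystem fs, Path p) throws IOException {
  // Capability probe replaces "instanceof DistributedFileSystem".
  if (fs.hasPathCapability(p, CommonPathCapabilities.LEASE_RECOVERABLE)) {
    return ((LeaseRecoverable) fs).recoverLease(p);
  }
  return true; // no leases to recover on this file system
}
{code}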



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-27769) Use hasPathCapability to support recoverLease, setSafeMode, isFileClosed for non-HDFS file system

2023-11-22 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-27769:

Fix Version/s: HBASE-27740




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (HBASE-28216) HDFS erasure coding support for table data dirs

2023-11-22 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17788857#comment-17788857
 ] 

Wei-Chiu Chuang edited comment on HBASE-28216 at 11/22/23 6:35 PM:
---

No that's fine. We're pursuing that in a separate branch HBASE-27740.
(reminder to myself: finish review HBASE-27769 today)

I was under the impression that setErasureCodingPolicy requires HDFS admin 
privilege. But checking the code again, it looks like it requires just write 
privilege on that directory.


We recently added EC support in Apache Impala in IMPALA-11476 (check out the 
Cloudera doc 
https://docs.cloudera.com/cdw-runtime/1.5.1/impala-reference/topics/impala-ec-policies.html
 ; the doc talks about Ozone EC, but it works the same way for HDFS EC). We did 
not, however, add support for Impala to update table EC properties. It would 
be interesting to start thinking about giving applications more control over 
EC policies.


was (Author: jojochuang):
No that's fine. We're pursuing that in a separate branch HBASE-27740.
(reminder to myself: finish review HBASE-27769 today)

I was under the impression that setErasureCodingPolicy requires HDFS admin 
privilege. But checking the code again, it looks like it requires just write 
privilege on that directory.


We recently added EC support in Apache Impala (check out the Cloudera doc 
https://docs.cloudera.com/cdw-runtime/1.5.1/impala-reference/topics/impala-ec-policies.html
 ; the doc talks about Ozone EC, but it works the same way for HDFS EC), but we 
did not add support for Impala to update table EC properties.




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-28216) HDFS erasure coding support for table data dirs

2023-11-22 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17788857#comment-17788857
 ] 

Wei-Chiu Chuang commented on HBASE-28216:
-

No that's fine. We're pursuing that in a separate branch HBASE-27740.
(reminder to myself: finish review HBASE-27769 today)

I was under the impression that setErasureCodingPolicy requires HDFS admin 
privilege. But checking the code again, it looks like it requires just write 
privilege on that directory.


We recently added EC support in Apache Impala (check out the Cloudera doc 
https://docs.cloudera.com/cdw-runtime/1.5.1/impala-reference/topics/impala-ec-policies.html
 ; the doc talks about Ozone EC, but it works the same way for HDFS EC), but we 
did not add support for Impala to update table EC properties.




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-28216) HDFS erasure coding support for table data dirs

2023-11-22 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-28216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17788853#comment-17788853
 ] 

Wei-Chiu Chuang commented on HBASE-28216:
-

Makes sense to me, although [~taklwu] and I have been trying to reduce the 
reliance on DistributedFileSystem.
I suspect we want to expose the EC-related APIs in Hadoop Common, eventually.

Note:
{code}
hdfs ec -setPolicy -path /hbase/data/default/usertable -policy 
{code}
The command affects only new files in the directory. It does not automatically 
migrate files already in the directory. (We were planning to support that, but 
it got stalled.)
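
For example (the policy name is chosen for illustration), setting and then 
verifying the directory policy; converting existing files requires rewriting 
them:
{code}
hdfs ec -setPolicy -path /hbase/data/default/usertable -policy RS-6-3-1024k
hdfs ec -getPolicy -path /hbase/data/default/usertable
# Files already in the directory keep 3x replication until rewritten,
# e.g. copied out and back under the new policy.
{code}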




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27982) Synchronous replication should check if the file system supports truncate API

2023-07-18 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-27982:
---

 Summary: Synchronous replication should check if the file system 
supports truncate API
 Key: HBASE-27982
 URL: https://issues.apache.org/jira/browse/HBASE-27982
 Project: HBase
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang


Ok. I missed this but I was just told that the synchronous replication 
leverages the truncate() FS API.

https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/replication/SyncReplicationReplayWALManager.java#L282

Ozone does not implement truncate, so calling this method on the WAL FS will 
result in an exception. It would be a better user experience to alert the user 
from the start that this is not supported.
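
For illustration, a preflight check along those lines, assuming the Hadoop 
3.3+ path-capability constant for truncate (the method name is made up):
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.CommonPathCapabilities;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

static void checkTruncateSupported(FileSystem fs, Path walDir) throws IOException {
  // Fail fast instead of surfacing an exception deep in WAL replay.
  if (!fs.hasPathCapability(walDir, CommonPathCapabilities.FS_TRUNCATE)) {
    throw new UnsupportedOperationException(fs.getUri()
      + " does not support truncate; synchronous replication cannot be used here");
  }
}
{code}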



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27931) Update hadoop.version to 3.3.6

2023-06-14 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-27931:
---

 Summary: Update hadoop.version to 3.3.6
 Key: HBASE-27931
 URL: https://issues.apache.org/jira/browse/HBASE-27931
 Project: HBase
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


HBase's default Hadoop 3 version is 3.2.4, but HBase already supports Hadoop 
3.3.x.

The Hadoop 3.2 line has not been updated for over a year. It is perhaps time 
to update the Hadoop dependency to the 3.3.x line. (I'll start a DISCUSS 
thread if the test goes well.)

The 3.3.6 RC is out, which fixes a bunch of CVEs, and I'd like to test HBase 
against it. Additionally, Hadoop 3.3.6 will permit us to use non-HDFS storage 
for the WAL.
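
For a local smoke test, something like the following should do (assuming the 
hadoop-three.version property from HBase's pom; the exact flags may differ per 
branch):
{code}
mvn clean install -DskipTests -Dhadoop.profile=3.0 -Dhadoop-three.version=3.3.6
{code}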



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (HBASE-27877) Hbase ImportTsv doesn't take ofs:// as a FS

2023-05-23 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17725524#comment-17725524
 ] 

Wei-Chiu Chuang edited comment on HBASE-27877 at 5/23/23 6:07 PM:
--

Try specifying -Dfs.defaultFS=ofs://ozone1/ and see if that addresses the issue.
Maybe the solution is to update the HBase reference guide, stating that the 
workaround is to add this parameter whenever the error message is seen.


was (Author: jojochuang):
Try specifying -Dfs.defaultFS=ofs://ozone1/ and see if that addresses the issue.

> Hbase ImportTsv doesn't take ofs:// as a FS
> ---
>
> Key: HBASE-27877
> URL: https://issues.apache.org/jira/browse/HBASE-27877
> Project: HBase
>  Issue Type: Bug
>Reporter: Pratyush Bhatt
>Priority: Major
>  Labels: ozone
>
> While running the bulkLoad command:
> {noformat}
> hbase org.apache.hadoop.hbase.mapreduce.ImportTsv 
> -Dhbase.fs.tmp.dir=ofs://ozone1/vol1/bucket1/hbase/bulkload 
> -Dimporttsv.columns=HBASE_ROW_KEY,d:c1,d:c2 
> -Dimporttsv.bulk.output=ofs://ozone1/vol1/bucket1/hbase/test_VerifyHBaseNoWriteBulkloadHDFSQuota/hfiles
>  table_dau3f3374e 
> ofs://ozone1/vol1/bucket1/hbase/test_VerifyHBaseNoWriteBulkloadHDFSQuota/data.tsv{noformat}
> Getting:
> {noformat}
> 2023-05-22 17:01:23,263|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|Exception in thread "main" 
> java.lang.IllegalArgumentException: Wrong FS: 
> ofs://ozone1/vol1/bucket1/hbase/bulkload/partitions_72cbb1f1-d9b6-46a4-be39-e27a427c5842,
>  expected: hdfs://ns1{noformat}
> Complete trace:
> {noformat}
> server-resourcemanager-3.1.1.7.1.8.3-339.jar:/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p3.40935426/lib/hadoop/libexec/../../hadoop-yarn/.//hadoop-yarn-registry-3.1.1.7.1.8.3-339.jar
> 2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client 
> environment:java.library.path=/opt/cloudera/parcels/CDH-7.1.8-1.cdh7.1.8.p3.40935426/bin/../lib/hadoop/lib/native
> 2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
> 2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
> 2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:os.name=Linux
> 2023-05-22 17:01:19,925|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:os.arch=amd64
> 2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:os.version=5.4.0-135-generic
> 2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:user.name=hrt_qa
> 2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:user.home=/home/hrt_qa
> 2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:user.dir=/hwqe/hadoopqe
> 2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:os.memory.free=108MB
> 2023-05-22 17:01:19,926|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:os.memory.max=228MB
> 2023-05-22 17:01:19,927|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Client environment:os.memory.total=145MB
> 2023-05-22 17:01:19,930|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 INFO 
> zookeeper.ZooKeeper: Initiating client connection, 
> connectString=ozn-lease16-1.ozn-lease16.root.hwx.site:2181,ozn-lease16-2.ozn-lease16.root.hwx.site:2181,ozn-lease16-3.ozn-lease16.root.hwx.site:2181
>  sessionTimeout=3 
> watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$16/169226355@454e763
> 2023-05-22 17:01:19,942|INFO|MainThread|machine.py:203 - 
> run()||GUID=427ca366-5f0f-426c-a2e5-0c4e12bdae2d|23/05/22 17:01:19 

[jira] [Commented] (HBASE-27877) Hbase ImportTsv doesn't take ofs:// as a FS

2023-05-23 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17725524#comment-17725524
 ] 

Wei-Chiu Chuang commented on HBASE-27877:
-

Try specifying -Dfs.defaultFS=ofs://ozone1/ and see if that addresses the issue.
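
For example, the failing invocation from the description with the suggested 
override added (untested):
{noformat}
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dfs.defaultFS=ofs://ozone1/ \
  -Dhbase.fs.tmp.dir=ofs://ozone1/vol1/bucket1/hbase/bulkload \
  -Dimporttsv.columns=HBASE_ROW_KEY,d:c1,d:c2 \
  -Dimporttsv.bulk.output=ofs://ozone1/vol1/bucket1/hbase/test_VerifyHBaseNoWriteBulkloadHDFSQuota/hfiles \
  table_dau3f3374e \
  ofs://ozone1/vol1/bucket1/hbase/test_VerifyHBaseNoWriteBulkloadHDFSQuota/data.tsv
{noformat}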


[jira] [Updated] (HBASE-27740) Support Ozone as a WAL backing storage

2023-05-22 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-27740:

Labels: ozone  (was: )

> Support Ozone as a WAL backing storage
> --
>
> Key: HBASE-27740
> URL: https://issues.apache.org/jira/browse/HBASE-27740
> Project: HBase
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Assignee: Tak-Lon (Stephen) Wu
>Priority: Major
>  Labels: ozone
> Fix For: 2.6.0, 3.0.0-alpha-4
>
>
> As discussed in the 
> [thread|https://lists.apache.org/thread/tcrp8vxxs3z12y36mpzx35txhpp7tvxv], 
> we'd like to make HBase workloads possible on Ozone.
> This feature is to be built on top of 
> # HDDS-7593 (support hsync and lease recovery in Ozone), and
> # HADOOP-18671 (Add recoverLease(), setSafeMode(), isFileClosed() APIs to 
> FileSystem).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-27877) Hbase ImportTsv doesn't take ofs:// as a FS

2023-05-22 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-27877:

Labels: ozone  (was: )


[jira] [Moved] (HBASE-27877) Hbase ImportTsv doesn't take ofs:// as a FS

2023-05-22 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang moved HDDS-8673 to HBASE-27877:
---

  Component/s: (was: Ozone Filesystem)
  Key: HBASE-27877  (was: HDDS-8673)
Affects Version/s: (was: 1.4.0)
 Workflow: no-reopen-closed, patch-avail  (was: patch-available, 
re-open possible)
   Issue Type: Bug  (was: Task)
  Project: HBase  (was: Apache Ozone)


[jira] [Resolved] (HBASE-27693) Support for Hadoop's LDAP Authentication mechanism (Web UI only)

2023-04-28 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HBASE-27693.
-
Fix Version/s: 3.0.0-alpha-4
   Resolution: Fixed

> Support for Hadoop's LDAP Authentication mechanism (Web UI only)
> ----------------------------------------------------------------
>
> Key: HBASE-27693
> URL: https://issues.apache.org/jira/browse/HBASE-27693
> Project: HBase
>  Issue Type: New Feature
>Reporter: Yash Dodeja
>Assignee: Yash Dodeja
>Priority: Major
> Fix For: 3.0.0-alpha-4
>
> Attachments: Screenshot 2023-03-27 at 10.53.26 AM.png
>
>
> Hadoop's AuthenticationFilter has changed and now supports the LDAP 
> mechanism too. HBase still uses an older version tightly coupled with 
> Kerberos and SPNEGO as the only auth mechanisms. HADOOP-12082 added 
> support for multiple auth handlers, including LDAP. On trying to use Hadoop's 
> AuthenticationFilterInitializer in hbase.http.filter.initializers, there is a 
> casting exception, as HBase requires it to extend 
> org.apache.hadoop.hbase.http.FilterInitializer.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-27693) Support for Hadoop's LDAP Authentication mechanism (Web UI only)

2023-04-28 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-27693:

Summary: Support for Hadoop's LDAP Authentication mechanism (Web UI only)  
(was: Support for Hadoop's LDAP Authentication mechanism)

> Support for Hadoop's LDAP Authentication mechanism (Web UI only)
> 
>
> Key: HBASE-27693
> URL: https://issues.apache.org/jira/browse/HBASE-27693
> Project: HBase
>  Issue Type: New Feature
>Reporter: Yash Dodeja
>Assignee: Yash Dodeja
>Priority: Major
> Attachments: Screenshot 2023-03-27 at 10.53.26 AM.png
>
>
> Hadoop's AuthenticationFilter has changed and now supports the LDAP 
> mechanism too. HBase still uses an older version tightly coupled with 
> Kerberos and SPNEGO as the only auth mechanisms. HADOOP-12082 added 
> support for multiple auth handlers, including LDAP. On trying to use Hadoop's 
> AuthenticationFilterInitializer in hbase.http.filter.initializers, there is a 
> casting exception, as HBase requires it to extend 
> org.apache.hadoop.hbase.http.FilterInitializer.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work started] (HBASE-27746) Check if the file system supports storage policy before invoking setStoragePolicy()

2023-04-19 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-27746 started by Wei-Chiu Chuang.
---
> Check if the file system supports storage policy before invoking 
> setStoragePolicy()
> ---
>
> Key: HBASE-27746
> URL: https://issues.apache.org/jira/browse/HBASE-27746
> Project: HBase
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>
> Found these messages on an Ozone cluster:
> {noformat}
> 2023-03-20 12:27:09,185 WARN org.apache.hadoop.hbase.util.CommonFSUtils: 
> Unable to set storagePolicy=HOT for 
> path=ofs://ozone1/vol1/bucket1/hbase/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc.
>  DEBUG log level might have more details.
> java.lang.UnsupportedOperationException: RootedOzoneFileSystem doesn't 
> support setStoragePolicy
> at 
> org.apache.hadoop.fs.FileSystem.setStoragePolicy(FileSystem.java:3227)
> at 
> org.apache.hadoop.hbase.util.CommonFSUtils.invokeSetStoragePolicy(CommonFSUtils.java:521)
> at 
> org.apache.hadoop.hbase.util.CommonFSUtils.setStoragePolicy(CommonFSUtils.java:504)
> at 
> org.apache.hadoop.hbase.util.CommonFSUtils.setStoragePolicy(CommonFSUtils.java:477)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.setStoragePolicy(HRegionFileSystem.java:225)
> at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:275)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:6387)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1115)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1112)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> Ozone does not support storage policies. If we use the 
> FileSystem.hasPathCapability() API to check before invoking setStoragePolicy(), 
> these warning messages can be avoided.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HBASE-27746) Check if the file system supports storage policy before invoking setStoragePolicy()

2023-04-19 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HBASE-27746:
---

Assignee: Wei-Chiu Chuang

> Check if the file system supports storage policy before invoking 
> setStoragePolicy()
> ---
>
> Key: HBASE-27746
> URL: https://issues.apache.org/jira/browse/HBASE-27746
> Project: HBase
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>
> Found these messages on an Ozone cluster:
> {noformat}
> 2023-03-20 12:27:09,185 WARN org.apache.hadoop.hbase.util.CommonFSUtils: 
> Unable to set storagePolicy=HOT for 
> path=ofs://ozone1/vol1/bucket1/hbase/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc.
>  DEBUG log level might have more details.
> java.lang.UnsupportedOperationException: RootedOzoneFileSystem doesn't 
> support setStoragePolicy
> at 
> org.apache.hadoop.fs.FileSystem.setStoragePolicy(FileSystem.java:3227)
> at 
> org.apache.hadoop.hbase.util.CommonFSUtils.invokeSetStoragePolicy(CommonFSUtils.java:521)
> at 
> org.apache.hadoop.hbase.util.CommonFSUtils.setStoragePolicy(CommonFSUtils.java:504)
> at 
> org.apache.hadoop.hbase.util.CommonFSUtils.setStoragePolicy(CommonFSUtils.java:477)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.setStoragePolicy(HRegionFileSystem.java:225)
> at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:275)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:6387)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1115)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1112)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> Ozone does not support storage policies. If we use the 
> FileSystem.hasPathCapability() API to check before invoking setStoragePolicy(), 
> these warning messages can be avoided.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27746) Check if the file system supports storage policy before invoking setStoragePolicy()

2023-03-23 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-27746:
---

 Summary: Check if the file system supports storage policy before 
invoking setStoragePolicy()
 Key: HBASE-27746
 URL: https://issues.apache.org/jira/browse/HBASE-27746
 Project: HBase
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang


Found these messages on an Ozone cluster:

{noformat}
2023-03-20 12:27:09,185 WARN org.apache.hadoop.hbase.util.CommonFSUtils: Unable 
to set storagePolicy=HOT for 
path=ofs://ozone1/vol1/bucket1/hbase/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc.
 DEBUG log level might have more details.
java.lang.UnsupportedOperationException: RootedOzoneFileSystem doesn't support 
setStoragePolicy
at 
org.apache.hadoop.fs.FileSystem.setStoragePolicy(FileSystem.java:3227)
at 
org.apache.hadoop.hbase.util.CommonFSUtils.invokeSetStoragePolicy(CommonFSUtils.java:521)
at 
org.apache.hadoop.hbase.util.CommonFSUtils.setStoragePolicy(CommonFSUtils.java:504)
at 
org.apache.hadoop.hbase.util.CommonFSUtils.setStoragePolicy(CommonFSUtils.java:477)
at 
org.apache.hadoop.hbase.regionserver.HRegionFileSystem.setStoragePolicy(HRegionFileSystem.java:225)
at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:275)
at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:6387)
at 
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1115)
at 
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1112)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{noformat}

Ozone does not support storage policies. If we use the FileSystem.hasPathCapability() 
API to check before invoking setStoragePolicy(), these warning messages can be avoided.
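
A minimal sketch of the proposed guard (assuming Hadoop 3.3+, where 
FileSystem.hasPathCapability() and CommonPathCapabilities.FS_STORAGEPOLICY are 
available; the helper name is illustrative, not the final patch):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.CommonPathCapabilities;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class StoragePolicyHelper {
  /**
   * Set a storage policy only when the file system declares support for it,
   * so file systems like Ozone never hit UnsupportedOperationException.
   */
  static void setStoragePolicyIfSupported(FileSystem fs, Path path,
      String policy) throws IOException {
    if (fs.hasPathCapability(path, CommonPathCapabilities.FS_STORAGEPOLICY)) {
      fs.setStoragePolicy(path, policy);
    }
  }
}
{code}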



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-27740) Support Ozone as a WAL backing storage

2023-03-20 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-27740:

Description: 
As discussed in the 
[thread|https://lists.apache.org/thread/tcrp8vxxs3z12y36mpzx35txhpp7tvxv], we'd 
like to make HBase workloads possible on Ozone.

This feature is to be built on top of 
# HDDS-7593 (support hsync and lease recovery in Ozone), and
# HADOOP-18671 (Add recoverLease(), setSafeMode(), isFileClosed() APIs to 
FileSystem).


  was:
As discussed in the 
[thread|https://lists.apache.org/thread/tcrp8vxxs3z12y36mpzx35txhpp7tvxv], we'd 
like to make HBase workloads possible on Ozone.

This features is to be built on top of 
# HDDS-7593 (support hsync and lease recovery in Ozone), and
# HADOOP-18671 (Add recoverLease(), setSafeMode(), isFileClosed() APIs to 
FileSystem).



> Support Ozone as a WAL backing storage
> --
>
> Key: HBASE-27740
> URL: https://issues.apache.org/jira/browse/HBASE-27740
> Project: HBase
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> As discussed in the 
> [thread|https://lists.apache.org/thread/tcrp8vxxs3z12y36mpzx35txhpp7tvxv], 
> we'd like to make HBase workloads possible on Ozone.
> This feature is to be built on top of 
> # HDDS-7593 (support hsync and lease recovery in Ozone), and
> # HADOOP-18671 (Add recoverLease(), setSafeMode(), isFileClosed() APIs to 
> FileSystem).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27740) Support Ozone as a WAL backing storage

2023-03-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17702979#comment-17702979
 ] 

Wei-Chiu Chuang commented on HBASE-27740:
-

cc: [~swu]

> Support Ozone as a WAL backing storage
> --
>
> Key: HBASE-27740
> URL: https://issues.apache.org/jira/browse/HBASE-27740
> Project: HBase
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> As discussed in the 
> [thread|https://lists.apache.org/thread/tcrp8vxxs3z12y36mpzx35txhpp7tvxv], 
> we'd like to make HBase workloads possible on Ozone.
> This feature is to be built on top of 
> # HDDS-7593 (support hsync and lease recovery in Ozone), and
> # HADOOP-18671 (Add recoverLease(), setSafeMode(), isFileClosed() APIs to 
> FileSystem).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27740) Support Ozone as a WAL backing storage

2023-03-20 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-27740:
---

 Summary: Support Ozone as a WAL backing storage
 Key: HBASE-27740
 URL: https://issues.apache.org/jira/browse/HBASE-27740
 Project: HBase
  Issue Type: New Feature
Reporter: Wei-Chiu Chuang


As discussed in the 
[thread|https://lists.apache.org/thread/tcrp8vxxs3z12y36mpzx35txhpp7tvxv], we'd 
like to make HBase workloads possible on Ozone.

This feature is to be built on top of 
# HDDS-7593 (support hsync and lease recovery in Ozone), and
# HADOOP-18671 (Add recoverLease(), setSafeMode(), isFileClosed() APIs to 
FileSystem; see the sketch below).
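
As a rough illustration of the HADOOP-18671 surface from the HBase side (a 
sketch assuming the org.apache.hadoop.fs.LeaseRecoverable interface added by 
that change; the helper name is hypothetical):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LeaseRecoverable;
import org.apache.hadoop.fs.Path;

public final class WalLeaseHelper {
  /**
   * Recover the lease on an abandoned WAL file without casting to
   * DistributedFileSystem, so HDFS and (once HDDS-7593 lands) Ozone both work.
   * Returns true when the file needs no recovery on this file system.
   */
  static boolean recoverWalLease(FileSystem fs, Path wal) throws IOException {
    if (fs instanceof LeaseRecoverable) {
      return ((LeaseRecoverable) fs).recoverLease(wal);
    }
    return true;
  }
}
{code}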




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27739) Remove Java reflection used in FSUtils.create()

2023-03-20 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-27739:
---

 Summary: Remove Java reflection used in FSUtils.create() 
 Key: HBASE-27739
 URL: https://issues.apache.org/jira/browse/HBASE-27739
 Project: HBase
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang


FSUtils.create() uses reflection to access an HDFS API, 
DistributedFileSystem.create(), that supports favored nodes.

This API was added in Hadoop 2.1.0-beta (HDFS-2576), so we can use it directly 
without reflection.
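
The direct call would look roughly like this (a sketch; the wrapper method is 
hypothetical, but the eight-argument create() overload has been on 
DistributedFileSystem since HDFS-2576):

{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public final class FavoredNodeCreate {
  /** Create a file whose blocks prefer the given favored nodes. */
  static FSDataOutputStream create(DistributedFileSystem dfs, Path path,
      FsPermission perm, int bufferSize, short replication, long blockSize,
      InetSocketAddress[] favoredNodes) throws IOException {
    // Direct invocation replaces the Method.invoke() dance in FSUtils.create().
    return dfs.create(path, perm, true /* overwrite */, bufferSize, replication,
        blockSize, null /* progress */, favoredNodes);
  }
}
{code}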



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27738) Remove DNS reflection helper method

2023-03-20 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-27738:
---

 Summary: Remove DNS reflection helper method
 Key: HBASE-27738
 URL: https://issues.apache.org/jira/browse/HBASE-27738
 Project: HBase
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang


HBASE-14594 used a reflection helper method to call a Hadoop API added in 
2.8.0. We should remove this reflection from the master branch now.

https://github.com/apache/hbase/blob/fbe3b90e0c4eef1dc13fb2a5ed9381106ca671dd/hbase-common/src/main/java/org/apache/hadoop/hbase/util/DNS.java#L57
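
The direct call the helper wraps (a sketch; the three-argument overload of 
DNS.getDefaultHost() exists since Hadoop 2.8.0, and the final argument enables 
fallback resolution when reverse DNS fails):

{code:java}
import java.net.UnknownHostException;
import org.apache.hadoop.net.DNS;

public final class DnsExample {
  /** Resolve the default host name for a NIC, without reflection. */
  static String defaultHost(String nic, String nameServer)
      throws UnknownHostException {
    return DNS.getDefaultHost(nic, nameServer, true /* tryfallbackResolution */);
  }
}
{code}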



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-26734) FanOutOneBlockAsyncDFSOutputHelper stuck when run against hadoop-3.3.1

2022-02-10 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490051#comment-17490051
 ] 

Wei-Chiu Chuang commented on HBASE-26734:
-

I wrote that part of the code, so I'll take a closer look. slf4j-log4j2 doesn't 
seem related to this issue.
Is it reproducible? Could you run it at debug log level? I'd just like to confirm 
it's not older Hadoop artifacts that slipped in accidentally.

> FanOutOneBlockAsyncDFSOutputHelper stuck when run against hadoop-3.3.1
> --
>
> Key: HBASE-26734
> URL: https://issues.apache.org/jira/browse/HBASE-26734
> Project: HBase
>  Issue Type: Sub-task
> Environment: JDK: jdk1.8.0_221
> Hadoop: hadoop-3.3.1
> Hbase: hbase-2.3.1 / hbase-2.3.7
>Reporter: chen qing
>Priority: Major
> Attachments: hbase-root-master-master.log, 
> hbase-root-regionserver-slave01.log
>
>
> I just had the same problem when I started the HBase cluster. The HRegionServers 
> started, but the HMaster threw an exception.
> This is HMaster's log:
> {code:java}
> 2022-02-05 18:07:51,323 WARN  [RS-EventLoopGroup-1-1] 
> concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete()
> java.lang.IllegalArgumentException: object is not an instance of declaring 
> class
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<init>(ProtobufDecoder.java:69)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:343)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:425)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:183)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:419)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:477)
>         at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:472)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:570)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:549)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604)
>         at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:615)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:653)
>         at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:529)
>         at 
> 

[jira] [Assigned] (HBASE-26046) [JDK17] Add a JDK17 profile

2022-01-27 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HBASE-26046:
---

Assignee: (was: Wei-Chiu Chuang)

> [JDK17] Add a JDK17 profile
> ---
>
> Key: HBASE-26046
> URL: https://issues.apache.org/jira/browse/HBASE-26046
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> While HBase builds fine with JDK17, tests fail because a number of Java SDK 
> modules are no longer exposed to unnamed modules by default. We need to open 
> them up.
> Without them, the tests fail with errors like:
> {noformat}
> [ERROR] Tests run: 6, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 0.469 
> s <<< FAILURE! - in org.apache.hadoop.hbase.rest.model.TestNamespacesModel
> [ERROR] org.apache.hadoop.hbase.rest.model.TestNamespacesModel.testBuildModel 
>  Time elapsed: 0.273 s  <<< ERROR!
> java.lang.ExceptionInInitializerError
> at 
> org.apache.hadoop.hbase.rest.model.TestNamespacesModel.<init>(TestNamespacesModel.java:43)
> Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make 
> protected final java.lang.Class 
> java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int) throws 
> java.lang.ClassFormatError accessible: module java.base does not "opens 
> java.lang" to unnamed module @56ef9176
> at 
> org.apache.hadoop.hbase.rest.model.TestNamespacesModel.<init>(TestNamespacesModel.java:43)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (HBASE-26046) [JDK17] Add a JDK17 profile

2022-01-27 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17483558#comment-17483558
 ] 

Wei-Chiu Chuang commented on HBASE-26046:
-

I had worked on it, but it's a shame: I thought the PR was already up for review.

You would need these for JDK17:
https://github.com/jojochuang/hbase/commit/b909db7ca7c221308ad5aba1ea58317c77358b94

I'm tied up with the log4j stuff right now, so I won't be able to continue on it. 
Feel free to pick this up, [~ndimiduk].
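
For anyone picking this up: the change is essentially a JDK17 Maven profile 
that passes --add-opens/--add-exports flags to the test JVM, along these lines 
(a partial sketch; the authoritative list is in the commit linked above):

{noformat}
--add-opens java.base/java.lang=ALL-UNNAMED
--add-opens java.base/java.lang.reflect=ALL-UNNAMED
--add-opens java.base/java.util=ALL-UNNAMED
--add-opens java.base/java.util.concurrent=ALL-UNNAMED
{noformat}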

> [JDK17] Add a JDK17 profile
> ---
>
> Key: HBASE-26046
> URL: https://issues.apache.org/jira/browse/HBASE-26046
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> While HBase builds fine with JDK17, tests fail because a number of Java SDK 
> modules are no longer exposed to unnamed modules by default. We need to open 
> them up.
> Without them, the tests fail with errors like:
> {noformat}
> [ERROR] Tests run: 6, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 0.469 
> s <<< FAILURE! - in org.apache.hadoop.hbase.rest.model.TestNamespacesModel
> [ERROR] org.apache.hadoop.hbase.rest.model.TestNamespacesModel.testBuildModel 
>  Time elapsed: 0.273 s  <<< ERROR!
> java.lang.ExceptionInInitializerError
> at 
> org.apache.hadoop.hbase.rest.model.TestNamespacesModel.<init>(TestNamespacesModel.java:43)
> Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make 
> protected final java.lang.Class 
> java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int) throws 
> java.lang.ClassFormatError accessible: module java.base does not "opens 
> java.lang" to unnamed module @56ef9176
> at 
> org.apache.hadoop.hbase.rest.model.TestNamespacesModel.<init>(TestNamespacesModel.java:43)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (HBASE-26691) Replacing log4j with reload4j for branch-2.x

2022-01-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17479838#comment-17479838
 ] 

Wei-Chiu Chuang commented on HBASE-26691:
-

There's a DISCUSS thread in Hadoop's dev ML. We should start one in HBase's dev 
ML.

> Replacing log4j with reload4j for branch-2.x
> 
>
> Key: HBASE-26691
> URL: https://issues.apache.org/jira/browse/HBASE-26691
> Project: HBase
>  Issue Type: Task
>  Components: logging
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.5.0, 2.4.10
>
>
> There are several new CVEs for log4j1 now.
> As it is not suitable to upgrade to log4j2 for 2.x releases, let's replace 
> the log4j1 dependencies with reload4j.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (HBASE-26691) Replacing log4j with reload4j for branch-2.x

2022-01-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17479836#comment-17479836
 ] 

Wei-Chiu Chuang commented on HBASE-26691:
-

reload4j is a drop-in replacement for log4j 1. 

In reality, though, the shading makes it less trivial than it sounds...

> Replacing log4j with reload4j for branch-2.x
> 
>
> Key: HBASE-26691
> URL: https://issues.apache.org/jira/browse/HBASE-26691
> Project: HBase
>  Issue Type: Task
>  Components: logging
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.5.0, 2.4.10
>
>
> There are several new CVEs for log4j1 now.
> As it is not suitable to upgrade to log4j2 for 2.x releases, let's replace 
> the log4j1 dependencies with reload4j.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (HBASE-26691) Replacing log4j with reload4j for branch-2.x

2022-01-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17479781#comment-17479781
 ] 

Wei-Chiu Chuang commented on HBASE-26691:
-

+1

> Replacing log4j with reload4j for branch-2.x
> 
>
> Key: HBASE-26691
> URL: https://issues.apache.org/jira/browse/HBASE-26691
> Project: HBase
>  Issue Type: Task
>  Components: logging
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Critical
> Fix For: 2.5.0, 2.4.10
>
>
> There are several new CVEs for log4j1 now.
> As it is not suitable to upgrade to log4j2 for 2.x releases, let's replace 
> the log4j1 dependencies with reload4j.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (HBASE-22953) Supporting Hadoop 3.3.0

2021-12-20 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HBASE-22953.
-
Fix Version/s: 2.3.0
   3.0.0-alpha-1
   Resolution: Fixed

> Supporting Hadoop 3.3.0
> ---
>
> Key: HBASE-22953
> URL: https://issues.apache.org/jira/browse/HBASE-22953
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Wei-Chiu Chuang
>Priority: Major
> Fix For: 2.3.0, 3.0.0-alpha-1
>
>
> The Hadoop community has started to discuss a 3.3.0 release. 
> [http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201908.mbox/%3CCAD%2B%2BeCneLtC%2BkfxRRKferufnNxhaXXGa0YPaVp%3DEBbc-R5JfqA%40mail.gmail.com%3E]
> While still early, it wouldn't hurt to start exploring what's coming in 
> Hadoop 3.3.0. In particular, there are a bunch of new features that brings in 
> all sorts of new dependencies.
>  
> I will use this Jira to list things that are related to Hadoop 3.3.0.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (HBASE-26047) [JDK17] Track JDK17 unit test failures

2021-10-18 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17429981#comment-17429981
 ] 

Wei-Chiu Chuang commented on HBASE-26047:
-

Mind sharing more details? HBASE-25516 is supposed to fix the modifiers field 
exception.

> [JDK17] Track JDK17 unit test failures
> --
>
> Key: HBASE-26047
> URL: https://issues.apache.org/jira/browse/HBASE-26047
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> As of now, there are still two failed unit tests after exporting JDK internal 
> modules and the modifier access hack.
> {noformat}
> [ERROR] Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.217 
> s <<< FAILURE! - in org.apache.hadoop.hbase.io.TestHeapSize
> [ERROR] org.apache.hadoop.hbase.io.TestHeapSize.testSizes  Time elapsed: 
> 0.041 s  <<< FAILURE!
> java.lang.AssertionError: expected:<160> but was:<152>
> at 
> org.apache.hadoop.hbase.io.TestHeapSize.testSizes(TestHeapSize.java:335)
> [ERROR] org.apache.hadoop.hbase.io.TestHeapSize.testNativeSizes  Time 
> elapsed: 0.01 s  <<< FAILURE!
> java.lang.AssertionError: expected:<72> but was:<64>
> at 
> org.apache.hadoop.hbase.io.TestHeapSize.testNativeSizes(TestHeapSize.java:134)
> [INFO] Running org.apache.hadoop.hbase.io.Tes
> [ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.697 
> s <<< FAILURE! - in org.apache.hadoop.hbase.ipc.TestBufferChain
> [ERROR] org.apache.hadoop.hbase.ipc.TestBufferChain.testWithSpy  Time 
> elapsed: 0.537 s  <<< ERROR!
> java.lang.NullPointerException: Cannot enter synchronized block because 
> "this.closeLock" is null
> at 
> org.apache.hadoop.hbase.ipc.TestBufferChain.testWithSpy(TestBufferChain.java:119)
> {noformat}
> It appears that JDK17 makes the heap size estimate different than before. Not 
> sure why.
> TestBufferChain.testWithSpy  failure might be because of yet another 
> unexported module.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-26047) [JDK17] Track JDK17 unit test failures

2021-10-11 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17427411#comment-17427411
 ] 

Wei-Chiu Chuang commented on HBASE-26047:
-

Thanks. I got sidetracked by other projects.
It would be great to understand the failure in TestHeapSize. The heap size 
estimate is quite involved and I am not confident I can address it. IIRC 
TestThreadLocalPoolMap is similar.

TestSecureExportSnapshot
TestMobSecureExportSnapshot
TestVerifyReplicationCrossDiffHdfs
--> they all failed with some error inside distcp/MapReduce. To troubleshoot 
them, we need to enable logging for HDFS/YARN in the UTs.

> [JDK17] Track JDK17 unit test failures
> --
>
> Key: HBASE-26047
> URL: https://issues.apache.org/jira/browse/HBASE-26047
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> As of now, there are still two failed unit tests after exporting JDK internal 
> modules and the modifier access hack.
> {noformat}
> [ERROR] Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.217 
> s <<< FAILURE! - in org.apache.hadoop.hbase.io.TestHeapSize
> [ERROR] org.apache.hadoop.hbase.io.TestHeapSize.testSizes  Time elapsed: 
> 0.041 s  <<< FAILURE!
> java.lang.AssertionError: expected:<160> but was:<152>
> at 
> org.apache.hadoop.hbase.io.TestHeapSize.testSizes(TestHeapSize.java:335)
> [ERROR] org.apache.hadoop.hbase.io.TestHeapSize.testNativeSizes  Time 
> elapsed: 0.01 s  <<< FAILURE!
> java.lang.AssertionError: expected:<72> but was:<64>
> at 
> org.apache.hadoop.hbase.io.TestHeapSize.testNativeSizes(TestHeapSize.java:134)
> [INFO] Running org.apache.hadoop.hbase.io.Tes
> [ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.697 
> s <<< FAILURE! - in org.apache.hadoop.hbase.ipc.TestBufferChain
> [ERROR] org.apache.hadoop.hbase.ipc.TestBufferChain.testWithSpy  Time 
> elapsed: 0.537 s  <<< ERROR!
> java.lang.NullPointerException: Cannot enter synchronized block because 
> "this.closeLock" is null
> at 
> org.apache.hadoop.hbase.ipc.TestBufferChain.testWithSpy(TestBufferChain.java:119)
> {noformat}
> It appears that JDK17 makes the heap size estimate different than before. Not 
> sure why.
> TestBufferChain.testWithSpy  failure might be because of yet another 
> unexported module.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-26198) RegionServer dead on hadoop 3.3.1: NoSuchMethodError LocatedBlocks.getLocations()

2021-08-16 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17400070#comment-17400070
 ] 

Wei-Chiu Chuang commented on HBASE-26198:
-

I am pretty sure CDH6.2 HBase doesn't compile against Hadoop 3.3.1.

Did you simply replace the CDH Hadoop jars with Apache Hadoop 3.3.1? If so, I 
can believe it doesn't work out of the box due to HDFS-15255.

There are a number of HBase changes you would need to apply on top of CDH HBase 
6.2.0. If you can apply them and recompile against Apache Hadoop 3.3.1, you 
should be able to run.

> RegionServer dead on hadoop 3.3.1: NoSuchMethodError 
> LocatedBlocks.getLocations()
> -
>
> Key: HBASE-26198
> URL: https://issues.apache.org/jira/browse/HBASE-26198
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: mengqi
>Priority: Major
> Attachments: 4ad46153842c29898189b90fc986925c87966ce6.diff, 
> image-2021-08-16-16-24-32-418.png
>
>
> !image-2021-08-16-16-24-32-418.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-26198) regionserver dead on hadoop 3.3.1

2021-08-16 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17399662#comment-17399662
 ] 

Wei-Chiu Chuang commented on HBASE-26198:
-

I suspect it was because the method signature of the private HDFS API 
LocatedBlock.getLocations() was changed by HDFS-15255. But even in that case it 
shouldn't fail with that exception, especially since I ran the UTs, which should 
cover this code path.

Can you check whether you have multiple Hadoop libraries on the classpath? It's 
likely you have both Hadoop 3.3.1 and 3.1.2 (the default) on the classpath.
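
A quick way to check (assuming the standard hbase launcher script is on the 
PATH):

{noformat}
hbase classpath | tr ':' '\n' | grep -i hadoop | sort -u
{noformat}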

> regionserver dead on hadoop 3.3.1
> -
>
> Key: HBASE-26198
> URL: https://issues.apache.org/jira/browse/HBASE-26198
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: mengqi
>Priority: Major
> Attachments: 4ad46153842c29898189b90fc986925c87966ce6.diff, 
> image-2021-08-16-16-24-32-418.png
>
>
> !image-2021-08-16-16-24-32-418.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-26160) Configurable disallowlist for live editing of loglevels

2021-08-04 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HBASE-26160.
-
Resolution: Fixed

Thank you, [~bbeaudreault] for contributing the patch.

> Configurable disallowlist for live editing of loglevels
> ---
>
> Key: HBASE-26160
> URL: https://issues.apache.org/jira/browse/HBASE-26160
> Project: HBase
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
> Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.6
>
>
> We currently use log4j/slf4j for audit logging in AccessController. This is 
> convenient but presents a security/compliance risk because we allow 
> live-editing of logLevels via the UI. One can simply set the logger to OFF 
> and then perform actions un-audited.
> We should add a configuration for setting certain log levels to read-only.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26160) Configurable disallowlist for live editing of loglevels

2021-08-04 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-26160:

Fix Version/s: 2.4.6

> Configurable disallowlist for live editing of loglevels
> ---
>
> Key: HBASE-26160
> URL: https://issues.apache.org/jira/browse/HBASE-26160
> Project: HBase
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
> Fix For: 2.5.0, 3.0.0-alpha-2, 2.4.6
>
>
> We currently use log4j/slf4j for audit logging in AccessController. This is 
> convenient but presents a security/compliance risk because we allow 
> live-editing of logLevels via the UI. One can simply set the logger to OFF 
> and then perform actions un-audited.
> We should add a configuration for setting certain log levels to read-only.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26160) Configurable disallowlist for live editing of loglevels

2021-08-04 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-26160:

Fix Version/s: 3.0.0-alpha-2
   2.5.0

> Configurable disallowlist for live editing of loglevels
> ---
>
> Key: HBASE-26160
> URL: https://issues.apache.org/jira/browse/HBASE-26160
> Project: HBase
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
> Fix For: 2.5.0, 3.0.0-alpha-2
>
>
> We currently use log4j/slf4j for audit logging in AccessController. This is 
> convenient but presents a security/compliance risk because we allow 
> live-editing of logLevels via the UI. One can simply set the logger to OFF 
> and then perform actions un-audited.
> We should add a configuration for setting certain log levels to read-only.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-21946) Use ByteBuffer pread instead of byte[] pread in HFileBlock when applicable

2021-07-26 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-21946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HBASE-21946.
-
Resolution: Fixed

> Use ByteBuffer pread instead of byte[] pread in HFileBlock when applicable
> --
>
> Key: HBASE-21946
> URL: https://issues.apache.org/jira/browse/HBASE-21946
> Project: HBase
>  Issue Type: Improvement
>  Components: Offheaping
>Reporter: Zheng Hu
>Assignee: Wei-Chiu Chuang
>Priority: Critical
> Fix For: 2.5.0, 3.0.0-alpha-2
>
> Attachments: HBASE-21946.HBASE-21879.v01.patch, 
> HBASE-21946.HBASE-21879.v02.patch, HBASE-21946.HBASE-21879.v03.patch, 
> HBASE-21946.HBASE-21879.v04.patch
>
>
> [~stakiar] is working on HDFS-3246, so for now we have to keep the byte[] pread 
> in HFileBlock reading. Once it gets resolved, we can upgrade the Hadoop 
> version and do the replacement. 
> I think it will be a great p999 latency improvement in the 100% Get case; 
> anyway, filing an issue to address this first. 
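
For reference, the API shape HDFS-3246 adds (a sketch assuming a Hadoop release 
where FSDataInputStream implements ByteBufferPositionedReadable; the helper is 
illustrative, not the HBase patch):

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.fs.ByteBufferPositionedReadable;
import org.apache.hadoop.fs.FSDataInputStream;

public final class ByteBufferPread {
  /**
   * Positional read straight into a (possibly off-heap) ByteBuffer,
   * avoiding the copy through an intermediate byte[].
   */
  static void preadFully(FSDataInputStream in, long offset, ByteBuffer buf)
      throws IOException {
    if (in.getWrappedStream() instanceof ByteBufferPositionedReadable) {
      in.readFully(offset, buf);
    } else {
      throw new UnsupportedOperationException("byte[] pread fallback required");
    }
  }
}
{code}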



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-26049) Remove DfsBuilderUtility

2021-07-26 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HBASE-26049.
-
Resolution: Fixed

I left the commit in master, because I forgot the replicate() API isn't 
available until Hadoop 3.

> Remove DfsBuilderUtility
> 
>
> Key: HBASE-26049
> URL: https://issues.apache.org/jira/browse/HBASE-26049
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha-1, 2.5.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.0.0-alpha-2
>
>
> DfsBuilderUtility was created to reflectively access 
> DistributedFileSystem$HdfsDataOutputStreamBuilder, which was added by 
> HDFS-11170 and available since Hadoop 2.9.0.
> We can remove this class and access the HDFS builder class directly in HBase 
> 3 and 2.5.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26049) Remove DfsBuilderUtility

2021-07-26 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-26049:

Fix Version/s: (was: 2.5.0)

> Remove DfsBuilderUtility
> 
>
> Key: HBASE-26049
> URL: https://issues.apache.org/jira/browse/HBASE-26049
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha-1, 2.5.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.0.0-alpha-2
>
>
> DfsBuilderUtility was created to reflectively access 
> DistributedFileSystem$HdfsDataOutputStreamBuilder, which was added by 
> HDFS-11170 and available since Hadoop 2.9.0.
> We can remove this class and access the HDFS builder class directly in HBase 
> 3 and 2.5.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-22953) Supporting Hadoop 3.3.0

2021-07-22 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17385352#comment-17385352
 ] 

Wei-Chiu Chuang commented on HBASE-22953:
-

It depends on what you mean by supported.
From a functional/API compatibility perspective, it is supported (Hadoop 3.3.0 and 
3.3.1).
From a performance standpoint, I am not aware of people testing HBase on 
Hadoop 3.3 (I've not had the chance to do this myself).

> Supporting Hadoop 3.3.0
> ---
>
> Key: HBASE-22953
> URL: https://issues.apache.org/jira/browse/HBASE-22953
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> The Hadoop community has started to discuss a 3.3.0 release. 
> [http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201908.mbox/%3CCAD%2B%2BeCneLtC%2BkfxRRKferufnNxhaXXGa0YPaVp%3DEBbc-R5JfqA%40mail.gmail.com%3E]
> While still early, it wouldn't hurt to start exploring what's coming in 
> Hadoop 3.3.0. In particular, there are a bunch of new features that brings in 
> all sorts of new dependencies.
>  
> I will use this Jira to list things that are related to Hadoop 3.3.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-26057) Remove reflections used to access Hadoop 2 API in FanOutOneBlockAsyncDFSOutputHelper

2021-07-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HBASE-26057.
-
Resolution: Fixed

> Remove reflections used to access Hadoop 2 API in 
> FanOutOneBlockAsyncDFSOutputHelper
> 
>
> Key: HBASE-26057
> URL: https://issues.apache.org/jira/browse/HBASE-26057
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.0.0-alpha-2
>
>
> Remove the reflections used to access Hadoop 2 APIs in HBase 3.x.
> There are still a number of reflections we can't remove now.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-26050) Remove the reflection used in FSUtils.isInSafeMode

2021-07-02 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HBASE-26050.
-
Fix Version/s: 3.0.0-alpha-2
   2.5.0
   Resolution: Fixed

> Remove the reflection used in FSUtils.isInSafeMode
> --
>
> Key: HBASE-26050
> URL: https://issues.apache.org/jira/browse/HBASE-26050
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 2.5.0, 3.0.0-alpha-2
>
>
> DistributedFileSystem.setSafeMode() was added by HDFS-3507 in Hadoop 
> 2.0.3-alpha. No need to access via reflection.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-26047) [JDK17] Track JDK17 unit test failures

2021-07-01 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17373144#comment-17373144
 ] 

Wei-Chiu Chuang commented on HBASE-26047:
-

In addition, the following tests also failed:

TestThreadLocalPoolMap
TestHeapSize
TestSecureExportSnapshot
TestMobSecureExportSnapshot
TestVerifyReplicationCrossDiffHdfs

> [JDK17] Track JDK17 unit test failures
> --
>
> Key: HBASE-26047
> URL: https://issues.apache.org/jira/browse/HBASE-26047
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> As of now, there are still two failed unit tests after exporting JDK internal 
> modules and the modifier access hack.
> {noformat}
> [ERROR] Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.217 
> s <<< FAILURE! - in org.apache.hadoop.hbase.io.TestHeapSize
> [ERROR] org.apache.hadoop.hbase.io.TestHeapSize.testSizes  Time elapsed: 
> 0.041 s  <<< FAILURE!
> java.lang.AssertionError: expected:<160> but was:<152>
> at 
> org.apache.hadoop.hbase.io.TestHeapSize.testSizes(TestHeapSize.java:335)
> [ERROR] org.apache.hadoop.hbase.io.TestHeapSize.testNativeSizes  Time 
> elapsed: 0.01 s  <<< FAILURE!
> java.lang.AssertionError: expected:<72> but was:<64>
> at 
> org.apache.hadoop.hbase.io.TestHeapSize.testNativeSizes(TestHeapSize.java:134)
> [INFO] Running org.apache.hadoop.hbase.io.Tes
> [ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.697 
> s <<< FAILURE! - in org.apache.hadoop.hbase.ipc.TestBufferChain
> [ERROR] org.apache.hadoop.hbase.ipc.TestBufferChain.testWithSpy  Time 
> elapsed: 0.537 s  <<< ERROR!
> java.lang.NullPointerException: Cannot enter synchronized block because 
> "this.closeLock" is null
> at 
> org.apache.hadoop.hbase.ipc.TestBufferChain.testWithSpy(TestBufferChain.java:119)
> {noformat}
> It appears that JDK17 makes the heap size estimate different than before. Not 
> sure why.
> TestBufferChain.testWithSpy  failure might be because of yet another 
> unexported module.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26057) Remove reflections used to access Hadoop 2 API in FanOutOneBlockAsyncDFSOutputHelper

2021-06-30 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-26057:

Fix Version/s: 3.0.0-alpha-2

> Remove reflections used to access Hadoop 2 API in 
> FanOutOneBlockAsyncDFSOutputHelper
> 
>
> Key: HBASE-26057
> URL: https://issues.apache.org/jira/browse/HBASE-26057
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.0.0-alpha-2
>
>
> Remove the reflections used to access Hadoop 2 APIs in HBase 3.x.
> There are still a number of reflections we can't remove now.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HBASE-26057) Remove reflections used to access Hadoop 2 API in FanOutOneBlockAsyncDFSOutputHelper

2021-06-30 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HBASE-26057:
---

Assignee: Wei-Chiu Chuang

> Remove reflections used to access Hadoop 2 API in 
> FanOutOneBlockAsyncDFSOutputHelper
> 
>
> Key: HBASE-26057
> URL: https://issues.apache.org/jira/browse/HBASE-26057
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> Remove the reflections used to access Hadoop 2 APIs in HBase 3.x.
> There are still a number of reflections we can't remove now.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-26057) Remove reflections used to access Hadoop 2 API in FanOutOneBlockAsyncDFSOutputHelper

2021-06-30 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-26057:
---

 Summary: Remove reflections used to access Hadoop 2 API in 
FanOutOneBlockAsyncDFSOutputHelper
 Key: HBASE-26057
 URL: https://issues.apache.org/jira/browse/HBASE-26057
 Project: HBase
  Issue Type: Sub-task
Reporter: Wei-Chiu Chuang


Remove the reflections used to access Hadoop 2 APIs in HBase 3.x.
There are still a number of reflections we can't remove now.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26051) Remove reflections used to access HDFS EC APIs

2021-06-30 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-26051:

Description: 
HDFS EC APIs exist since Hadoop 3.0.
We can access them directly in HBase 3.0 without reflections.

  was:
HDFS EC APIs exist since Hadoop 3.0.
We can access them directly without reflections in HBase 3.0.


> Remove reflections used to access HDFS EC APIs
> --
>
> Key: HBASE-26051
> URL: https://issues.apache.org/jira/browse/HBASE-26051
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha-1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.0.0-alpha-1
>
>
> HDFS EC APIs exist since Hadoop 3.0.
> We can access them directly in HBase 3.0 without reflections.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26051) Remove reflections used to access HDFS EC APIs

2021-06-30 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-26051:

Fix Version/s: 3.0.0-alpha-1

> Remove reflections used to access HDFS EC APIs
> --
>
> Key: HBASE-26051
> URL: https://issues.apache.org/jira/browse/HBASE-26051
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha-1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.0.0-alpha-1
>
>
> HDFS EC APIs exist since Hadoop 3.0.
> We can access them directly without reflections in HBase 3.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-26051) Remove reflections used to access HDFS EC APIs

2021-06-30 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-26051:
---

 Summary: Remove reflections used to access HDFS EC APIs
 Key: HBASE-26051
 URL: https://issues.apache.org/jira/browse/HBASE-26051
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 3.0.0-alpha-1
Reporter: Wei-Chiu Chuang


HDFS EC APIs exist since Hadoop 3.0.
We can access them directly without reflections in HBase 3.0.
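
Direct usage would look like this (a sketch; both calls are public on 
DistributedFileSystem since Hadoop 3.0, and the policy name is just an example):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

public final class EcExample {
  /** Enable an EC policy on a directory and report what is now in effect. */
  static ErasureCodingPolicy enableEc(DistributedFileSystem dfs, Path dir)
      throws IOException {
    dfs.setErasureCodingPolicy(dir, "RS-6-3-1024k");
    return dfs.getErasureCodingPolicy(dir);
  }
}
{code}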



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HBASE-26051) Remove reflections used to access HDFS EC APIs

2021-06-30 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HBASE-26051:
---

Assignee: Wei-Chiu Chuang

> Remove reflections used to access HDFS EC APIs
> --
>
> Key: HBASE-26051
> URL: https://issues.apache.org/jira/browse/HBASE-26051
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha-1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> HDFS EC APIs exist since Hadoop 3.0.
> We can access them directly without reflections in HBase 3.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HBASE-26050) Remove the reflection used in FSUtils.isInSafeMode

2021-06-30 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HBASE-26050:
---

Assignee: Wei-Chiu Chuang

> Remove the reflection used in FSUtils.isInSafeMode
> --
>
> Key: HBASE-26050
> URL: https://issues.apache.org/jira/browse/HBASE-26050
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> DistributedFileSystem.setSafeMode() was added by HDFS-3507 in Hadoop 
> 2.0.3-alpha. No need to access via reflection.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-26050) Remove the reflection used in FSUtils.isInSafeMode

2021-06-30 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-26050:
---

 Summary: Remove the reflection used in FSUtils.isInSafeMode
 Key: HBASE-26050
 URL: https://issues.apache.org/jira/browse/HBASE-26050
 Project: HBase
  Issue Type: Sub-task
Reporter: Wei-Chiu Chuang


DistributedFileSystem.setSafeMode() was added by HDFS-3507 in Hadoop 
2.0.3-alpha. No need to access via reflection.
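
The direct call is a one-liner (a sketch; SAFEMODE_GET queries the safe-mode 
state without toggling it):

{code:java}
import java.io.IOException;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;

public final class SafeModeCheck {
  /** True if the NameNode is currently in safe mode. */
  static boolean isInSafeMode(DistributedFileSystem dfs) throws IOException {
    return dfs.setSafeMode(SafeModeAction.SAFEMODE_GET, true);
  }
}
{code}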



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26019) Remove reflections used in HBaseConfiguration.getPassword()

2021-06-30 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-26019:

Parent: HBASE-26040
Issue Type: Sub-task  (was: Improvement)

> Remove reflections used in HBaseConfiguration.getPassword()
> ---
>
> Key: HBASE-26019
> URL: https://issues.apache.org/jira/browse/HBASE-26019
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0
>
>
> HBaseConfiguration.getPassword() uses the Hadoop API Configuration.getPassword(). 
> The API was added in Hadoop 2.6.0, and reflection was used to access it. 
> It's time to remove the reflection and invoke the API directly (in HBase 3.0 as 
> well as 2.x).
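
Direct usage is straightforward (a sketch; Configuration.getPassword() consults 
any configured credential providers before falling back to the clear-text 
property):

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

public final class PasswordExample {
  /** Fetch a password, or the supplied default when the key is unset. */
  static String getPassword(Configuration conf, String name, String defPass)
      throws IOException {
    char[] pass = conf.getPassword(name);
    return pass != null ? new String(pass) : defPass;
  }
}
{code}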



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HBASE-26049) Remove DfsBuilderUtility

2021-06-30 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HBASE-26049:
---

Assignee: Wei-Chiu Chuang

> Remove DfsBuilderUtility
> 
>
> Key: HBASE-26049
> URL: https://issues.apache.org/jira/browse/HBASE-26049
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha-1, 2.5.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0
>
>
> DfsBuilderUtility was created to reflectively access 
> DistributedFileSystem$HdfsDataOutputStreamBuilder, which was added by 
> HDFS-11170 and available since Hadoop 2.9.0.
> We can remove this class and access the HDFS builder class directly in HBase 
> 3 and 2.5.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26041) Replace PrintThreadInfoHelper with HBase's own ReflectionUtils.printThreadInfo()

2021-06-30 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-26041:

Fix Version/s: 2.5.0
   3.0.0-alpha-1

> Replace PrintThreadInfoHelper with HBase's own 
> ReflectionUtils.printThreadInfo()
> 
>
> Key: HBASE-26041
> URL: https://issues.apache.org/jira/browse/HBASE-26041
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0
>
>
> PrintThreadInfoLazyHolder uses reflection to access Hadoop's 
> ReflectionUtils.printThreadInfo(). Replace it with HBase's 
> ReflectionUtils.printThreadInfo().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26049) Remove DfsBuilderUtility

2021-06-30 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-26049:

Fix Version/s: 2.5.0
   3.0.0-alpha-1

> Remove DfsBuilderUtility
> 
>
> Key: HBASE-26049
> URL: https://issues.apache.org/jira/browse/HBASE-26049
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha-1, 2.5.0
>Reporter: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0
>
>
> DfsBuilderUtility was created to reflectively access 
> DistributedFileSystem$HdfsDataOutputStreamBuilder, which was added by 
> HDFS-11170 and available since Hadoop 2.9.0.
> We can remove this class and access the HDFS builder class directly in HBase 
> 3 and 2.5.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-26049) Remove DfsBuilderUtility

2021-06-30 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-26049:
---

 Summary: Remove DfsBuilderUtility
 Key: HBASE-26049
 URL: https://issues.apache.org/jira/browse/HBASE-26049
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 3.0.0-alpha-1, 2.5.0
Reporter: Wei-Chiu Chuang


DfsBuilderUtility was created to reflectively access 
DistributedFileSystem$HdfsDataOutputStreamBuilder, which was added by 
HDFS-11170 and available since Hadoop 2.9.0.

We can remove this class and access the HDFS builder class directly in HBase 3 
and 2.5.0.
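
Direct builder usage would look roughly like this (a sketch assuming a 
DistributedFileSystem, whose createFile() returns an HdfsDataOutputStreamBuilder; 
replicate() itself is only available on Hadoop 3):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public final class BuilderExample {
  /** Create a file via the builder instead of the reflective DfsBuilderUtility. */
  static FSDataOutputStream create(DistributedFileSystem dfs, Path path,
      boolean overwrite) throws IOException {
    return dfs.createFile(path)
        .overwrite(overwrite)
        .replicate() // prefer replication over erasure coding for WAL-like files
        .build();
  }
}
{code}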



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26046) [JDK17] Add a JDK17 profile

2021-06-30 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-26046:

Summary: [JDK17] Add a JDK17 profile  (was: Add a JDK17 profile)

> [JDK17] Add a JDK17 profile
> ---
>
> Key: HBASE-26046
> URL: https://issues.apache.org/jira/browse/HBASE-26046
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> While HBase builds fine with JDK17, tests fail because a number of Java SDK 
> modules are no longer exposed to unnamed modules by default. We need to open 
> them up.
> Without them, the tests fail with errors like:
> {noformat}
> [ERROR] Tests run: 6, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 0.469 
> s <<< FAILURE! - in org.apache.hadoop.hbase.rest.model.TestNamespacesModel
> [ERROR] org.apache.hadoop.hbase.rest.model.TestNamespacesModel.testBuildModel 
>  Time elapsed: 0.273 s  <<< ERROR!
> java.lang.ExceptionInInitializerError
> at 
> org.apache.hadoop.hbase.rest.model.TestNamespacesModel.<init>(TestNamespacesModel.java:43)
> Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make 
> protected final java.lang.Class 
> java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int) throws 
> java.lang.ClassFormatError accessible: module java.base does not "opens 
> java.lang" to unnamed module @56ef9176
> at 
> org.apache.hadoop.hbase.rest.model.TestNamespacesModel.<init>(TestNamespacesModel.java:43)
> {noformat}
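For reference, the kind of JVM flag that re-opens such a module to unnamed modules (which module/package pairs HBase actually needs is what this sub-task's profile works out):

{noformat}
--add-opens java.base/java.lang=ALL-UNNAMED
{noformat}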



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-25516) [JDK17] reflective access Field.class.getDeclaredField("modifiers") not supported

2021-06-30 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-25516:

Summary: [JDK17] reflective access 
Field.class.getDeclaredField("modifiers") not supported  (was: jdk11 reflective 
access Field.class.getDeclaredField("modifiers") not supported)

> [JDK17] reflective access Field.class.getDeclaredField("modifiers") not 
> supported
> -
>
> Key: HBASE-25516
> URL: https://issues.apache.org/jira/browse/HBASE-25516
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filesystem Integration
>Affects Versions: 2.3.3
> Environment: Windows 10, JavaSE11, pom dependencies:
> {code:java}
> <dependency>
>   <groupId>org.apache.hbase</groupId>
>   <artifactId>hbase-testing-util</artifactId>
>   <version>2.3.3</version>
> </dependency>
> <dependency>
>   <groupId>junit</groupId>
>   <artifactId>junit</artifactId>
>   <version>4.12</version>
> </dependency>
> {code}
>Reporter: Leon Bein
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: jdk11
>
> The reflective access
> {code:java}
> Field.class.getDeclaredField("modifiers")
> {code}
> in HFileSystem.java:334 leads to a warning (and probably an error?):
>  
> {code:java}
> java.lang.NoSuchFieldException: modifiers
>   at java.base/java.lang.Class.getDeclaredField(Class.java:2417)
>   at 
> org.apache.hadoop.hbase.fs.HFileSystem.addLocationsOrderInterceptor(HFileSystem.java:334)
>   at 
> org.apache.hadoop.hbase.fs.HFileSystem.addLocationsOrderInterceptor(HFileSystem.java:291)
>   at org.apache.hadoop.hbase.fs.HFileSystem.<init>(HFileSystem.java:96)
>   at org.apache.hadoop.hbase.fs.HFileSystem.get(HFileSystem.java:465)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getTestFileSystem(HBaseTestingUtility.java:3330)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getNewDataTestDirOnTestFS(HBaseTestingUtility.java:565)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.setupDataTestDirOnTestFS(HBaseTestingUtility.java:554)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getDataTestDirOnTestFS(HBaseTestingUtility.java:527)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getDefaultRootDirPath(HBaseTestingUtility.java:1415)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createRootDir(HBaseTestingUtility.java:1446)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1157)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1144)
>   at foo.Main.main(Main.java:11)
> {code}
> when running the following code:
>  
> {code:java}
> public static void main(String[] args) throws Exception {
> HBaseTestingUtility utility = new 
> HBaseTestingUtility(HBaseConfiguration.create());
> 
> utility.startMiniCluster(StartMiniClusterOption.builder().numRegionServers(3).build());
> }{code}
> To my knowledge, this results from the more restrictive reflection 
> protection of java.base classes in newer Java versions.
>  
> Related to HBASE-22972
>  
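For context, HFileSystem.java:334 relies on the classic "clear the final modifier" reflection hack, which JDK 12+ blocks by filtering the {{modifiers}} field out of reflection. A minimal illustration (class and field names are placeholders):

{code:java}
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

// Classic hack to rewrite a static final field: fetch Field's own private
// "modifiers" field and clear Modifier.FINAL on the target.
Field target = SomeClass.class.getDeclaredField("SOME_STATIC_FINAL");
Field modifiersField = Field.class.getDeclaredField("modifiers"); // NoSuchFieldException on JDK 12+
modifiersField.setAccessible(true);
modifiersField.setInt(target, target.getModifiers() & ~Modifier.FINAL);
{code}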



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26047) [JDK17] Track JDK17 unit test failures

2021-06-30 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-26047:

Summary: [JDK17] Track JDK17 unit test failures  (was: Track JDK17 unit 
test failures)

> [JDK17] Track JDK17 unit test failures
> --
>
> Key: HBASE-26047
> URL: https://issues.apache.org/jira/browse/HBASE-26047
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> As of now, there are still two failing unit tests after exporting the JDK 
> internal modules and applying the modifier access hack.
> {noformat}
> [ERROR] Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.217 
> s <<< FAILURE! - in org.apache.hadoop.hbase.io.TestHeapSize
> [ERROR] org.apache.hadoop.hbase.io.TestHeapSize.testSizes  Time elapsed: 
> 0.041 s  <<< FAILURE!
> java.lang.AssertionError: expected:<160> but was:<152>
> at 
> org.apache.hadoop.hbase.io.TestHeapSize.testSizes(TestHeapSize.java:335)
> [ERROR] org.apache.hadoop.hbase.io.TestHeapSize.testNativeSizes  Time 
> elapsed: 0.01 s  <<< FAILURE!
> java.lang.AssertionError: expected:<72> but was:<64>
> at 
> org.apache.hadoop.hbase.io.TestHeapSize.testNativeSizes(TestHeapSize.java:134)
> [INFO] Running org.apache.hadoop.hbase.io.Tes
> [ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.697 
> s <<< FAILURE! - in org.apache.hadoop.hbase.ipc.TestBufferChain
> [ERROR] org.apache.hadoop.hbase.ipc.TestBufferChain.testWithSpy  Time 
> elapsed: 0.537 s  <<< ERROR!
> java.lang.NullPointerException: Cannot enter synchronized block because 
> "this.closeLock" is null
> at 
> org.apache.hadoop.hbase.ipc.TestBufferChain.testWithSpy(TestBufferChain.java:119)
> {noformat}
> It appears that JDK17 makes the heap size estimates different from before. Not 
> sure why.
> The TestBufferChain.testWithSpy failure might be due to yet another 
> unexported module.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-26048) [JDK17] Replace the usage of deprecated API ThreadGroup.destroy()

2021-06-30 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-26048:
---

 Summary: [JDK17] Replace the usage of deprecated API 
ThreadGroup.destroy()
 Key: HBASE-26048
 URL: https://issues.apache.org/jira/browse/HBASE-26048
 Project: HBase
  Issue Type: Sub-task
Reporter: Wei-Chiu Chuang


According to the JDK17 doc, ThreadGroup.destroy() is deprecated because
{quote}Deprecated, for removal: This API element is subject to removal in a 
future version.
{quote}
The API and mechanism for destroying a ThreadGroup is inherently flawed. The 
ability to explicitly or automatically destroy a thread group will be removed 
in a future release.

[https://download.java.net/java/early_access/jdk17/docs/api/java.base/java/lang/ThreadGroup.html#destroy()]

We don't necessarily need to remove this usage now, but the warning sounds bad 
enough.
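A sketch of the kind of migration this implies; the replacement shown (an ExecutorService owning the lifecycle) is one option, not necessarily what HBase will adopt:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Deprecated pattern: explicitly destroying a ThreadGroup once its
// threads have finished.
ThreadGroup group = new ThreadGroup("workers");
// ... start threads in the group, join them ...
group.destroy(); // deprecated for removal

// One possible replacement: let an executor own the thread lifecycle.
ExecutorService pool = Executors.newFixedThreadPool(4);
// ... submit tasks ...
pool.shutdown();
pool.awaitTermination(1, TimeUnit.MINUTES); // throws InterruptedException
{code}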



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-26047) Track JDK17 unit test failures

2021-06-30 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-26047:
---

 Summary: Track JDK17 unit test failures
 Key: HBASE-26047
 URL: https://issues.apache.org/jira/browse/HBASE-26047
 Project: HBase
  Issue Type: Sub-task
Reporter: Wei-Chiu Chuang


As of now, there are still two failing unit tests after exporting the JDK 
internal modules and applying the modifier access hack.

{noformat}

[ERROR] Tests run: 7, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.217 s 
<<< FAILURE! - in org.apache.hadoop.hbase.io.TestHeapSize
[ERROR] org.apache.hadoop.hbase.io.TestHeapSize.testSizes  Time elapsed: 0.041 
s  <<< FAILURE!
java.lang.AssertionError: expected:<160> but was:<152>
at 
org.apache.hadoop.hbase.io.TestHeapSize.testSizes(TestHeapSize.java:335)

[ERROR] org.apache.hadoop.hbase.io.TestHeapSize.testNativeSizes  Time elapsed: 
0.01 s  <<< FAILURE!
java.lang.AssertionError: expected:<72> but was:<64>
at 
org.apache.hadoop.hbase.io.TestHeapSize.testNativeSizes(TestHeapSize.java:134)

[INFO] Running org.apache.hadoop.hbase.io.Tes


[ERROR] Tests run: 5, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.697 s 
<<< FAILURE! - in org.apache.hadoop.hbase.ipc.TestBufferChain
[ERROR] org.apache.hadoop.hbase.ipc.TestBufferChain.testWithSpy  Time elapsed: 
0.537 s  <<< ERROR!
java.lang.NullPointerException: Cannot enter synchronized block because 
"this.closeLock" is null
at 
org.apache.hadoop.hbase.ipc.TestBufferChain.testWithSpy(TestBufferChain.java:119)


{noformat}

It appears that JDK17 makes the heap size estimates different from before. Not 
sure why.

The TestBufferChain.testWithSpy failure might be due to yet another unexported 
module.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-26038) Support JDK17

2021-06-30 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371826#comment-17371826
 ] 

Wei-Chiu Chuang commented on HBASE-26038:
-

Note:
All recent Maven versions (3.5.4 / 3.6.3 / 3.8.1) build HBase successfully.

I am using JDK 17 early access build 25: 17-ea+25-2252

> Support JDK17
> -
>
> Key: HBASE-26038
> URL: https://issues.apache.org/jira/browse/HBASE-26038
> Project: HBase
>  Issue Type: New Feature
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> JDK 17 is the next Java LTS, coming out this September.
> It brings a number of goodies. Among them are ZGC (production-ready since 
> JDK15) and the Shenandoah GC (likewise production-ready since JDK15).
> After September 2021, there will be three Java LTS versions: Java 8, Java 11, 
> and Java 17. Java 8 will still be the mainstream JDK for the foreseeable 
> future, so I am not looking to take advantage of the new APIs that are only 
> available in JDK17. This jira aims to support HBase on all three LTS JDKs.
> Porting HBase to JDK17 is not a big hurdle. HBase (master branch) builds 
> successfully on JDK17. A few tests fail, mostly due to the new, stricter 
> Java module isolation enforcement. I have a small PoC that I will post here 
> in the coming days.
> What I am trying to achieve is to add experimental support for JDK17. It 
> will be interesting to benchmark HBase on ZGC and Shenandoah and determine if 
> we should set our default GC to them. By then, we'll be able to claim 
> production-ready support.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-25516) jdk11 reflective access Field.class.getDeclaredField("modifiers") not supported

2021-06-30 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-25516:

Parent: HBASE-26038
Issue Type: Sub-task  (was: Bug)

> jdk11 reflective access Field.class.getDeclaredField("modifiers") not 
> supported
> ---
>
> Key: HBASE-25516
> URL: https://issues.apache.org/jira/browse/HBASE-25516
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filesystem Integration
>Affects Versions: 2.3.3
> Environment: Windows 10, JavaSE11, pom dependencies:
> {code:java}
> <dependency>
>   <groupId>org.apache.hbase</groupId>
>   <artifactId>hbase-testing-util</artifactId>
>   <version>2.3.3</version>
> </dependency>
> <dependency>
>   <groupId>junit</groupId>
>   <artifactId>junit</artifactId>
>   <version>4.12</version>
> </dependency>
> {code}
>Reporter: Leon Bein
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: jdk11
>
> The reflective access
> {code:java}
> Field.class.getDeclaredField("modifiers")
> {code}
> in HFileSystem.java:334 leads to a warning (and probably an error?):
>  
> {code:java}
> java.lang.NoSuchFieldException: modifiers
>   at java.base/java.lang.Class.getDeclaredField(Class.java:2417)
>   at 
> org.apache.hadoop.hbase.fs.HFileSystem.addLocationsOrderInterceptor(HFileSystem.java:334)
>   at 
> org.apache.hadoop.hbase.fs.HFileSystem.addLocationsOrderInterceptor(HFileSystem.java:291)
>   at org.apache.hadoop.hbase.fs.HFileSystem.<init>(HFileSystem.java:96)
>   at org.apache.hadoop.hbase.fs.HFileSystem.get(HFileSystem.java:465)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getTestFileSystem(HBaseTestingUtility.java:3330)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getNewDataTestDirOnTestFS(HBaseTestingUtility.java:565)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.setupDataTestDirOnTestFS(HBaseTestingUtility.java:554)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getDataTestDirOnTestFS(HBaseTestingUtility.java:527)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getDefaultRootDirPath(HBaseTestingUtility.java:1415)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createRootDir(HBaseTestingUtility.java:1446)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1157)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1144)
>   at foo.Main.main(Main.java:11)
> {code}
> when running the following code:
>  
> {code:java}
> public static void main(String[] args) throws Exception {
> HBaseTestingUtility utility = new 
> HBaseTestingUtility(HBaseConfiguration.create());
> 
> utility.startMiniCluster(StartMiniClusterOption.builder().numRegionServers(3).build());
> }{code}
> To my knowledge, this results from the more restrictive reflection 
> protection of java.base classes in newer Java versions.
>  
> Related to HBASE-22972
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HBASE-25516) jdk11 reflective access Field.class.getDeclaredField("modifiers") not supported

2021-06-30 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HBASE-25516:
---

Assignee: Wei-Chiu Chuang

> jdk11 reflective access Field.class.getDeclaredField("modifiers") not 
> supported
> ---
>
> Key: HBASE-25516
> URL: https://issues.apache.org/jira/browse/HBASE-25516
> Project: HBase
>  Issue Type: Bug
>  Components: Filesystem Integration
>Affects Versions: 2.3.3
> Environment: Windows 10, JavaSE11, pom dependencies:
> {code:java}
> <dependency>
>   <groupId>org.apache.hbase</groupId>
>   <artifactId>hbase-testing-util</artifactId>
>   <version>2.3.3</version>
> </dependency>
> <dependency>
>   <groupId>junit</groupId>
>   <artifactId>junit</artifactId>
>   <version>4.12</version>
> </dependency>
> {code}
>Reporter: Leon Bein
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: jdk11
>
> The reflective access
> {code:java}
> Field.class.getDeclaredField("modifiers")
> {code}
> in HFileSystem.java:334 leads to a warning (and probably an error?):
>  
> {code:java}
> java.lang.NoSuchFieldException: modifiers
>   at java.base/java.lang.Class.getDeclaredField(Class.java:2417)
>   at 
> org.apache.hadoop.hbase.fs.HFileSystem.addLocationsOrderInterceptor(HFileSystem.java:334)
>   at 
> org.apache.hadoop.hbase.fs.HFileSystem.addLocationsOrderInterceptor(HFileSystem.java:291)
>   at org.apache.hadoop.hbase.fs.HFileSystem.<init>(HFileSystem.java:96)
>   at org.apache.hadoop.hbase.fs.HFileSystem.get(HFileSystem.java:465)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getTestFileSystem(HBaseTestingUtility.java:3330)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getNewDataTestDirOnTestFS(HBaseTestingUtility.java:565)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.setupDataTestDirOnTestFS(HBaseTestingUtility.java:554)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getDataTestDirOnTestFS(HBaseTestingUtility.java:527)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getDefaultRootDirPath(HBaseTestingUtility.java:1415)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createRootDir(HBaseTestingUtility.java:1446)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1157)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1144)
>   at foo.Main.main(Main.java:11)
> {code}
> when running the following code:
>  
> {code:java}
> public static void main(String[] args) throws Exception {
> HBaseTestingUtility utility = new 
> HBaseTestingUtility(HBaseConfiguration.create());
> 
> utility.startMiniCluster(StartMiniClusterOption.builder().numRegionServers(3).build());
> }{code}
> To my knowledge, this results from the more restrictive reflection 
> protection of java.base classes in newer Java versions.
>  
> Related to HBASE-22972
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HBASE-26046) Add a JDK17 profile

2021-06-30 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HBASE-26046:
---

Assignee: Wei-Chiu Chuang

> Add a JDK17 profile
> ---
>
> Key: HBASE-26046
> URL: https://issues.apache.org/jira/browse/HBASE-26046
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> While HBase builds fine with JDK17, tests fail because a number of Java SDK 
> modules are no longer exposed to unnamed modules by default. We need to open 
> them up.
> Without opening them up, the tests fail with errors like:
> {noformat}
> [ERROR] Tests run: 6, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 0.469 
> s <<< FAILURE! - in org.apache.hadoop.hbase.rest.model.TestNamespacesModel
> [ERROR] org.apache.hadoop.hbase.rest.model.TestNamespacesModel.testBuildModel 
>  Time elapsed: 0.273 s  <<< ERROR!
> java.lang.ExceptionInInitializerError
> at 
> org.apache.hadoop.hbase.rest.model.TestNamespacesModel.<init>(TestNamespacesModel.java:43)
> Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make 
> protected final java.lang.Class 
> java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int) throws 
> java.lang.ClassFormatError accessible: module java.base does not "opens 
> java.lang" to unnamed module @56ef9176
> at 
> org.apache.hadoop.hbase.rest.model.TestNamespacesModel.<init>(TestNamespacesModel.java:43)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-26046) Add a JDK17 profile

2021-06-30 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-26046:
---

 Summary: Add a JDK17 profile
 Key: HBASE-26046
 URL: https://issues.apache.org/jira/browse/HBASE-26046
 Project: HBase
  Issue Type: Sub-task
Reporter: Wei-Chiu Chuang


While HBase builds fine with JDK17, tests fail because a number of Java SDK 
modules are no longer exposed to unnamed modules by default. We need to open 
them up.

Without opening them up, the tests fail with errors like:
{noformat}
[ERROR] Tests run: 6, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 0.469 s 
<<< FAILURE! - in org.apache.hadoop.hbase.rest.model.TestNamespacesModel
[ERROR] org.apache.hadoop.hbase.rest.model.TestNamespacesModel.testBuildModel  
Time elapsed: 0.273 s  <<< ERROR!
java.lang.ExceptionInInitializerError
at 
org.apache.hadoop.hbase.rest.model.TestNamespacesModel.<init>(TestNamespacesModel.java:43)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make 
protected final java.lang.Class 
java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int) throws 
java.lang.ClassFormatError accessible: module java.base does not "opens 
java.lang" to unnamed module @56ef9176
at 
org.apache.hadoop.hbase.rest.model.TestNamespacesModel.<init>(TestNamespacesModel.java:43)
{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26041) Replace PrintThreadInfoHelper with HBase's own ReflectionUtils.printThreadInfo()

2021-06-29 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-26041:

Summary: Replace PrintThreadInfoHelper with HBase's own 
ReflectionUtils.printThreadInfo()  (was: Replace PrintThreadInfoLazyHolder's 
reflection usage)

> Replace PrintThreadInfoHelper with HBase's own 
> ReflectionUtils.printThreadInfo()
> 
>
> Key: HBASE-26041
> URL: https://issues.apache.org/jira/browse/HBASE-26041
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> PrintThreadInfoLazyHolder uses reflection to access Hadoop's 
> ReflectionUtils.printThreadInfo(). Replace it with HBase's 
> ReflectionUtils.printThreadInfo().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HBASE-26041) Replace PrintThreadInfoLazyHolder's reflection usage

2021-06-29 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HBASE-26041:
---

Assignee: Wei-Chiu Chuang

> Replace PrintThreadInfoLazyHolder's reflection usage
> 
>
> Key: HBASE-26041
> URL: https://issues.apache.org/jira/browse/HBASE-26041
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> PrintThreadInfoLazyHolder uses reflection to access Hadoop's 
> ReflectionUtils.printThreadInfo(). Replace it with HBase's 
> ReflectionUtils.printThreadInfo().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-26041) Replace PrintThreadInfoLazyHolder's reflection usage

2021-06-29 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-26041:
---

 Summary: Replace PrintThreadInfoLazyHolder's reflection usage
 Key: HBASE-26041
 URL: https://issues.apache.org/jira/browse/HBASE-26041
 Project: HBase
  Issue Type: Sub-task
Reporter: Wei-Chiu Chuang


PrintThreadInfoLazyHolder uses reflection to access Hadoop's 
ReflectionUtils.printThreadInfo(). Replace it with HBase's 
ReflectionUtils.printThreadInfo().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HBASE-26040) Replace reflections that are redundant

2021-06-29 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HBASE-26040:
---

Assignee: Wei-Chiu Chuang

> Replace reflections that are redundant
> --
>
> Key: HBASE-26040
> URL: https://issues.apache.org/jira/browse/HBASE-26040
> Project: HBase
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> A number of reflections were used to access Hadoop APIs that, at the time, 
> were only available in newer Hadoop versions.
> Some of them are no longer needed with the default Hadoop dependency 3.1.2, 
> so they can be removed to avoid brittle code. This also makes it possible to 
> verify the dependency at compile time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-26040) Replace reflections that are redundant

2021-06-29 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-26040:
---

 Summary: Replace reflections that are redundant
 Key: HBASE-26040
 URL: https://issues.apache.org/jira/browse/HBASE-26040
 Project: HBase
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang


A number of reflections were used to access Hadoop APIs that, at the time, were 
only available in newer Hadoop versions.

Some of them are no longer needed with the default Hadoop dependency 3.1.2, so 
they can be removed to avoid brittle code. This also makes it possible to 
verify the dependency at compile time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-25516) jdk11 reflective access Field.class.getDeclaredField("modifiers") not supported

2021-06-29 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371277#comment-17371277
 ] 

Wei-Chiu Chuang commented on HBASE-25516:
-

As it turns out, while it doesn't break on JDK11, HBase UTs fail due to this 
issue when running on JDK17.

> jdk11 reflective access Field.class.getDeclaredField("modifiers") not 
> supported
> ---
>
> Key: HBASE-25516
> URL: https://issues.apache.org/jira/browse/HBASE-25516
> Project: HBase
>  Issue Type: Bug
>  Components: Filesystem Integration
>Affects Versions: 2.3.3
> Environment: Windows 10, JavaSE11, pom dependencies:
> {code:java}
> <dependency>
>   <groupId>org.apache.hbase</groupId>
>   <artifactId>hbase-testing-util</artifactId>
>   <version>2.3.3</version>
> </dependency>
> <dependency>
>   <groupId>junit</groupId>
>   <artifactId>junit</artifactId>
>   <version>4.12</version>
> </dependency>
> {code}
>Reporter: Leon Bein
>Priority: Major
>  Labels: jdk11
>
> The reflective access
> {code:java}
> Field.class.getDeclaredField("modifiers")
> {code}
> in HFileSystem.java:334 leads to a warning (and probably an error?):
>  
> {code:java}
> java.lang.NoSuchFieldException: modifiers
>   at java.base/java.lang.Class.getDeclaredField(Class.java:2417)
>   at 
> org.apache.hadoop.hbase.fs.HFileSystem.addLocationsOrderInterceptor(HFileSystem.java:334)
>   at 
> org.apache.hadoop.hbase.fs.HFileSystem.addLocationsOrderInterceptor(HFileSystem.java:291)
>   at org.apache.hadoop.hbase.fs.HFileSystem.<init>(HFileSystem.java:96)
>   at org.apache.hadoop.hbase.fs.HFileSystem.get(HFileSystem.java:465)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getTestFileSystem(HBaseTestingUtility.java:3330)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getNewDataTestDirOnTestFS(HBaseTestingUtility.java:565)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.setupDataTestDirOnTestFS(HBaseTestingUtility.java:554)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getDataTestDirOnTestFS(HBaseTestingUtility.java:527)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.getDefaultRootDirPath(HBaseTestingUtility.java:1415)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.createRootDir(HBaseTestingUtility.java:1446)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1157)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1144)
>   at foo.Main.main(Main.java:11)
> {code}
> when running the following code:
>  
> {code:java}
> public static void main(String[] args) throws Exception {
> HBaseTestingUtility utility = new 
> HBaseTestingUtility(HBaseConfiguration.create());
> 
> utility.startMiniCluster(StartMiniClusterOption.builder().numRegionServers(3).build());
> }{code}
> To my knowledge, this results from the more restrictive reflection 
> protection of java.base classes in newer Java versions.
>  
> Related to HBASE-22972
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-26038) Support JDK17

2021-06-29 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-26038:
---

 Summary: Support JDK17
 Key: HBASE-26038
 URL: https://issues.apache.org/jira/browse/HBASE-26038
 Project: HBase
  Issue Type: New Feature
Reporter: Wei-Chiu Chuang


JDK 17 is the next Java LTS, coming out this September.

It brings a number of goodies. Among them are ZGC (production-ready since 
JDK15) and the Shenandoah GC (likewise production-ready since JDK15).

After September 2021, there will be three Java LTS versions: Java 8, Java 11, 
and Java 17. Java 8 will still be the mainstream JDK for the foreseeable future, 
so I am not looking to take advantage of the new APIs that are only available 
in JDK17. This jira aims to support HBase on all three LTS JDKs.

Porting HBase to JDK17 is not a big hurdle. HBase (master branch) builds 
successfully on JDK17. A few tests fail, mostly due to the new, stricter 
Java module isolation enforcement. I have a small PoC that I will post here in 
the coming days.

What I am trying to achieve is to add experimental support for JDK17. It 
will be interesting to benchmark HBase on ZGC and Shenandoah and determine if 
we should set our default GC to them. By then, we'll be able to claim 
production-ready support.
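For reference, the standard flags for selecting those collectors in such a benchmark:

{noformat}
-XX:+UseZGC            (ZGC, production-ready since JDK 15)
-XX:+UseShenandoahGC   (Shenandoah, production-ready since JDK 15)
{noformat}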



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-23817) The message "Please make sure that backup is enabled on the cluster." is shown even when the backup feature is enabled

2021-06-29 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HBASE-23817.
-
Fix Version/s: 3.0.0-alpha-1
   Resolution: Fixed

Thanks for the review, [~brfrn169]. Merged the PR.

> The message "Please make sure that backup is enabled on the cluster." is 
> shown even when the backup feature is enabled
> --
>
> Key: HBASE-23817
> URL: https://issues.apache.org/jira/browse/HBASE-23817
> Project: HBase
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Fix For: 3.0.0-alpha-1
>
>
> The following message is shown even when the backup feature is enabled, which 
> is confusing:
> {code}
> Please make sure that backup is enabled on the cluster. To enable backup, in 
> hbase-site.xml, set:
>  hbase.backup.enable=true
> hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner
> hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager
> hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager
> hbase.coprocessor.region.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.BackupObserver
> and restart the cluster
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-21946) Use ByteBuffer pread instead of byte[] pread in HFileBlock when applicable

2021-06-28 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-21946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-21946:

Summary: Use ByteBuffer pread instead of byte[] pread in HFileBlock when 
applicable  (was: Replace the byte[] pread by ByteBuffer pread in HFileBlock 
reading once HDFS-3246 prepared)

> Use ByteBuffer pread instead of byte[] pread in HFileBlock when applicable
> --
>
> Key: HBASE-21946
> URL: https://issues.apache.org/jira/browse/HBASE-21946
> Project: HBase
>  Issue Type: Improvement
>  Components: Offheaping
>Reporter: Zheng Hu
>Assignee: Wei-Chiu Chuang
>Priority: Critical
> Fix For: 2.5.0, 3.0.0-alpha-2
>
> Attachments: HBASE-21946.HBASE-21879.v01.patch, 
> HBASE-21946.HBASE-21879.v02.patch, HBASE-21946.HBASE-21879.v03.patch, 
> HBASE-21946.HBASE-21879.v04.patch
>
>
> [~stakiar] is working on HDFS-3246, so for now we have to keep the byte[] pread 
> in HFileBlock reading. Once it gets resolved, we can upgrade the Hadoop 
> version and do the replacement. 
> I think it will be a great p999 latency improvement in the 100% Get case; 
> filing this issue to address it first. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-21946) Replace the byte[] pread by ByteBuffer pread in HFileBlock reading once HDFS-3246 prepared

2021-06-28 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-21946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17370485#comment-17370485
 ] 

Wei-Chiu Chuang commented on HBASE-21946:
-

I have a patch that uses the DFSInputStream.hasCapability() API to detect which 
read() API to use.
Using this approach, I can come up with a solution that is compatible with any 
Hadoop version >= 2.9.1.

[~openinx] Can I take over this jira?
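A sketch of what such capability-based dispatch could look like (the capability constant and read overloads are per the Hadoop 3.3 stream APIs; treat the details as assumptions):

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.StreamCapabilities;

// Pick the pread flavor at runtime so one code path runs on any
// supported Hadoop: ByteBuffer pread (HDFS-3246) when the stream
// advertises it, byte[] pread otherwise.
int readBlock(FSDataInputStream in, long offset, ByteBuffer buf) throws IOException {
  if (in.hasCapability(StreamCapabilities.PREADBYTEBUFFER)) {
    return in.read(offset, buf);                  // positioned read, no extra copy
  }
  byte[] tmp = new byte[buf.remaining()];
  int n = in.read(offset, tmp, 0, tmp.length);    // classic positioned read
  if (n > 0) {
    buf.put(tmp, 0, n);                           // copy into caller's buffer
  }
  return n;
}
{code}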

> Replace the byte[] pread by ByteBuffer pread in HFileBlock reading once 
> HDFS-3246 prepared
> --
>
> Key: HBASE-21946
> URL: https://issues.apache.org/jira/browse/HBASE-21946
> Project: HBase
>  Issue Type: Improvement
>  Components: Offheaping
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Critical
> Fix For: 2.5.0, 3.0.0-alpha-2
>
> Attachments: HBASE-21946.HBASE-21879.v01.patch, 
> HBASE-21946.HBASE-21879.v02.patch, HBASE-21946.HBASE-21879.v03.patch, 
> HBASE-21946.HBASE-21879.v04.patch
>
>
> [~stakiar] is working on HDFS-3246, so for now we have to keep the byte[] pread 
> in HFileBlock reading. Once it gets resolved, we can upgrade the Hadoop 
> version and do the replacement. 
> I think it will be a great p999 latency improvement in the 100% Get case; 
> filing this issue to address it first. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HBASE-23817) The message "Please make sure that backup is enabled on the cluster." is shown even when the backup feature is enabled

2021-06-25 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HBASE-23817:
---

Assignee: Wei-Chiu Chuang

> The message "Please make sure that backup is enabled on the cluster." is 
> shown even when the backup feature is enabled
> --
>
> Key: HBASE-23817
> URL: https://issues.apache.org/jira/browse/HBASE-23817
> Project: HBase
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>
> The following message is shown even when the backup feature is enabled, which 
> is confusing:
> {code}
> Please make sure that backup is enabled on the cluster. To enable backup, in 
> hbase-site.xml, set:
>  hbase.backup.enable=true
> hbase.master.logcleaner.plugins=YOUR_PLUGINS,org.apache.hadoop.hbase.backup.master.BackupLogCleaner
> hbase.procedure.master.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager
> hbase.procedure.regionserver.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager
> hbase.coprocessor.region.classes=YOUR_CLASSES,org.apache.hadoop.hbase.backup.BackupObserver
> and restart the cluster
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26032) Make HRegion.getStores() an O(1) operation

2021-06-24 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-26032:

Description: 
This is a relatively minor issue, but I did spot HRegion.getStores() popping up 
in my profiler.

Checking the code, I realized that HRegion.getStores() allocates a new 
ArrayList in it, converting the Collection<> to a List<>. That makes it O(n) in 
both space and time.

This conversion appears mostly unnecessary: production code only iterates the 
stores, so the new ArrayList object is thrown away immediately. Only a small 
amount of test code indexes into the stores.

I suggest we return the stores object directly, an O(1) operation.

  was:
This is a relatively minor issue, but I did spot HRegion.getStores() popping up 
in my profiler.

Checking the code, I realized that HRegion.getStores() allocates a new 
ArrayList in it, converting the Collection<> to a List<>. That makes it O(n) in 
both space and time.

This conversion appears mostly unnecessary: production code only iterates the 
stores, so the new ArrayList object is thrown away immediately. Only a small 
amount of test code indexes into the stores.


> Make HRegion.getStores() an O(1) operation
> --
>
> Key: HBASE-26032
> URL: https://issues.apache.org/jira/browse/HBASE-26032
> Project: HBase
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: Screen Shot 2021-06-24 at 3.56.33 PM.png
>
>
> This is a relatively minor issue, but I did spot HRegion.getStores() popping 
> up in my profiler.
> Checking the code, I realized that HRegion.getStores() allocates a new 
> ArrayList in it, converting the Collection<> to a List<>. That makes it O(n) 
> in both space and time.
> This conversion appears mostly unnecessary: production code only iterates the 
> stores, so the new ArrayList object is thrown away immediately. Only a small 
> amount of test code indexes into the stores.
> I suggest we return the stores object directly, an O(1) operation.
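A sketch of the proposed change (field and type names assumed for illustration):

{code:java}
// Before: O(n) -- copies the store map's values into a fresh ArrayList on
// every call, only for most callers to iterate it once and discard it.
public List<HStore> getStores() {
  return new ArrayList<>(this.stores.values());
}

// After: O(1) -- return the live view; callers that only iterate need
// nothing more, and the few tests that index can copy on their side.
public Collection<HStore> getStores() {
  return this.stores.values();
}
{code}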



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-26032) Make HRegion.getStores() an O(1) operation

2021-06-24 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-26032:
---

 Summary: Make HRegion.getStores() an O(1) operation
 Key: HBASE-26032
 URL: https://issues.apache.org/jira/browse/HBASE-26032
 Project: HBase
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
 Attachments: Screen Shot 2021-06-24 at 3.56.33 PM.png

This is a relatively minor issue, but I did spot HRegion.getStores() popping up 
in my profiler.

Checking the code, I realized that HRegion.getStores() allocates a new 
ArrayList in it, converting the Collection<> to a List<>. That makes it O(n) in 
both space and time.

This conversion appears mostly unnecessary: production code only iterates the 
stores, so the new ArrayList object is thrown away immediately. Only a small 
amount of test code indexes into the stores.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26032) Make HRegion.getStores() an O(1) operation

2021-06-24 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-26032:

Priority: Minor  (was: Major)

> Make HRegion.getStores() an O(1) operation
> --
>
> Key: HBASE-26032
> URL: https://issues.apache.org/jira/browse/HBASE-26032
> Project: HBase
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: Screen Shot 2021-06-24 at 3.56.33 PM.png
>
>
> This is a relatively minor issue, but I did spot HRegion.getStores() popping 
> up in my profiler.
> Checking the code, I realized that HRegion.getStores() allocates a new 
> ArrayList in it, converting the Collection<> to a List<>. That makes it O(n) 
> in both space and time.
> This conversion appears mostly unnecessary: production code only iterates the 
> stores, so the new ArrayList object is thrown away immediately. Only a small 
> amount of test code indexes into the stores.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-26019) Remove reflections used in HBaseConfiguration.getPassword()

2021-06-22 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HBASE-26019.
-
Resolution: Fixed

Thanks again for the review, [~wchevreuil] [~vjasani] and [~tomscut]

The commit is cherry-picked to branch-2. Let me know if it should go to lower 
branches.

> Remove reflections used in HBaseConfiguration.getPassword()
> ---
>
> Key: HBASE-26019
> URL: https://issues.apache.org/jira/browse/HBASE-26019
> Project: HBase
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0
>
>
> HBaseConfiguration.getPassword() uses the Hadoop API Configuration.getPassword(). 
> The API was added in Hadoop 2.6.0, and reflection was used to access it. 
> It's time to remove the reflection and invoke the API directly (in HBase 3.0 
> as well as 2.x).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HBASE-26019) Remove reflections used in HBaseConfiguration.getPassword()

2021-06-21 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HBASE-26019:
---

Assignee: Wei-Chiu Chuang

> Remove reflections used in HBaseConfiguration.getPassword()
> ---
>
> Key: HBASE-26019
> URL: https://issues.apache.org/jira/browse/HBASE-26019
> Project: HBase
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> HBaseConfiguration.getPassword() uses the Hadoop API Configuration.getPassword(). 
> The API was added in Hadoop 2.6.0, and reflection was used to access it. 
> It's time to remove the reflection and invoke the API directly (in HBase 3.0 
> as well as 2.x).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-26019) Remove reflections used in HBaseConfiguration.getPassword()

2021-06-21 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HBASE-26019:
---

 Summary: Remove reflections used in 
HBaseConfiguration.getPassword()
 Key: HBASE-26019
 URL: https://issues.apache.org/jira/browse/HBASE-26019
 Project: HBase
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang


HBaseConfiguration.getPassword() uses the Hadoop API Configuration.getPassword(). 
The API was added in Hadoop 2.6.0, and reflection was used to access it. It's 
time to remove the reflection and invoke the API directly (in HBase 3.0 as 
well as 2.x).
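A minimal sketch of the direct call (the config key is illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Direct call, no reflection: getPassword() consults any configured
// credential providers first and falls back to the plain config value.
// Throws IOException if a provider lookup fails.
char[] pass = conf.getPassword("hbase.example.password.key");
String password = (pass != null) ? new String(pass) : null;
{code}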



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-26019) Remove reflections used in HBaseConfiguration.getPassword()

2021-06-21 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-26019:

Fix Version/s: 2.5.0
   3.0.0-alpha-1

> Remove reflections used in HBaseConfiguration.getPassword()
> ---
>
> Key: HBASE-26019
> URL: https://issues.apache.org/jira/browse/HBASE-26019
> Project: HBase
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0
>
>
> HBaseConfiguration.getPassword() uses the Hadoop API Configuration.getPassword(). 
> The API was added in Hadoop 2.6.0, and reflection was used to access it. 
> It's time to remove the reflection and invoke the API directly (in HBase 3.0 
> as well as 2.x).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HBASE-26007) java.io.IOException: Invalid token in javax.security.sasl.qop: ^DDI

2021-06-14 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17363369#comment-17363369
 ] 

Wei-Chiu Chuang edited comment on HBASE-26007 at 6/15/21, 4:20 AM:
---

You probably had an invalid configuration value in hdfs-site.xml.

Because there's no full stack trace, it's hard to know exactly which config is 
wrong, but try searching for "^DDI" in your config file.

Possible config keys:
dfs.encrypt.data.overwrite.downstream.new.qop
ingress.port.sasl.prop.
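For example, something along these lines to scan the client configs (illustrative):

{noformat}
grep -rn "qop" $HADOOP_CONF_DIR/*.xml
{noformat}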


was (Author: jojochuang):
You probably had an invalid configuration value in hdfs-site.xml.

Because there's no full stack trace, it's hard to know exactly which config is 
wrong, but try searching for "^DDI" in your config file.

dfs.encrypt.data.overwrite.downstream.new.qop


> java.io.IOException: Invalid token in javax.security.sasl.qop: ^DDI
> ---
>
> Key: HBASE-26007
> URL: https://issues.apache.org/jira/browse/HBASE-26007
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.3.5
>Reporter: Venkat A
>Priority: Blocker
>
> Hi All,
> We have Hadoop 3.2.2 and HBase 2.3.5 Versions installed. (java version 
> "1.8.0_291")
>  
> While bringing up the HBase master, I'm seeing the following error messages in 
> the HBase master log.
>  
> Other HDFS clients like Spark, MapReduce, Solr, etc. are able to write to HDFS, 
> but HBase is unable to write its meta files to HDFS, with the following exceptions.
>  
> > Summary of Error logs from hbase master 
> 2021-06-15 03:57:45,968 INFO [Thread-7] hdfs.DataStreamer: Exception in 
> createBlockOutputStream
>  java.io.IOException: Invalid token in javax.security.sasl.qop: ^DD
>  2021-06-15 03:57:45,939 WARN [Thread-7] hdfs.DataStreamer: Abandoning 
> BP-1583998547-10.10.10.3-1622148262434:blk_1073743393_2570
>  2021-06-15 03:57:45,946 WARN [Thread-7] hdfs.DataStreamer: Excluding 
> datanode 
> DatanodeInfoWithStorage[10.10.10.3:50010,DS-281c3377-2bc1-47ea-8302-43108ee69430,DISK]
>  2021-06-15 03:57:45,994 WARN [Thread-7] hdfs.DataStreamer: DataStreamer 
> Exception
>  org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
> /hbase/data/data/hbase/meta/.tmp/.tableinfo.01 could only be written 
> to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 
> node(s) are excluded in this operation.
>  2021-06-15 03:57:46,023 INFO [Thread-9] hdfs.DataStreamer: Exception in 
> createBlockOutputStream
>  java.io.IOException: Invalid token in javax.security.sasl.qop: ^DDI
>  2021-06-15 03:57:46,035 INFO [Thread-9] hdfs.DataStreamer: Exception in 
> createBlockOutputStream
>  java.io.IOException: Invalid token in javax.security.sasl.qop: ^DD
>  2021-06-15 03:57:46,508 ERROR [main] regionserver.HRegionServer: Failed 
> construction RegionServer
>  java.io.IOException: Failed update hbase:meta table descriptor
>  2021-06-15 03:57:46,509 ERROR [main] master.HMasterCommandLine: Master 
> exiting
>  java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster.
>  Caused by: java.io.IOException: Failed update hbase:meta table descriptor
>  
> Not sure what the root cause behind this is. Any comments/suggestions on this 
> are much appreciated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-26007) java.io.IOException: Invalid token in javax.security.sasl.qop: ^DDI

2021-06-14 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17363369#comment-17363369
 ] 

Wei-Chiu Chuang commented on HBASE-26007:
-

You probably had an invalid configuration value in hdfs-site.xml.

Because there's no full stack trace, it's hard to know exactly which config is 
wrong, but try searching for "^DDI" in your config file.

dfs.encrypt.data.overwrite.downstream.new.qop


> java.io.IOException: Invalid token in javax.security.sasl.qop: ^DDI
> ---
>
> Key: HBASE-26007
> URL: https://issues.apache.org/jira/browse/HBASE-26007
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.3.5
>Reporter: Venkat A
>Priority: Blocker
>
> Hi All,
> We have Hadoop 3.2.2 and HBase 2.3.5 Versions installed. (java version 
> "1.8.0_291")
>  
> While bringing up the HBase master, I'm seeing the following error messages in 
> the HBase master log.
>  
> Other HDFS clients like Spark, MapReduce, Solr, etc. are able to write to HDFS, 
> but HBase is unable to write its meta files to HDFS, with the following exceptions.
>  
> > Summary of Error logs from hbase master 
> 2021-06-15 03:57:45,968 INFO [Thread-7] hdfs.DataStreamer: Exception in 
> createBlockOutputStream
>  java.io.IOException: Invalid token in javax.security.sasl.qop: ^DD
>  2021-06-15 03:57:45,939 WARN [Thread-7] hdfs.DataStreamer: Abandoning 
> BP-1583998547-10.10.10.3-1622148262434:blk_1073743393_2570
>  2021-06-15 03:57:45,946 WARN [Thread-7] hdfs.DataStreamer: Excluding 
> datanode 
> DatanodeInfoWithStorage[10.10.10.3:50010,DS-281c3377-2bc1-47ea-8302-43108ee69430,DISK]
>  2021-06-15 03:57:45,994 WARN [Thread-7] hdfs.DataStreamer: DataStreamer 
> Exception
>  org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
> /hbase/data/data/hbase/meta/.tmp/.tableinfo.01 could only be written 
> to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 
> node(s) are excluded in this operation.
>  2021-06-15 03:57:46,023 INFO [Thread-9] hdfs.DataStreamer: Exception in 
> createBlockOutputStream
>  java.io.IOException: Invalid token in javax.security.sasl.qop: ^DDI
>  2021-06-15 03:57:46,035 INFO [Thread-9] hdfs.DataStreamer: Exception in 
> createBlockOutputStream
>  java.io.IOException: Invalid token in javax.security.sasl.qop: ^DD
>  2021-06-15 03:57:46,508 ERROR [main] regionserver.HRegionServer: Failed 
> construction RegionServer
>  java.io.IOException: Failed update hbase:meta table descriptor
>  2021-06-15 03:57:46,509 ERROR [main] master.HMasterCommandLine: Master 
> exiting
>  java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster.
>  Caused by: java.io.IOException: Failed update hbase:meta table descriptor
>  
> Not sure what the root cause behind this is. Any comments/suggestions on this 
> are much appreciated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-25920) Support Hadoop 3.3.1

2021-05-31 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17354751#comment-17354751
 ] 

Wei-Chiu Chuang commented on HBASE-25920:
-

bq. Wei-Chiu Chuang yeah these looks like changes to transitive dependencies. 
You're reverting all these for a patch release?
The BouncyCastle change was committed in 3.3.1 only, so no impact.

The lz4 and snappy codec change is a little too much for a patch release imo

> Support Hadoop 3.3.1
> 
>
> Key: HBASE-25920
> URL: https://issues.apache.org/jira/browse/HBASE-25920
> Project: HBase
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0
>
>
> Hadoop 3.3.1 is a big release, quite different from 3.3.0.
> Filing this jira to track support for Hadoop 3.3.1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-25920) Support Hadoop 3.3.1

2021-05-28 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-25920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17353418#comment-17353418
 ] 

Wei-Chiu Chuang commented on HBASE-25920:
-

For TestBackupSmallTests, we will revert the BouncyCastle change in Hadoop 
3.3.1 RC2. So that's not needed now.

> Support Hadoop 3.3.1
> 
>
> Key: HBASE-25920
> URL: https://issues.apache.org/jira/browse/HBASE-25920
> Project: HBase
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> Hadoop 3.3.1 is a big release, quite different from 3.3.0.
> Filing this jira to track support for Hadoop 3.3.1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

