[jira] [Commented] (HDFS-17146) Use the dfsadmin -reconfig command to initiate reconfiguration on all decommissioning datanodes.

2024-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17814408#comment-17814408
 ] 

ASF GitHub Bot commented on HDFS-17146:
---

hadoop-yetus commented on PR #6504:
URL: https://github.com/apache/hadoop/pull/6504#issuecomment-1927285846

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 42s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m 32s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 220m 11s |  |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 354m 37s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/6/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6504 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint |
   | uname | Linux 4e2d111fc67d 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 33ff6f18f5e52802cb3b5d03df192c6179d76bec |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/6/testReport/ |
   | Max. process+thread count | 3435 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Assigned] (HDFS-17372) CommandProcessingThread#queue should use LinkedBlockingDeque to prevent high priority command blocked by low priority command

2024-02-05 Thread farmmamba (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

farmmamba reassigned HDFS-17372:


Assignee: farmmamba

> CommandProcessingThread#queue should use LinkedBlockingDeque to prevent high 
> priority command blocked by low priority command
> -
>
> Key: HDFS-17372
> URL: https://issues.apache.org/jira/browse/HDFS-17372
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17372) CommandProcessingThread#queue should use LinkedBlockingDeque to prevent high priority command blocked by low priority command

2024-02-05 Thread farmmamba (Jira)
farmmamba created HDFS-17372:


 Summary: CommandProcessingThread#queue should use 
LinkedBlockingDeque to prevent high priority command blocked by low priority 
command
 Key: HDFS-17372
 URL: https://issues.apache.org/jira/browse/HDFS-17372
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: farmmamba
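
The gist of the proposal, as a minimal sketch: a deque-backed queue lets urgent commands be placed at the head instead of waiting behind earlier low-priority work. The sketch below is illustrative only, with a simplified command type; it is not the actual CommandProcessingThread code.

```java
import java.util.concurrent.LinkedBlockingDeque;

// Illustrative sketch of the proposed idea only; the real queue lives in the
// DataNode's CommandProcessingThread and holds heartbeat-delivered commands.
public class CommandQueueSketch {
  private final LinkedBlockingDeque<Runnable> queue = new LinkedBlockingDeque<>();

  // High-priority commands go to the head of the deque so they are not
  // blocked behind earlier low-priority commands.
  public void enqueue(Runnable command, boolean highPriority) throws InterruptedException {
    if (highPriority) {
      queue.putFirst(command);
    } else {
      queue.putLast(command); // normal FIFO behaviour
    }
  }

  public void processLoop() throws InterruptedException {
    while (!Thread.currentThread().isInterrupted()) {
      queue.takeFirst().run(); // always serves the head of the deque
    }
  }
}
```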






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17316) Compatibility Benchmark over HCFS Implementations

2024-02-05 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17814392#comment-17814392
 ] 

Steve Loughran commented on HDFS-17316:
---

bq. However, only one FS instance can be supported per run. This is because the benchmark should be simple and fast.

That's fine; what I would like is to be able to identify a profile to use and have all the settings picked up from there. For the s3a test suites I can choose which store to run against, but the other options (storage class, fips, aws roles, ...) have to be configured too. Switching from one store to another requires me to comment out the ones not available. If we could be profile driven from the start, I'd do a run with -Dprofile=google vs -Dprofile=aws-s3express and have the relevant profiles picked up with my configs for each of them.

> Compatibility Benchmark over HCFS Implementations
> -
>
> Key: HDFS-17316
> URL: https://issues.apache.org/jira/browse/HDFS-17316
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Han Liu
>Priority: Major
> Attachments: HDFS Compatibility Benchmark Design.pdf
>
>
> *Background:* Hadoop-Compatible File System (HCFS) is a core concept in the
> big data storage ecosystem, providing unified interfaces and generally clear
> semantics, and it has become the de facto standard for industry storage
> systems to follow and conform with. There have been a series of HCFS
> implementations in Hadoop, such as S3AFileSystem for Amazon's S3 Object
> Store, WASB for Microsoft's Azure Blob Storage and the OSS connector for
> Alibaba Cloud Object Storage, with more from storage service providers on
> their own.
> *Problems:* However, as indicated by introduction.md, there is no formal
> suite to do a compatibility assessment of a file system for all such HCFS
> implementations. Thus, whether the functionality is well accomplished and
> meets the core compatibility expectations mainly relies on the service
> provider's own report. Meanwhile, Hadoop keeps developing, and new features
> are continuously being contributed to HCFS interfaces for existing
> implementations to follow and adopt; Hadoop therefore also needs a tool to
> quickly assess whether these features are supported by a specific HCFS
> implementation. Besides, the familiar hadoop command line tool (the hdfs
> shell) is used to interact directly with an HCFS storage system, where most
> commands correspond to specific HCFS interfaces and work well. Still, there
> are cases that are complicated and may not work, like the expunge command.
> To check such commands for an HCFS, we also need an approach to figure them
> out.
> *Proposal:* Accordingly, we propose to define a formal HCFS compatibility
> benchmark and provide a corresponding tool to do the compatibility
> assessment for an HCFS storage system. The benchmark and tool should
> consider both HCFS interfaces and hdfs shell commands. Different scenarios
> require different kinds of compatibility; to accommodate this, we could
> define different suites in the benchmark.
> *Benefits:* We intend the benchmark and tool to be useful for both storage
> providers and storage users. For end users, it can be used to evaluate the
> compatibility level and determine whether the storage system in question is
> suitable for the required scenarios. For storage providers, it helps to
> quickly generate an objective and reliable report about core functions of
> the storage service. For instance, if an HCFS scores 100% on a suite named
> 'tpcds', that demonstrates that all functions needed by a tpcds program are
> properly supported. It is also a guide indicating how storage service
> abilities can map to HCFS interfaces, such as storage class on S3.
> Any thoughts? Comments and feedback are most welcome. Thanks in advance.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17299) HDFS is not rack failure tolerant while creating a new file.

2024-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17814535#comment-17814535
 ] 

ASF GitHub Bot commented on HDFS-17299:
---

shahrs87 commented on code in PR #6513:
URL: https://github.com/apache/hadoop/pull/6513#discussion_r1479041865


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java:
##
@@ -1607,8 +1607,11 @@ private void transfer(final DatanodeInfo src, final DatanodeInfo[] targets,
* it can be written to.
* This happens when a file is appended or data streaming fails
* It keeps on trying until a pipeline is setup
+   *
+   * Returns boolean whether pipeline was setup successfully or not.
+   * This boolean is used upstream on whether to continue creating pipeline or throw exception
*/
-  private void setupPipelineForAppendOrRecovery() throws IOException {
+  private boolean setupPipelineForAppendOrRecovery() throws IOException {

Review Comment:
   We are changing the return type of the `setupPipelineForAppendOrRecovery` and `setupPipelineInternal` methods.
   IIRC this is the reason: `handleBadDatanode` can silently fail to handle a bad datanode [here](https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1700-L1706) and `setupPipelineInternal` will silently return [here](https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1637-L1638) without bubbling up the exception.
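
   A hypothetical, simplified illustration of the caller-side contract this change enables (names are stand-ins, not the actual DataStreamer code):

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicReference;

// Sketch only: a boolean return lets a setup failure bubble up to the caller
// instead of passing silently, as the review comment above describes.
class PipelineSetupSketch {
  private final AtomicReference<IOException> lastException = new AtomicReference<>();

  private boolean setupPipelineForAppendOrRecovery() {
    lastException.set(new IOException("no datanodes available"));
    return false; // a void return previously hid this failure from the caller
  }

  void runRecovery() throws IOException {
    if (!setupPipelineForAppendOrRecovery()) {
      // the caller can now stop creating the pipeline and surface the error
      throw lastException.get();
    }
  }
}
```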
   



##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java:
##
@@ -1618,24 +1621,33 @@ private void setupPipelineForAppendOrRecovery() throws IOException {
   LOG.warn(msg);
   lastException.set(new IOException(msg));
   streamerClosed = true;
-  return;
+  return false;
 }
-setupPipelineInternal(nodes, storageTypes, storageIDs);
+return setupPipelineInternal(nodes, storageTypes, storageIDs);
   }
 
-  protected void setupPipelineInternal(DatanodeInfo[] datanodes,
+  protected boolean setupPipelineInternal(DatanodeInfo[] datanodes,
   StorageType[] nodeStorageTypes, String[] nodeStorageIDs)
   throws IOException {
 boolean success = false;
 long newGS = 0L;
+boolean isCreateStage = BlockConstructionStage.PIPELINE_SETUP_CREATE == stage;
 while (!success && !streamerClosed && dfsClient.clientRunning) {
   if (!handleRestartingDatanode()) {
-return;
+return false;
+  }
+
+  final boolean isRecovery = errorState.hasInternalError() && !isCreateStage;
+
+  // During create stage, if we remove a node (nodes.length - 1)
+  //  min replication should still be satisfied.
+  if (isCreateStage && !(dfsClient.dtpReplaceDatanodeOnFailureReplication > 0 &&

Review Comment:
   Reason behind adding this check here:
   We are already doing this check in the catch block of the `addDatanode2ExistingPipeline` method [here](https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1528-L1539).
   But when the `isAppend` flag is set to `false` and we are in the `PIPELINE_SETUP_CREATE` phase, we exit early from the `addDatanode2ExistingPipeline` method [here](https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1489-L1492).
   
   Let's say the replication factor is 3, the config property `dfs.client.block.write.replace-datanode-on-failure.min-replication` is set to 3, and there is one bad node in the pipeline. Even if we have set the config property to `ReplaceDatanodeOnFailure.CONDITION_TRUE`, the code will exit the `addDatanode2ExistingPipeline` method early since `isAppend` is false and the stage is `PIPELINE_SETUP_CREATE`. Assuming there are NO available nodes in the rack, the pipeline will succeed with 2 nodes, which violates the config property `dfs.client.block.write.replace-datanode-on-failure.min-replication`.
   
   Having written all of this, I realized that even if there are some good nodes available in the rack, we will exit early after this patch. Should we move this check after the `handleDatanodeReplacement` method?  @ritegarg
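
   For concreteness, a hypothetical reduction of the check being debated here (names are simplified stand-ins for `dfsClient.dtpReplaceDatanodeOnFailureReplication` and `nodes.length`, not the actual patch):

```java
// Hypothetical, simplified form of the create-stage check discussed above.
class MinReplicationCheckSketch {
  // minReplication stands in for dfsClient.dtpReplaceDatanodeOnFailureReplication;
  // pipelineLength stands in for nodes.length.
  static boolean mayDropOneNodeDuringCreate(int pipelineLength, int minReplication) {
    // Removing one bad node leaves (pipelineLength - 1) datanodes; the
    // create-stage pipeline may only continue if that still satisfies the
    // configured minimum replication.
    return minReplication > 0 && (pipelineLength - 1) >= minReplication;
  }
}
```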





> HDFS is not rack failure tolerant while creating a new file.
> 
>
> Key: HDFS-17299
> URL: https://issues.apache.org/jira/browse/HDFS-17299
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.1
>Reporter: Rushabh Shah
>Assignee: Ritesh
>Priority: Critical
>  Labels: pull-request-available
> Attachments: repro.patch
>
>
> 

[jira] [Commented] (HDFS-17181) WebHDFS not considering whether a DN is good when called from outside the cluster

2024-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17814507#comment-17814507
 ] 

ASF GitHub Bot commented on HDFS-17181:
---

lfrancke commented on PR #6108:
URL: https://github.com/apache/hadoop/pull/6108#issuecomment-1928029263

   This has now been running in production since September without a problem.
   
   I'll update the branch.




> WebHDFS not considering whether a DN is good when called from outside the 
> cluster
> -
>
> Key: HDFS-17181
> URL: https://issues.apache.org/jira/browse/HDFS-17181
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, webhdfs
>Affects Versions: 3.3.6
>Reporter: Lars Francke
>Priority: Major
>  Labels: pull-request-available
> Attachments: Test_fix_for_HDFS-171811.patch
>
>
> When calling WebHDFS to create a file (I'm sure the same problem occurs for 
> other actions e.g. OPEN but I haven't checked all of them yet) it will 
> happily redirect to nodes that are in maintenance.
> The reason is in the 
> [{{chooseDatanode}}|https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java#L307C9-L315]
>  method in {{NamenodeWebHdfsMethods}} where it will only call the 
> {{BlockPlacementPolicy}} (which considers all these edge cases) in case the 
> {{remoteAddr}} (i.e. the address making the request to WebHDFS) is also 
> running a DataNode.
>  
> In all other cases it just refers to 
> [{{NetworkTopology#chooseRandom}}|https://github.com/apache/hadoop/blob/rel/release-3.3.6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java#L342-L343]
>  which does not consider any of these circumstances (e.g. load, maintenance).
> I don't see a reason not to always defer to the placement policy, and we're
> currently testing a patch to do just that.
> I have attached a draft patch for now.
>  
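
A condensed, hypothetical model of the two code paths described above (stand-in types; the real logic is in the NamenodeWebHdfsMethods source linked in the description):

```java
import java.util.List;
import java.util.Random;

// Condensed, hypothetical model of the behaviour described above; not the
// actual NamenodeWebHdfsMethods#chooseDatanode code.
class DatanodeChoiceSketch {
  record Node(String host, boolean inMaintenance) {}

  private final Random random = new Random();

  Node choose(List<Node> nodes, String remoteAddr) {
    boolean clientIsDatanode =
        nodes.stream().anyMatch(n -> n.host().equals(remoteAddr));
    if (clientIsDatanode) {
      // BlockPlacementPolicy path: maintenance (and load, etc.) is honoured
      return nodes.stream().filter(n -> !n.inMaintenance()).findFirst().orElseThrow();
    }
    // NetworkTopology#chooseRandom path: any node can be returned,
    // including one in maintenance
    return nodes.get(random.nextInt(nodes.size()));
  }
}
```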



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17362) RBF: RouterObserverReadProxyProvider should use ConfiguredFailoverProxyProvider internally

2024-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17814518#comment-17814518
 ] 

ASF GitHub Bot commented on HDFS-17362:
---

simbadzina commented on code in PR #6510:
URL: https://github.com/apache/hadoop/pull/6510#discussion_r1478942086


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RouterObserverReadProxyProvider.java:
##
@@ -84,7 +84,8 @@ public class RouterObserverReadProxyProvider extends 
AbstractNNFailoverProxyP
 
   public RouterObserverReadProxyProvider(Configuration conf, URI uri, Class 
xface,
   HAProxyFactory factory) {
-this(conf, uri, xface, factory, new IPFailoverProxyProvider<>(conf, uri, 
xface, factory));
+this(conf, uri, xface, factory,
+new ConfiguredFailoverProxyProvider<>(conf, uri, xface, factory));

Review Comment:
   Yes, I was suggesting adding a new parameter, like the `dfs.client.failover.router.internal.proxy.provider` you named.
   
   Creating a new class is also a good solution. I'm a bit worried, though, about the update story for clients who are already using the existing class.
   
   The new-parameter approach makes the update backward compatible with existing client configs.
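
   As an illustration of the new-parameter approach, a sketch of how the internal provider could be resolved (`Configuration#getClass` is a real Hadoop API, but the key name is only proposed in this discussion and the surrounding class is hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.namenode.ha.AbstractNNFailoverProxyProvider;
import org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider;

// Hypothetical wiring for the proposed parameter; existing configs that never
// set the key would keep working via the default.
class InternalProviderSelectionSketch {
  @SuppressWarnings("rawtypes")
  static Class<? extends AbstractNNFailoverProxyProvider> resolve(Configuration conf) {
    return conf.getClass(
        "dfs.client.failover.router.internal.proxy.provider", // proposed key, not in Hadoop yet
        ConfiguredFailoverProxyProvider.class,                // default discussed in this PR
        AbstractNNFailoverProxyProvider.class);
  }
}
```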





> RBF: RouterObserverReadProxyProvider should use 
> ConfiguredFailoverProxyProvider internally
> --
>
> Key: HDFS-17362
> URL: https://issues.apache.org/jira/browse/HDFS-17362
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
>
> Currently, RouterObserverReadProxyProvider is using IPFailoverProxyProvider, 
> while ObserverReadProxyProvider is using ConfiguredFailoverProxyProvider.  If 
> we are to align RouterObserverReadProxyProvider with 
> ObserverReadProxyProvider, RouterObserverReadProxyProvider should internally 
> use ConfiguredFailoverProxyProvider.  Moreover, IPFailoverProxyProvider has 
> an issue with resolving HA configurations. (For example, 
> IPFailoverProxyProvider cannot resolve hdfs://router-service.)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17370) Fix junit dependency for running parameterized tests in hadoop-hdfs-rbf

2024-02-05 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-17370:

Fix Version/s: 3.4.1
   3.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Fix junit dependency for running parameterized tests in hadoop-hdfs-rbf
> ---
>
> Key: HDFS-17370
> URL: https://issues.apache.org/jira/browse/HDFS-17370
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1, 3.5.0
>
>
> We need to add junit-jupiter-engine dependency for running parameterized 
> tests in hadoop-hdfs-rbf.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17362) RBF: RouterObserverReadProxyProvider should use ConfiguredFailoverProxyProvider internally

2024-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17814577#comment-17814577
 ] 

ASF GitHub Bot commented on HDFS-17362:
---

tasanuma commented on code in PR #6510:
URL: https://github.com/apache/hadoop/pull/6510#discussion_r1479162410


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RouterObserverReadProxyProvider.java:
##
@@ -84,7 +84,8 @@ public class RouterObserverReadProxyProvider extends 
AbstractNNFailoverProxyP
 
   public RouterObserverReadProxyProvider(Configuration conf, URI uri, Class 
xface,
   HAProxyFactory factory) {
-this(conf, uri, xface, factory, new IPFailoverProxyProvider<>(conf, uri, 
xface, factory));
+this(conf, uri, xface, factory,
+new ConfiguredFailoverProxyProvider<>(conf, uri, xface, factory));

Review Comment:
   I'd prefer to create a new class. I'd like to keep the existing class as is 
and create a new class named 
`RouterObserverReadConfiguredFailoverProxyProvider` using 
`ConfiguredFailoverProxyProvider` internally. I will update the PR soon.





> RBF: RouterObserverReadProxyProvider should use 
> ConfiguredFailoverProxyProvider internally
> --
>
> Key: HDFS-17362
> URL: https://issues.apache.org/jira/browse/HDFS-17362
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
>
> Currently, RouterObserverReadProxyProvider is using IPFailoverProxyProvider, 
> while ObserverReadProxyProvider is using ConfiguredFailoverProxyProvider.  If 
> we are to align RouterObserverReadProxyProvider with 
> ObserverReadProxyProvider, RouterObserverReadProxyProvider should internally 
> use ConfiguredFailoverProxyProvider.  Moreover, IPFailoverProxyProvider has 
> an issue with resolving HA configurations. (For example, 
> IPFailoverProxyProvider cannot resolve hdfs://router-service.)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17370) Fix junit dependency for running parameterized tests in hadoop-hdfs-rbf

2024-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17814579#comment-17814579
 ] 

ASF GitHub Bot commented on HDFS-17370:
---

tasanuma commented on PR #6522:
URL: https://github.com/apache/hadoop/pull/6522#issuecomment-1928690446

   Thanks for your review, @simbadzina.




> Fix junit dependency for running parameterized tests in hadoop-hdfs-rbf
> ---
>
> Key: HDFS-17370
> URL: https://issues.apache.org/jira/browse/HDFS-17370
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
>
> We need to add junit-jupiter-engine dependency for running parameterized 
> tests in hadoop-hdfs-rbf.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17372) CommandProcessingThread#queue should use LinkedBlockingDeque to prevent high priority command blocked by low priority command

2024-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17814616#comment-17814616
 ] 

ASF GitHub Bot commented on HDFS-17372:
---

hfutatzhanghb opened a new pull request, #6530:
URL: https://github.com/apache/hadoop/pull/6530

   ### Description of PR
   Refer to HDFS-17372.
   
   
   
   




> CommandProcessingThread#queue should use LinkedBlockingDeque to prevent high 
> priority command blocked by low priority command
> -
>
> Key: HDFS-17372
> URL: https://issues.apache.org/jira/browse/HDFS-17372
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17372) CommandProcessingThread#queue should use LinkedBlockingDeque to prevent high priority command blocked by low priority command

2024-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17372:
--
Labels: pull-request-available  (was: )

> CommandProcessingThread#queue should use LinkedBlockingDeque to prevent high 
> priority command blocked by low priority command
> -
>
> Key: HDFS-17372
> URL: https://issues.apache.org/jira/browse/HDFS-17372
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17372) CommandProcessingThread#queue should use LinkedBlockingDeque to prevent high priority command blocked by low priority command

2024-02-05 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17814630#comment-17814630
 ] 

ASF GitHub Bot commented on HDFS-17372:
---

hfutatzhanghb commented on PR #6530:
URL: https://github.com/apache/hadoop/pull/6530#issuecomment-1928872491

   @Hexiaoqiao @zhangshuyan0 @tasanuma @tomscut Hi, could you please take a look at this problem? If needed, I will post a UT soon.




> CommandProcessingThread#queue should use LinkedBlockingDeque to prevent high 
> priority command blocked by low priority command
> -
>
> Key: HDFS-17372
> URL: https://issues.apache.org/jira/browse/HDFS-17372
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org