[ https://issues.apache.org/jira/browse/HDFS-17680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18014781#comment-18014781 ]

ASF GitHub Bot commented on HDFS-17680:
---------------------------------------

hadoop-yetus commented on PR #7884:
URL: https://github.com/apache/hadoop/pull/7884#issuecomment-3199115941

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  26m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK Ubuntu-11.0.27+6-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK Private Build-1.8.0_452-8u452-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK Ubuntu-11.0.27+6-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK Private Build-1.8.0_452-8u452-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 10s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK Ubuntu-11.0.27+6-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK Private Build-1.8.0_452-8u452-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 29s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7884/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 8 unchanged - 0 fixed = 9 total (was 8)  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK Ubuntu-11.0.27+6-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  the patch passed with JDK Private Build-1.8.0_452-8u452-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 41s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 26s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 118m 18s |  |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 201m 42s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.51 ServerAPI=1.51 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7884/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7884 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux af7c0aa3b091 5.15.0-143-generic #153-Ubuntu SMP Fri Jun 13 19:10:45 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ac9f47c5e3eceede02ed922cc6a85ccff751a714 |
   | Default Java | Private Build-1.8.0_452-8u452-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.27+6-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_452-8u452-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7884/1/testReport/ |
   | Max. process+thread count | 3621 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7884/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> HDFS ui in the datanodes doesn't redirect to https when dfs.http.policy is 
> HTTPS_ONLY
> -------------------------------------------------------------------------------------
>
>                 Key: HDFS-17680
>                 URL: https://issues.apache.org/jira/browse/HDFS-17680
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, ui
>    Affects Versions: 3.4.1
>            Reporter: Luis Pigueiras
>            Priority: Minor
>              Labels: pull-request-available
>
> _(I'm not sure whether this belongs in HDFS or in HADOOP; feel free to move it 
> if this is not the correct place)_
> We have noticed that, with an HTTPS_ONLY configuration, clicking a datanode 
> link in the namenode UI produces a wrong redirection.
> If you visit the HDFS UI of a namenode at https://<node>:50070/ -> Datanodes 
> and click on a datanode, you get redirected from https://<node>:9865 to 
> http://<node>:9865. The 302 should redirect to https, not to http. If you curl 
> the link exposed on the website, you are redirected to the wrong place:
> {code}
> curl -k https://testing2475891.example.org:9865 -vvv
> ...
> < HTTP/1.1 302 Found
> < Location: http://testing2475891.example.org:9865/index.html 
> {code}
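The downgraded Location header above can be checked mechanically. A minimal sketch in plain Python (stdlib only; the helper name is hypothetical, not part of Hadoop) that flags a redirect whose target drops the scheme of the original request:

```python
from urllib.parse import urlparse

def redirect_keeps_scheme(request_url: str, location: str) -> bool:
    """Return True when the redirect target preserves the scheme of the request URL."""
    return urlparse(request_url).scheme == urlparse(location).scheme

# The 302 from the report: an https:// request answered with an http:// target.
print(redirect_keeps_scheme(
    "https://testing2475891.example.org:9865",
    "http://testing2475891.example.org:9865/index.html",
))  # False: the datanode downgraded https to http
```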
> This issue is present in our 3.3.6 deployment, and it is also present in 3.4.1; 
> I managed to reproduce it with the following steps:
>  - Download latest version (binary from: 
> [https://hadoop.apache.org/releases.html] -> 3.4.1)
>  - Uncompress the binaries:
> {code:java}
> tar -xvf hadoop-3.4.1.tar.gz
> cd hadoop-3.4.1
> {code}
>  - Generate dummy certs for TLS and move them to {{etc/hadoop}}
> {code:java}
> keytool -genkeypair -alias hadoop -keyalg RSA -keystore hadoop.keystore 
> -storepass changeit -validity 365
> keytool -export -alias hadoop -keystore hadoop.keystore -file hadoop.cer 
> -storepass changeit
> keytool -import -alias hadoop -file hadoop.cer -keystore hadoop.truststore 
> -storepass changeit -noprompt
> cp hadoop.* etc/hadoop
> {code}
>  - Add this to {{etc/hadoop/hadoop-env.sh}}
> {code:java}
> export JAVA_HOME=/usr/lib/jvm/java-11-openjdk
> export HDFS_NAMENODE_USER=root
> export HDFS_DATANODE_USER=root
> export HDFS_SECONDARYNAMENODE_USER=root
> {code}
>  - Create an {{etc/hadoop/ssl-server.xml}} with:
> {code:java}
> <configuration>
> <property>
>   <name>ssl.server.truststore.location</name>
>   <value>/root/hadoop/hadoop-3.4.1/etc/hadoop/hadoop.truststore</value>
>   <description>Truststore to be used by NN and DN. Must be specified.
>   </description>
> </property>
> <property>
>   <name>ssl.server.truststore.password</name>
>   <value>changeit</value>
>   <description>Optional. Default value is "".
>   </description>
> </property>
> <property>
>   <name>ssl.server.truststore.type</name>
>   <value>jks</value>
>   <description>Optional. The keystore file format, default value is "jks".
>   </description>
> </property>
> <property>
>   <name>ssl.server.truststore.reload.interval</name>
>   <value>10000</value>
>   <description>Truststore reload check interval, in milliseconds.
>   Default value is 10000 (10 seconds).
>   </description>
> </property>
> <property>
>   <name>ssl.server.keystore.location</name>
>   <value>/root/hadoop/hadoop-3.4.1/etc/hadoop/hadoop.keystore</value>
>   <description>Keystore to be used by NN and DN. Must be specified.
>   </description>
> </property>
> <property>
>   <name>ssl.server.keystore.password</name>
>   <value>changeit</value>
>   <description>Must be specified.
>   </description>
> </property>
> <property>
>   <name>ssl.server.keystore.keypassword</name>
>   <value>changeit</value>
>   <description>Must be specified.
>   </description>
> </property>
> <property>
>   <name>ssl.server.keystore.type</name>
>   <value>jks</value>
>   <description>Optional. The keystore file format, default value is "jks".
>   </description>
> </property>
> <property>
>   <name>ssl.server.exclude.cipher.list</name>
>   <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
>   SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,
>   SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,
>   SSL_RSA_WITH_RC4_128_MD5</value>
>   <description>Optional. The weak security cipher suites that you want 
> excluded
>   from SSL communication.</description>
> </property>
> </configuration>
> {code}
>  - hdfs-site.xml:
> {code:java}
> <?xml version="1.0" encoding="UTF-8"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> <!--
>   Licensed under the Apache License, Version 2.0 (the "License");
>   you may not use this file except in compliance with the License.
>   You may obtain a copy of the License at
>     http://www.apache.org/licenses/LICENSE-2.0
>   Unless required by applicable law or agreed to in writing, software
>   distributed under the License is distributed on an "AS IS" BASIS,
>   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>   See the License for the specific language governing permissions and
>   limitations under the License. See accompanying LICENSE file.
> -->
> <!-- Put site-specific property overrides in this file. -->
> <configuration>
>     <property>
>         <name>dfs.replication</name>
>         <value>1</value>
>     </property>
> <property>
>   <name>dfs.http.policy</name>
>   <value>HTTPS_ONLY</value>
> </property>
> <property>
>     <name>dfs.https.enable</name>
>     <value>true</value>
> </property>
> <property>
>     <name>dfs.namenode.https-address</name>
>     <value>0.0.0.0:50070</value>
> </property>
> <property>
>   <name>dfs.https.server.keystore.resource</name>
>   <value>ssl-server.xml</value>
> </property>
> </configuration>
> {code}
>  - core-site.xml:
> {code:java}
> <configuration>
>     <property>
>         <name>fs.defaultFS</name>
>         <value>hdfs://localhost:9000</value>
>     </property>
> <property>
>   <name>hadoop.ssl.enabled</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hadoop.ssl.keystores.factory.class</name>
>   <value>org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory</value>
> </property>
> <property>
>   <name>hadoop.ssl.server.keystore.resource</name>
>   <value>hadoop.keystore</value>
> </property>
> <property>
>   <name>hadoop.ssl.server.keystore.password</name>
>   <value>changeit</value>
> </property>
> <property>
>   <name>hadoop.ssl.server.truststore.resource</name>
>   <value>hadoop.truststore</value>
> </property>
> <property>
>   <name>hadoop.ssl.server.truststore.password</name>
>   <value>changeit</value>
> </property>
> </configuration>
> {code}
>  - Now you can format the namenode and start HDFS:
> {code:java}
> bin/hdfs namenode -format
> sbin/start-dfs.sh
> {code}
>  - If you visit https://<node>:50070/ -> Datanodes and click on a datanode, 
> you get redirected from https://<node>:9865 to http://<node>:9865:
> {code}
>  curl -k https://testing2475891.example.org:9865 -vvv
> ...
> < HTTP/1.1 302 Found
> < Location: http://testing2475891.example.org:9865/index.html
> {code}
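To script the last verification step instead of reading curl output by hand, a small probe can issue the request without following redirects and inspect the Location header. This is a sketch, not part of the report: certificate verification is disabled because the repro uses a self-signed cert, and the host name shown in the usage comment is the placeholder from this report.

```python
import http.client
import ssl

def fetch_location(host: str, port: int = 9865) -> str:
    """Issue one GET over TLS without following redirects; return the Location header."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # self-signed cert from the repro steps
    conn = http.client.HTTPSConnection(host, port, context=ctx)
    try:
        conn.request("GET", "/")
        return conn.getresponse().getheader("Location", "")
    finally:
        conn.close()

def is_downgraded(location: str) -> bool:
    """True when an HTTPS endpoint redirected the client to plain http."""
    return location.startswith("http://")

# Network call; run manually against a live datanode, e.g.:
#   loc = fetch_location("testing2475891.example.org")
#   print(loc, is_downgraded(loc))
```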



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
