[ https://issues.apache.org/jira/browse/HADOOP-18802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17763158#comment-17763158 ]
ASF GitHub Bot commented on HADOOP-18802:
-----------------------------------------
hadoop-yetus commented on PR #6040:
URL: https://github.com/apache/hadoop/pull/6040#issuecomment-1711937408
:broken_heart: **-1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 31s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 13m 33s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 20m 6s | | trunk passed |
| +1 :green_heart: | compile | 10m 45s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | compile | 9m 47s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 2m 24s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 19s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 56s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 2m 7s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 3m 46s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 3s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 26s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 21s | | the patch passed |
| +1 :green_heart: | compile | 9m 43s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javac | 9m 43s | | the patch passed |
| +1 :green_heart: | compile | 9m 33s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | javac | 9m 33s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 2m 22s | | the patch passed |
| +1 :green_heart: | mvnsite | 2m 18s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 52s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 2m 6s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 3m 58s | | the patch passed |
| +1 :green_heart: | shadedclient | 22m 57s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 16m 31s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 195m 8s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 59s | | The patch does not generate ASF License warnings. |
| | | 362m 45s | | |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6040/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6040 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint |
| uname | Linux 3bdccc0a32d5 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 55444f8f2f27eaf1b09e5331b98630133b525eba |
| Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6040/1/testReport/ |
| Max. process+thread count | 3145 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6040/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
This message was automatically generated.
> Collision of config key name fs.viewfs.mounttable.default.name.key to other keys that specify the entry point to mount tables
> -----------------------------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-18802
> URL: https://issues.apache.org/jira/browse/HADOOP-18802
> Project: Hadoop Common
> Issue Type: Bug
> Components: common, conf, fs
> Reporter: ConfX
> Priority: Critical
> Attachments: reproduce.sh
>
>
> h2. What happened:
> When fs.viewfs.mounttable.default.name.key is manually set to default (the same as its default value) in hadoop-common, the test
> org.apache.hadoop.fs.viewfs.TestFcMainOperationsLocalFs#testGlobStatusWithMultipleWildCardMatches
> fails. The test passes if this parameter is not explicitly set in the configuration file.
> h2. Where's the bug:
> In the constructor of InodeTree, the tree reads every mount table entry point the user has set in the configuration and processes them one by one:
> {code:java}
> for (Entry<String, String> si : config) {
>   final String key = si.getKey();
>   if (!key.startsWith(mountTablePrefix)) {
>     continue;
>   }
>
>   gotMountTableEntry = true;
>   LinkType linkType;
>   String src = key.substring(mountTablePrefix.length());
>   ...
> {code}
> Here mountTablePrefix is "fs.viewfs.mounttable.default.". However, the configuration key users set to name the default mount table,
> fs.viewfs.mounttable.default.name.key, also starts with that prefix. So if a user explicitly sets the default mount table name and
> then uses InodeTree, the trailing "name.key" is mistakenly parsed as the entry point of a mount table, and InodeTree throws an
> exception because name.key is not a valid entry.
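> To make the collision concrete, here is a minimal, self-contained sketch (illustrative only, not the InodeTree code itself) of the prefix match that misfires:
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
>
> // Illustrative sketch of the prefix collision; not taken from InodeTree.
> public class PrefixCollisionDemo {
>   public static void main(String[] args) {
>     String mountTablePrefix = "fs.viewfs.mounttable.default.";
>     Map<String, String> config = new HashMap<>();
>     config.put("fs.viewfs.mounttable.default.name.key", "default");
>     for (Map.Entry<String, String> si : config.entrySet()) {
>       String key = si.getKey();
>       if (!key.startsWith(mountTablePrefix)) {
>         continue;
>       }
>       // src becomes "name.key", which InodeTree then rejects as an invalid mount table entry.
>       String src = key.substring(mountTablePrefix.length());
>       System.out.println("Treated as mount table entry: " + src);
>     }
>   }
> }
> {code}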
> h2. Stack trace:
> {code:java}
> java.lang.RuntimeException: java.io.IOException: ViewFs: Cannot initialize: Invalid entry in Mount table in config: name.key
>     at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:470)
>     at org.apache.hadoop.fs.viewfs.ViewFsTestSetup.setupForViewFsLocalFs(ViewFsTestSetup.java:88)
>     at org.apache.hadoop.fs.viewfs.TestFcMainOperationsLocalFs.setUp(TestFcMainOperationsLocalFs.java:38){code}
> h2. How to reproduce:
> (1) Set fs.viewfs.mounttable.default.name.key to default
> (2) Run the test org.apache.hadoop.fs.viewfs.TestFcMainOperationsLocalFs#testGlobStatusWithMultipleWildCardMatches
> You can use the attached reproduce.sh to reproduce the bug easily; a code sketch of the same steps follows below.
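> For reference, a minimal programmatic sketch of the reproduction (the /data mount link and its file:///tmp/data target are illustrative, not taken from the test):
> {code:java}
> import java.net.URI;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileContext;
> import org.apache.hadoop.fs.FsConstants;
> import org.apache.hadoop.fs.viewfs.ConfigUtil;
>
> // Minimal reproduction sketch; mount link path and target URI are illustrative.
> public class ReproduceNameKeyCollision {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     // Explicitly set the default mount table name to its default value.
>     conf.set("fs.viewfs.mounttable.default.name.key", "default");
>     // Add one ordinary mount link so viewfs has something to resolve.
>     ConfigUtil.addLink(conf, "/data", new URI("file:///tmp/data"));
>     // Initializing viewfs now fails with:
>     // ViewFs: Cannot initialize: Invalid entry in Mount table in config: name.key
>     FileContext.getFileContext(FsConstants.VIEWFS_URI, conf);
>   }
> }
> {code}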
> We are happy to provide a patch if this issue is confirmed.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)