[ https://issues.apache.org/jira/browse/HDFS-16984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17774420#comment-17774420 ]
ASF GitHub Bot commented on HDFS-16984:
---------------------------------------
hadoop-yetus commented on PR #6175:
URL: https://github.com/apache/hadoop/pull/6175#issuecomment-1759350531
:broken_heart: **-1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 47s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | buf | 0m 1s | | buf was not available. |
| +0 :ok: | buf | 0m 1s | | buf was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 48m 20s | | trunk passed |
| +1 :green_heart: | compile | 1m 25s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | compile | 1m 16s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 1m 10s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 25s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 8s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 35s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 3m 21s | | trunk passed |
| +1 :green_heart: | shadedclient | 40m 13s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 11s | | the patch passed |
| +1 :green_heart: | compile | 1m 16s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | cc | 1m 16s | | the patch passed |
| +1 :green_heart: | javac | 1m 16s | | the patch passed |
| +1 :green_heart: | compile | 1m 8s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | cc | 1m 8s | | the patch passed |
| +1 :green_heart: | javac | 1m 8s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 1s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6175/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 15 unchanged - 0 fixed = 16 total (was 15) |
| +1 :green_heart: | mvnsite | 1m 16s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 56s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 29s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 3m 20s | | the patch passed |
| +1 :green_heart: | shadedclient | 40m 51s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 237m 31s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6175/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. |
| +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. |
| | | 392m 7s | | |
| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6175/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6175 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint bufcompat |
| uname | Linux 3f2641282fce 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 321a01543db9c59c3b8385520c20d886b9b43aa0 |
| Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6175/1/testReport/ |
| Max. process+thread count | 2386 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6175/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
This message was automatically generated.
> Directory timestamp lost during the upgrade process
> ---------------------------------------------------
>
> Key: HDFS-16984
> URL: https://issues.apache.org/jira/browse/HDFS-16984
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Affects Versions: 2.10.2, 3.3.6
> Reporter: Ke Han
> Priority: Major
> Labels: pull-request-available
> Attachments: GUBIkxOc.tar.gz
>
>
> h1. Symptoms
> The access timestamp of a directory is lost after upgrading an HDFS cluster from 2.10.2 to 3.3.6.
> h1. Reproduce
> Start up a four-node HDFS cluster running version 2.10.2.
> Execute the following commands. (The client is started on the NameNode; we have minimized the command sequence needed to reproduce the issue.)
> {code:java}
> bin/hdfs dfs -mkdir /GUBIkxOc
> bin/hdfs dfs -put -f -p -d /tmp/upfuzz/hdfs/GUBIkxOc/bQfxf /GUBIkxOc/
> bin/hdfs dfs -mkdir /GUBIkxOc/sKbTRjvS{code}
> Perform a read in the old version (the -p flag in the put above preserves the source timestamps, which is why /GUBIkxOc/bQfxf shows a real access time):
> {code:java}
> bin/hdfs dfs -ls -t -r -u /GUBIkxOc/
> Found 2 items
> drwxr-xr-x - root supergroup 0 1970-01-01 00:00 /GUBIkxOc/sKbTRjvS
> drwxr-xr-x - 20001 998 0 2023-04-17 16:15
> /GUBIkxOc/bQfxf{code}
> Then perform a full-stop upgrade of the entire cluster to 3.3.6, following the upgrade procedure on the website: (1) enter safe mode, (2) run rolling upgrade prepare, (3) exit safe mode (see the command sketch after the output below). When all nodes have started up on the new version, perform the same read:
> {code:java}
> Found 2 items
> drwxr-xr-x - 20001 998 0 1970-01-01 00:00 /GUBIkxOc/bQfxf
> drwxr-xr-x - root supergroup 0 1970-01-01 00:00
> /GUBIkxOc/sKbTRjvS {code}
> The access timestamp of directory /GUBIkxOc/bQfxf is lost: it changes from 2023-04-17 16:15 to 1970-01-01 00:00.
> PS: The rolling upgrade prepare step must happen after the commands above have been executed.
> I have also attached the required input: +/tmp/upfuzz/hdfs/GUBIkxOc/bQfxf+ .
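> For reference, steps (1)-(3) above correspond to dfsadmin commands along the following lines. This is only a sketch of the safe mode / prepare sequence described in the text; the exact full-stop upgrade and restart procedure for 3.3.6 is not spelled out here.
> {code:java}
> bin/hdfs dfsadmin -safemode enter
> bin/hdfs dfsadmin -rollingUpgrade prepare
> bin/hdfs dfsadmin -safemode leave
> # then stop all daemons, install the 3.3.6 binaries, and restart the cluster{code}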
> h1. Root Cause
> When the FSImage is created, the access time field of a directory is not persisted.
> If users perform an upgrade without creating an FSImage, this bug does not occur, because the access time is still recorded in the edit log. However, once an FSImage is created, all edit logs written before that checkpoint are invalidated: when the new-version system starts up, it reconstructs the in-memory file system only from the FSImage and ignores those edit logs.
> We should make sure the access time of a directory is persisted properly, just as it is for files. I have submitted a PR with a fix.
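> Below is a minimal, illustrative Java sketch of the kind of change described: when a directory inode is written into the FSImage, copy its access time as well, not just its modification time. The class and method names here are hypothetical; they only mimic the serializer pattern and are not the actual Hadoop code or the submitted PR.
> {code:java}
> // Hypothetical, simplified sketch -- not the real FSImage serializer.
> // Read-only view of an in-memory directory inode (illustrative interface).
> interface DirectoryINodeView {
>   long getModificationTime();
>   long getAccessTime();
>   long getPermissionLong();
> }
>
> // Record written into the image for each directory (illustrative).
> final class DirectoryImageRecord {
>   long modificationTime;
>   long accessTime;   // the field the report says was never persisted for directories
>   long permission;
>
>   static DirectoryImageRecord fromDirectory(DirectoryINodeView dir) {
>     DirectoryImageRecord r = new DirectoryImageRecord();
>     r.modificationTime = dir.getModificationTime();
>     r.permission = dir.getPermissionLong();
>     // Sketch of the fix: persist the access time too, so it survives an
>     // FSImage checkpoint instead of living only in the (later discarded) edit log.
>     r.accessTime = dir.getAccessTime();
>     return r;
>   }
> }{code}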
--
This message was sent by Atlassian Jira
(v8.20.10#820010)