[
https://issues.apache.org/jira/browse/HDFS-16953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17700931#comment-17700931
]
ASF GitHub Bot commented on HDFS-16953:
---------------------------------------
hadoop-yetus commented on PR #5482:
URL: https://github.com/apache/hadoop/pull/5482#issuecomment-1471189213
:broken_heart: **-1 overall**
| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 1m 29s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 42m 33s | | trunk passed |
| +1 :green_heart: | compile | 0m 43s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | checkstyle | 0m 29s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 42s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 48s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 0m 57s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | spotbugs | 1m 34s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 49s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 34s | | the patch passed |
| +1 :green_heart: | compile | 0m 37s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 0m 37s | | the patch passed |
| +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | javac | 0m 30s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 17s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 34s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 32s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 0m 50s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | spotbugs | 1m 21s | | the patch passed |
| +1 :green_heart: | shadedclient | 23m 39s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 21m 22s | | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. |
| | | 126m 5s | | |
| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5482/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5482 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 5883201056f8 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / b824fa7eb4ad7c6ffdfacbaba7551a4403b17855 |
| Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5482/1/testReport/ |
| Max. process+thread count | 2640 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5482/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
This message was automatically generated.
> RBF: Mount table store APIs should update cache only if state store record is
> successfully updated
> --------------------------------------------------------------------------------------------------
>
> Key: HDFS-16953
> URL: https://issues.apache.org/jira/browse/HDFS-16953
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
> Labels: pull-request-available
>
> The RBF mount table state store APIs addMountTableEntry, updateMountTableEntry
> and removeMountTableEntry perform a cache refresh for all routers regardless
> of the actual record update result. If the record fails to get updated on the
> ZooKeeper- or file-based store implementation, reloading the cache for all
> routers is unnecessary.
>
> For instance, two clients simultaneously adding the same mount point can lead
> to a failure for the second call if the first call has not yet added the new
> entry by the time the second call retrieves the mount table entries via
> getMountTableEntries before attempting addMountTableEntry.
> {code:java}
> DEBUG [{cluster}/{ip}:8111] ipc.Client - IPC Client (1826699684) connection
> to nn-0-{ns}.{cluster}/{ip}:8111 from {user}IPC Client (1826699684)
> connection to nn-0-{ns}.{cluster}/{ip}:8111 from {user} sending #1
> org.apache.hadoop.hdfs.protocolPB.RouterAdminProtocol.addMountTableEntry
> DEBUG [{cluster}/{ip}:8111 from {user}] ipc.Client - IPC Client (1826699684)
> connection to nn-0-{ns}.{cluster}/{ip}:8111 from {user} got value #1
> DEBUG [main] ipc.ProtobufRpcEngine2 - Call: addMountTableEntry took 24ms
> DEBUG [{cluster}/{ip}:8111 from {user}] ipc.Client - IPC Client (1826699684)
> connection to nn-0-{ns}.{cluster}/{ip}:8111 from {user}: closed
> DEBUG [{cluster}/{ip}:8111 from {user}] ipc.Client - IPC Client (1826699684)
> connection to nn-0-{ns}.{cluster}/{ip}:8111 from {user}: stopped, remaining
> connections 0
> TRACE [main] ipc.ProtobufRpcEngine2 - 1: Response <-
> nn-0-{ns}.{cluster}/{ip}:8111: addMountTableEntry {status: false}
> Cannot add mount point /data503 {code}
> The failure to write the new record:
> {code:java}
> INFO [IPC Server handler 0 on default port 8111]
> impl.StateStoreZooKeeperImpl - Cannot write record
> "/hdfs-federation/MountTable/0SLASH0data503", it already exists {code}
> Since the successful call has already refreshed the cache for all routers, the
> second call that failed should not refresh the cache for all routers again, as
> every router already holds the updated records in its cache.
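The change the description asks for can be illustrated with a minimal, self-contained sketch. All class, method and field names below are hypothetical (this is not the actual Hadoop RBF code); it only models the pattern of gating the cluster-wide cache reload on the state store's update status:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the fix: reload router caches only when the
// state store record update reports success.
public class MountTableCacheSketch {
    private final Set<String> stateStore = new HashSet<>();
    private int cacheRefreshCount = 0;  // counts cluster-wide cache reloads

    // Simulated state store write; like a ZooKeeper-backed store, it
    // fails when the record already exists.
    private boolean writeRecord(String src) {
        return stateStore.add(src);  // Set.add returns false on duplicates
    }

    public boolean addMountTableEntry(String src) {
        boolean status = writeRecord(src);
        if (status) {
            // Only a successful update warrants refreshing every router.
            cacheRefreshCount++;
        }
        return status;
    }

    public int getCacheRefreshCount() {
        return cacheRefreshCount;
    }

    public static void main(String[] args) {
        MountTableCacheSketch router = new MountTableCacheSketch();
        System.out.println(router.addMountTableEntry("/data503"));  // true
        System.out.println(router.addMountTableEntry("/data503"));  // false: duplicate
        System.out.println(router.getCacheRefreshCount());          // 1, not 2
    }
}
```

With this gating, the losing side of the concurrent-add race described above returns {status: false} without triggering a redundant refresh on every router.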
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]