[jira] [Created] (HDFS-11846) Ozone: Potential http connection leaks in ozone clients
Weiwei Yang created HDFS-11846: -- Summary: Ozone: Potential http connection leaks in ozone clients Key: HDFS-11846 URL: https://issues.apache.org/jira/browse/HDFS-11846 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Reporter: Weiwei Yang Assignee: Weiwei Yang Some http clients in {{OzoneVolume}}, {{OzoneBucket}} and {{OzoneClient}} are not properly closed, which causes resource leaks. The purpose of this jira is to fix these issues and investigate whether we can reuse some of the http connections for better performance. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
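The close-on-exit pattern such a fix would likely apply can be sketched as follows. This is a minimal illustration only: `FakeHttpClient` is a hypothetical stand-in for the Ozone http client, not the actual HDFS-11846 patch.

```java
import java.io.Closeable;
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the Ozone http client; it tracks open
// instances so the leak is observable in this sketch.
class FakeHttpClient implements Closeable {
    static final List<FakeHttpClient> open = new ArrayList<>();
    FakeHttpClient() { open.add(this); }
    String get(String path) { return "ok:" + path; }
    @Override public void close() { open.remove(this); }
}

public class LeakFixSketch {
    // Leaky variant: the client is never closed, so it stays registered.
    static String fetchLeaky(String path) {
        return new FakeHttpClient().get(path);
    }

    // Fixed variant: try-with-resources guarantees close() on every exit
    // path, including exceptions.
    static String fetchFixed(String path) {
        try (FakeHttpClient c = new FakeHttpClient()) {
            return c.get(path);
        }
    }
}
```

Reusing a single long-lived client across calls (the second part of the jira) would move construction out of the per-request path entirely, at the cost of having to manage its lifecycle in one place.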
[jira] [Reopened] (HDFS-2538) option to disable fsck dots
[ https://issues.apache.org/jira/browse/HDFS-2538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Inigo Goiri reopened HDFS-2538: --- Backporting to 2.7.4. > option to disable fsck dots > > > Key: HDFS-2538 > URL: https://issues.apache.org/jira/browse/HDFS-2538 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 2.2.0 >Reporter: Allen Wittenauer >Assignee: Mohammad Kamrul Islam >Priority: Minor > Labels: newbie, release-blocker > Fix For: 3.0.0-alpha1 > > Attachments: HDFS-2538.1.patch, HDFS-2538.2.patch, HDFS-2538.3.patch, > HDFS-2538-branch-0.20-security-204.patch, > HDFS-2538-branch-0.20-security-204.patch, HDFS-2538-branch-1.0.patch > > > this patch turns the dots during fsck off by default and provides an option > to turn them back on if you have a fetish for millions and millions of dots > on your terminal. i haven't done any benchmarks, but i suspect fsck is now > 300% faster to boot.
Re: Jenkins test failure
Thanks a lot Akira! Good catch! --Yongjun On Wed, May 17, 2017 at 3:46 PM, Akira Ajisaka wrote: > Hi Yongjun, > > Jenkins selects the latest attachment for precommit job regardless of the > type of the attachment. > > The workaround is to attach the patch again. > > Regards, > Akira > > On 2017/05/17 18:38, Yongjun Zhang wrote: > >> Hi, >> >> I saw quite a few jenkins test failures for patches uploaded to jira. >> >> For example, >> https://builds.apache.org/job/PreCommit-HADOOP-Build/12347/console >> >> apache-yetus-2971eff/yetus-project/pom.xml >> Modes: Sentinel MultiJDK Jenkins Robot Docker ResetRepo UnitTests >> Processing: HADOOP-14407 >> HADOOP-14407 patch is being downloaded at Wed May 17 19:38:33 UTC 2017 >> from >> https://issues.apache.org/jira/secure/attachment/12868556/ >> TotalTime-vs-CopyBufferSize.jpg >> -> Downloaded >> ERROR: Unsure how to process HADOOP-14407. >> >> >> Wonder if anyone can help? >> >> >> Thanks. >> >> >> --Yongjun
Re: Jenkins test failure
Hi Yongjun, Jenkins selects the latest attachment for the precommit job regardless of the type of the attachment. The workaround is to attach the patch again. Regards, Akira On 2017/05/17 18:38, Yongjun Zhang wrote: Hi, I saw quite a few jenkins test failures for patches uploaded to jira. For example, https://builds.apache.org/job/PreCommit-HADOOP-Build/12347/console apache-yetus-2971eff/yetus-project/pom.xml Modes: Sentinel MultiJDK Jenkins Robot Docker ResetRepo UnitTests Processing: HADOOP-14407 HADOOP-14407 patch is being downloaded at Wed May 17 19:38:33 UTC 2017 from https://issues.apache.org/jira/secure/attachment/12868556/TotalTime-vs-CopyBufferSize.jpg -> Downloaded ERROR: Unsure how to process HADOOP-14407. Wonder if anyone can help? Thanks. --Yongjun
Jenkins test failure
Hi, I saw quite a few jenkins test failures for patches uploaded to jira. For example, https://builds.apache.org/job/PreCommit-HADOOP-Build/12347/console apache-yetus-2971eff/yetus-project/pom.xml Modes: Sentinel MultiJDK Jenkins Robot Docker ResetRepo UnitTests Processing: HADOOP-14407 HADOOP-14407 patch is being downloaded at Wed May 17 19:38:33 UTC 2017 from https://issues.apache.org/jira/secure/attachment/12868556/TotalTime-vs-CopyBufferSize.jpg -> Downloaded ERROR: Unsure how to process HADOOP-14407. Wonder if anyone can help? Thanks. --Yongjun
[jira] [Created] (HDFS-11845) Ozone: Output error when DN handshakes with SCM
Weiwei Yang created HDFS-11845: -- Summary: Ozone: Output error when DN handshakes with SCM Key: HDFS-11845 URL: https://issues.apache.org/jira/browse/HDFS-11845 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Reporter: Weiwei Yang Assignee: Weiwei Yang Priority: Minor When starting SCM and DN, there is always an error in the SCM log {noformat} 17/05/17 15:19:59 WARN ipc.Server: IPC Server handler 9 on 9861, call Call#4 Retry#0 org.apache.hadoop.ozone.protocol.StorageContainerDatanodeProtocol.getVersion from 172.16.165.133:44824: output error 17/05/17 15:19:59 INFO ipc.Server: IPC Server handler 9 on 9861 caught an exception java.nio.channels.ClosedChannelException at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:461) at org.apache.hadoop.ipc.Server.channelWrite(Server.java:3216) at org.apache.hadoop.ipc.Server.access$1600(Server.java:135) at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:1463) at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1533) at org.apache.hadoop.ipc.Server$Connection.sendResponse(Server.java:2581) at org.apache.hadoop.ipc.Server$Connection.access$300(Server.java:1605) at org.apache.hadoop.ipc.Server$RpcCall.doResponse(Server.java:931) at org.apache.hadoop.ipc.Server$Call.sendResponse(Server.java:765) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:813) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2659) {noformat}
[jira] [Created] (HDFS-11844) Ozone: Recover SCM state when SCM is restarted
Weiwei Yang created HDFS-11844: -- Summary: Ozone: Recover SCM state when SCM is restarted Key: HDFS-11844 URL: https://issues.apache.org/jira/browse/HDFS-11844 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone, scm Reporter: Weiwei Yang Assignee: Weiwei Yang SCM loses its state when it is restarted. A simple test can be done with the following steps # Start NN, DN, SCM # Create several containers via SCM CLI # Restart SCM # Get existing container state via SCM CLI; this step will fail with a "container doesn't exist" error. {{ContainerManagerImpl}} maintains a cache of container mappings, {{containerMap}}; if SCM is restarted, this information is lost. We need a way to restore the state from the DB in a background thread.
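The recovery described above could be sketched as follows. This is a hypothetical illustration of the idea only; the names (`ContainerStateRecovery`, `ContainerStore`, `recoverAsync`) are invented for the sketch and are not the actual {{ContainerManagerImpl}} code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: on SCM restart, repopulate the in-memory container
// map from the persisted store in a background thread so startup is not
// blocked by a full scan of the DB.
public class ContainerStateRecovery {
    final Map<String, String> containerMap = new ConcurrentHashMap<>();

    // Stand-in for the on-disk container DB that survives a restart.
    interface ContainerStore {
        Map<String, String> scanAll();
    }

    Thread recoverAsync(ContainerStore store) {
        Thread t = new Thread(
                () -> containerMap.putAll(store.scanAll()), // restore mappings
                "scm-container-recovery");
        t.setDaemon(true); // must not keep the SCM process alive on shutdown
        t.start();
        return t;
    }
}
```

A real implementation would also need to answer queries arriving before recovery completes, e.g. by falling back to the DB on a cache miss until the scan finishes.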
[jira] [Created] (HDFS-11843) Ozone: XceiverClientRatis should implement XceiverClientSpi.connect()
Tsz Wo Nicholas Sze created HDFS-11843: -- Summary: Ozone: XceiverClientRatis should implement XceiverClientSpi.connect() Key: HDFS-11843 URL: https://issues.apache.org/jira/browse/HDFS-11843 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Reporter: Tsz Wo Nicholas Sze Assignee: Tsz Wo Nicholas Sze When a XceiverClientRatis object is newly created, it automatically connects to the server. This is not correct behavior. It should implement XceiverClientSpi.connect().
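The intended contract can be sketched like this: construction is cheap and side-effect free, and the connection is opened only when connect() is called explicitly. The class and its internals below are illustrative assumptions, not the actual Ratis client code.

```java
// Hypothetical sketch of the XceiverClientSpi.connect() contract: no
// network I/O in the constructor; connect() establishes the connection.
public class LazyXceiverClient {
    private boolean connected = false;

    public LazyXceiverClient() {
        // construction only records configuration; no connection here
    }

    public void connect() {
        if (connected) {
            throw new IllegalStateException("already connected");
        }
        // ... open the underlying RaftClient / network channel here ...
        connected = true;
    }

    public boolean isConnected() { return connected; }
}
```

Separating construction from connection lets callers create clients speculatively (e.g. in a cache or pool) without paying for, or failing on, a network round trip they may never use.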
[jira] [Created] (HDFS-11842) TestDataNodeOutlierDetectionViaMetrics UT fails
Yesha Vora created HDFS-11842: - Summary: TestDataNodeOutlierDetectionViaMetrics UT fails Key: HDFS-11842 URL: https://issues.apache.org/jira/browse/HDFS-11842 Project: Hadoop HDFS Issue Type: Bug Reporter: Yesha Vora TestDataNodeOutlierDetectionViaMetrics UT fails as below. {code} Failed tests: TestDataNodeOutlierDetectionViaMetrics.testOutlierIsDetected:86 Expected: is <1> but: was <0> Tests run: 300, Failures: 1, Errors: 0, Skipped: 0 [INFO] [INFO] Reactor Summary: [INFO] [INFO] Apache Hadoop Common ... SUCCESS [ 11.586 s] [INFO] Apache Hadoop HDFS . FAILURE [24:16 min] [INFO] [INFO] BUILD FAILURE [INFO] {code} {code} Error Message Expected: is <1> but: was <0> Stacktrace java.lang.AssertionError: Expected: is <1> but: was <0> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) at org.junit.Assert.assertThat(Assert.java:865) at org.junit.Assert.assertThat(Assert.java:832) at org.apache.hadoop.hdfs.server.datanode.metrics.TestDataNodeOutlierDetectionViaMetrics.testOutlierIsDetected(TestDataNodeOutlierDetectionViaMetrics.java:86) {code}
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/317/ [May 16, 2017 2:26:44 PM] (jlowe) YARN-6603. NPE in RMAppsBlock. Contributed by Jason Lowe [May 16, 2017 3:52:55 PM] (aajisaka) HDFS-11833. HDFS architecture documentation descibes outdated placement [May 16, 2017 4:28:46 PM] (kihwal) HDFS-11641. Reduce cost of audit logging by using FileStatus instead of [May 16, 2017 4:41:59 PM] (aajisaka) HDFS-11696. Fix warnings from Spotbugs in hadoop-hdfs. Contributed by [May 16, 2017 5:48:46 PM] (jianhe) YARN-6306. NMClient API change for container upgrade. Contributed by [May 16, 2017 6:22:32 PM] (liuml07) HADOOP-14416. Path starting with 'wasb:///' not resolved correctly while [May 17, 2017 12:52:17 AM] (rkanter) YARN-6535. Program needs to exit when SLS finishes. (yufeigu via [May 17, 2017 1:02:39 AM] (rkanter) YARN-6447. Provide container sandbox policies for groups (gphillips via [May 17, 2017 2:51:04 AM] (arp) HDFS-11827. NPE is thrown when log level changed in [May 17, 2017 2:57:45 AM] (rohithsharmaks) HADOOP-14412. HostsFileReader#getHostDetails is very expensive on large [May 17, 2017 11:35:41 AM] (aajisaka) HADOOP-14419. Remove findbugs report from docs profile. Contributed by [May 17, 2017 11:50:29 AM] (aajisaka) MAPREDUCE-6459. Native task crashes when merging spilled file on ppc64. 
-1 overall The following subsystems voted -1: compile mvninstall unit The following subsystems voted -1 but were configured to be filtered/ignored: cc javac The following subsystems are considered long running: (runtime bigger than 1h 0m 0s) unit Specific tests: Failed junit tests : hadoop.ha.TestZKFailoverControllerStress hadoop.test.TestLambdaTestUtils hadoop.hdfs.qjournal.server.TestJournalNode hadoop.hdfs.server.namenode.ha.TestBootstrapStandby hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure hadoop.hdfs.server.datanode.checker.TestThrottledAsyncCheckerTimeout hadoop.hdfs.web.TestWebHdfsTimeouts hadoop.hdfs.server.datanode.TestDataNodeUUID hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService hadoop.mapred.TestShuffleHandler hadoop.yarn.sls.TestSLSRunner hadoop.yarn.applications.distributedshell.TestDistributedShell hadoop.yarn.client.api.impl.TestNMClient hadoop.yarn.server.timeline.TestRollingLevelDB hadoop.yarn.server.timeline.TestTimelineDataManager hadoop.yarn.server.timeline.TestLeveldbTimelineStore hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore hadoop.yarn.server.TestDiskFailures hadoop.yarn.server.TestMiniYarnClusterNodeUtilization hadoop.yarn.server.TestContainerManagerSecurity hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore Timed out junit tests : 
org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore org.apache.hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA org.apache.hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStorePerf mvninstall: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/317/artifact/out/patch-mvninstall-root.txt [492K] compile: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/317/artifact/out/patch-compile-root.txt [20K] cc: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/317/artifact/out/patch-compile-root.txt [20K] javac: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/317/artifact/out/patch-compile-root.txt [20K] unit: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/317/artifact/out/patch-unit-hadoop-assemblies.txt [4.0K]
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/406/ [May 16, 2017 2:26:44 PM] (jlowe) YARN-6603. NPE in RMAppsBlock. Contributed by Jason Lowe [May 16, 2017 3:52:55 PM] (aajisaka) HDFS-11833. HDFS architecture documentation descibes outdated placement [May 16, 2017 4:28:46 PM] (kihwal) HDFS-11641. Reduce cost of audit logging by using FileStatus instead of [May 16, 2017 4:41:59 PM] (aajisaka) HDFS-11696. Fix warnings from Spotbugs in hadoop-hdfs. Contributed by [May 16, 2017 5:48:46 PM] (jianhe) YARN-6306. NMClient API change for container upgrade. Contributed by [May 16, 2017 6:22:32 PM] (liuml07) HADOOP-14416. Path starting with 'wasb:///' not resolved correctly while [May 17, 2017 12:52:17 AM] (rkanter) YARN-6535. Program needs to exit when SLS finishes. (yufeigu via [May 17, 2017 1:02:39 AM] (rkanter) YARN-6447. Provide container sandbox policies for groups (gphillips via [May 17, 2017 2:51:04 AM] (arp) HDFS-11827. NPE is thrown when log level changed in [May 17, 2017 2:57:45 AM] (rohithsharmaks) HADOOP-14412. 
HostsFileReader#getHostDetails is very expensive on large -1 overall The following subsystems voted -1: findbugs unit The following subsystems voted -1 but were configured to be filtered/ignored: cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace The following subsystems are considered long running: (runtime bigger than 1h 0m 0s) unit Specific tests: FindBugs : module:hadoop-common-project/hadoop-minikdc Possible null pointer dereference in org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called method Dereferenced at MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called method Dereferenced at MiniKdc.java:[line 368] FindBugs : module:hadoop-common-project/hadoop-auth org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest, HttpServletResponse) makes inefficient use of keySet iterator instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 192] FindBugs : module:hadoop-common-project/hadoop-common org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At CipherSuite.java:[line 44] org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) unconditionally sets the field unknownValue At CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] Possible null pointer dereference in org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of called method Dereferenced at FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of called method Dereferenced at FileUtil.java:[line 118] Possible null pointer dereference in org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, File, Path, File) due to return value of called method Dereferenced at RawLocalFileSystem.java:[line 387] Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) ignored, but method has no side effect At FTPFileSystem.java:but method has no side effect At FTPFileSystem.java:[line 421] Useless condition:lazyPersist == true at this point At CommandWithDestination.java:[line 502] org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) incorrectly handles double value At DoubleWritable.java: At DoubleWritable.java:[line 78] org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) incorrectly handles double value At DoubleWritable.java:[line 97] org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly handles float value At FloatWritable.java: At FloatWritable.java:[line 71] org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles float value At FloatWritable.java:int) incorrectly handles float value At FloatWritable.java:[line 89] Possible null pointer dereference in org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return value of called method Dereferenced at IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return value of called method Dereferenced at IOUtils.java:[line 350] org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient use of keySet iterator instead of entrySet iterator At ECSchema.java:keySet iterator instead of entrySet
[jira] [Created] (HDFS-11840) Log HDFS Mover exception message of exit to its own log
LiXin Ge created HDFS-11840: --- Summary: Log HDFS Mover exception message of exit to its own log Key: HDFS-11840 URL: https://issues.apache.org/jira/browse/HDFS-11840 Project: Hadoop HDFS Issue Type: Improvement Components: balancer & mover Affects Versions: 3.0.0-alpha2 Reporter: LiXin Ge Assignee: LiXin Ge Priority: Minor Fix For: 3.0.0-alpha2 Currently, the exception message explaining why the Mover exited is only logged to stderr. It is hard to figure out why the Mover was aborted if we lose the console output, so it would be much better if we also logged this message to the Mover log.
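The change amounts to logging the exit cause through the logger in addition to stderr. A minimal sketch, using java.util.logging as a stand-in for Hadoop's logging framework and an invented `run()` method (not the actual Mover code):

```java
import java.util.logging.Logger;

// Hypothetical sketch: on fatal errors, record the exit cause in the
// Mover's own log in addition to stderr, so the reason survives even if
// the console output is lost.
public class MoverExitLogging {
    private static final Logger LOG = Logger.getLogger("Mover");

    static int run() {
        try {
            throw new IllegalStateException("simulated mover failure");
        } catch (Exception e) {
            LOG.severe("Exiting Mover due to an exception: " + e);         // new: goes to the log
            System.err.println("Exiting Mover due to an exception: " + e); // existing stderr path
            return -1;
        }
    }
}
```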
[jira] [Created] (HDFS-11839) Backport HDFS-9726 to branch-2.7: Refactor IBR code to a new class
Vinitha Reddy Gankidi created HDFS-11839: Summary: Backport HDFS-9726 to branch-2.7: Refactor IBR code to a new class Key: HDFS-11839 URL: https://issues.apache.org/jira/browse/HDFS-11839 Project: Hadoop HDFS Issue Type: Improvement Reporter: Vinitha Reddy Gankidi Assignee: Vinitha Reddy Gankidi Priority: Minor As per the discussion in the [mailing list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser], backport HDFS-9726 to branch-2.7.
[jira] [Created] (HDFS-11838) Backport HDFS-7990 to branch-2.7: IBR delete ack should not be delayed
Vinitha Reddy Gankidi created HDFS-11838: Summary: Backport HDFS-7990 to branch-2.7: IBR delete ack should not be delayed Key: HDFS-11838 URL: https://issues.apache.org/jira/browse/HDFS-11838 Project: Hadoop HDFS Issue Type: Bug Reporter: Vinitha Reddy Gankidi Assignee: Vinitha Reddy Gankidi As per the discussion in the [mailing list|http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201705.mbox/browser], backport HDFS-7990 to branch-2.7.
[jira] [Created] (HDFS-11837) Backport HDFS-9710 to branch-2.7: Change DN to send block receipt IBRs in batches
Vinitha Reddy Gankidi created HDFS-11837: Summary: Backport HDFS-9710 to branch-2.7: Change DN to send block receipt IBRs in batches Key: HDFS-11837 URL: https://issues.apache.org/jira/browse/HDFS-11837 Project: Hadoop HDFS Issue Type: Improvement Reporter: Vinitha Reddy Gankidi Assignee: Vinitha Reddy Gankidi