LuciferYang commented on code in PR #49531:
URL: https://github.com/apache/spark/pull/49531#discussion_r1924639668
##########
core/src/test/scala/org/apache/spark/network/netty/NettyBlockTransferServiceSuite.scala:
##########
@@ -130,6 +132,62 @@ class NettyBlockTransferServiceSuite
assert(hitExecutorDeadException)
}
+ test("SPARK-50853 - example of simple download file writable channel not
being closed") {
Review Comment:
Personally, I think this test case is a bit heavy. According to the PR description, perhaps the new case only needs to verify that after calling `closeAndRead()` on `SimpleDownloadWritableChannel`, `isOpen()` returns false (before this PR, that assertion would fail).
For example, maybe we could add a very simple case in the network-shuffle module:
```java
File tempFile = File.createTempFile("test", ".tmp");
tempFile.deleteOnExit();
Map<String, String> confMap = new HashMap<>();
// Perhaps some additional configuration options need to be put into the confMap.
TransportConf shuffle = new TransportConf("shuffle", new MapConfigProvider(confMap));
DownloadFile downloadFile = null;
try {
  downloadFile = new SimpleDownloadFile(tempFile, shuffle);
  DownloadFileWritableChannel channel = downloadFile.openForWriting();
  // ...
  // ... Perhaps some other operations are needed.
  channel.closeAndRead();
  Assertions.assertFalse(channel.isOpen(), "Channel should be closed after closeAndRead.");
  // ... Perhaps some additional assertions need to be added.
} finally {
  if (downloadFile != null) {
    downloadFile.delete();
  }
}
```
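
To make the `// ...` placeholders above a bit more concrete, here is a hedged sketch of what the body of the `try` block could look like: it writes a small payload through the channel (`DownloadFileWritableChannel` extends `WritableByteChannel`) before `closeAndRead()` and also checks the returned `ManagedBuffer`. The payload contents and the extra size assertion are only illustrative assumptions, not something the PR requires:
```java
// Sketch only: these lines would sit inside the try block above, replacing the
// `// ...` placeholders. Additional imports assumed: java.nio.ByteBuffer,
// java.nio.charset.StandardCharsets, org.apache.spark.network.buffer.ManagedBuffer.
byte[] payload = "spark-50853".getBytes(StandardCharsets.UTF_8);

// DownloadFileWritableChannel extends WritableByteChannel, so write() is available.
channel.write(ByteBuffer.wrap(payload));

// closeAndRead() should close the underlying channel and hand back a ManagedBuffer
// over the data that was written to the file.
ManagedBuffer buffer = channel.closeAndRead();

// The core assertion for SPARK-50853: the channel must report closed afterwards.
Assertions.assertFalse(channel.isOpen(), "Channel should be closed after closeAndRead.");

// Illustrative extra check: the returned buffer covers everything that was written.
Assertions.assertEquals((long) payload.length, buffer.size());
```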
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]