ChenSammi commented on code in PR #7221:
URL: https://github.com/apache/ozone/pull/7221#discussion_r1767913691
##########
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/MultipartInputStream.java:
##########
@@ -173,6 +178,28 @@ public synchronized void seek(long pos) throws IOException
{
prevPartIndex = partIndex;
}
+ public synchronized void initialize() throws IOException {
Review Comment:
@jojochuang , the problem we are now facing is how to make sure a reader of
an open file will succeed. Two things were left open the last time I
investigated this issue:
a. How do we handle the case where the writer of the open file is slower
than the reader? Take block A: its recorded length in OM is 10 bytes. When
the reader starts, it fetches block A's length from the DN, which is 80
bytes; later the writer writes more data and the length grows to 90 bytes.
Should the reader refetch the block length from the DN again in this case?
b. What if a new block has been allocated by the writer?
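To make scenario (a) concrete, here is a minimal, hypothetical Java sketch (not Ozone code; `refetchLengthFromDn`, the cached-length field, and the simulated DN length are all invented for illustration) of a reader that refetches the block length from the DN only when it reaches the end of its cached view, so a slower writer's appended bytes become visible:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of scenario (a): the reader caches a block length,
// hits the apparent EOF, and refetches from the "DN" before giving up.
public class RefetchOnEofSketch {
  // Simulated DN-side block length (grows as the writer appends).
  static final AtomicLong dnBlockLength = new AtomicLong(80);

  // Reader-side cached length from the initial DN fetch, and read position.
  static long cachedLength = 80;
  static long pos = 0;

  // Hypothetical helper standing in for a real DN length query.
  static long refetchLengthFromDn() {
    return dnBlockLength.get();
  }

  // Bytes still readable; refetch the length only on apparent EOF.
  static long available() {
    if (pos >= cachedLength) {
      long latest = refetchLengthFromDn();
      if (latest > cachedLength) {
        cachedLength = latest;  // writer appended: extend the reader's view
      }
    }
    return cachedLength - pos;
  }

  public static void main(String[] args) {
    pos = 80;                        // reader has consumed all known bytes
    System.out.println(available()); // no new data yet
    dnBlockLength.set(90);           // writer appends 10 more bytes
    System.out.println(available()); // refetch reveals the appended bytes
  }
}
```

The open design question in the comment is exactly when to trigger that refetch (on every apparent EOF, on a timer, or only once), and scenario (b) is not covered here since a new block also requires refreshing the block list, not just one block's length.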
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]