We know that HDFS employs a single-writer, multiple-reader model, which
means that only one process can write to a file at a time, while multiple
readers can work in parallel, and new readers can even observe content
that is still being written. The reason for this design is to simplify
concurrency control. But is it necessary to support reading during
writing? Can anyone bring up some use cases? And why not just lock the
whole file, as POSIX file systems do (in terms of locking granularity)?
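For context, here is a minimal sketch of the read-during-write behavior I
mean, using the public FSDataOutputStream.hflush() call (assuming a running
HDFS cluster; the path and the "event" payload are made up for illustration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TailWhileWriting {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path path = new Path("/tmp/live-log.txt"); // hypothetical path

            // Single writer: the NameNode grants this client a lease on
            // the file, so no second writer can open it concurrently.
            try (FSDataOutputStream out = fs.create(path)) {
                out.writeBytes("event 1\n");
                // hflush() pushes buffered data out to the datanodes so
                // that a concurrent reader opening the file now can see
                // "event 1", even though the file is still being written.
                out.hflush();
            }
        }
    }

A tailing consumer (e.g. a log collector) opening the same path from another
process would see the flushed bytes, which is the kind of use case I am
asking about.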
