[ 
https://issues.apache.org/jira/browse/HDFS-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16396618#comment-16396618
 ] 

Yiqun Lin edited comment on HDFS-12773 at 3/13/18 9:03 AM:
-----------------------------------------------------------

Hi [~elgoiri], thanks for your comments; they all make sense to me.
{quote}In this approach, we use temporary files for the writes and renaming 
"atomically" to the final value.
 All the implementation is done in the StateStoreFileBaseImpl which instead of 
writing to the final location, it will go to a temporary one and then rename to 
the final destination.
{quote}
Why not document this in the javadoc of {{StateStoreFileBaseImpl}}? That would 
help users understand the design.
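As a side note for that javadoc, the temp-file-then-rename pattern could be illustrated there as well. A minimal, self-contained sketch of the idea on a local file system (class and method names are hypothetical, and this uses {{java.nio.file}} rather than the Hadoop API):
{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicWriteSketch {
  // Write data to a temporary file in the same directory, then rename it
  // over the final destination. Readers never observe a partial record.
  public static void writeAtomically(Path finalPath, byte[] data) throws IOException {
    Path tmp = finalPath.resolveSibling(finalPath.getFileName() + ".tmp");
    Files.write(tmp, data);
    // ATOMIC_MOVE replaces the destination in a single step where the
    // underlying file system supports it.
    Files.move(tmp, finalPath, StandardCopyOption.ATOMIC_MOVE,
        StandardCopyOption.REPLACE_EXISTING);
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("statestore");
    Path record = dir.resolve("record");
    writeAtomically(record, "v1".getBytes(StandardCharsets.UTF_8));
    writeAtomically(record, "v2".getBytes(StandardCharsets.UTF_8));
    System.out.println(new String(Files.readAllBytes(record), StandardCharsets.UTF_8));
  }
}
{code}
On POSIX file systems the rename replaces the destination in one step, so a concurrent reader sees either the old record or the new one, never a half-written file.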
{quote}It could be fixed by checking monotonicNow() in testTempOld() but that's 
not correct because multiple Routers need to check for that timestamp.
 I'm reverting that one back to now().
{quote}
A better way would be to update {{now()}} to {{monotonicNow()}} in both 
{{StateStoreFileBaseImpl#putAll}} and 
{{StateStoreFileBaseImpl#isOldTempRecord}}, instead of reverting that one.
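For illustration only (the expiry window and names below are hypothetical, not the patch's actual values): a monotonic clock such as Hadoop's {{Time.monotonicNow()}}, which is backed by {{System.nanoTime()}}, cannot be skewed by wall-clock adjustments, assuming the writer and the checker run on the same clock source:
{code:java}
public class TempRecordExpiry {
  // Hypothetical expiry window for stale temp records.
  static final long TMP_EXPIRY_MS = 10 * 60 * 1000;

  // Monotonic milliseconds, as Hadoop's Time.monotonicNow() computes them.
  static long monotonicNow() {
    return System.nanoTime() / 1_000_000L;
  }

  // A temp record is stale once its age exceeds the expiry window.
  static boolean isOldTempRecord(long createdMonotonicMs) {
    return monotonicNow() - createdMonotonicMs > TMP_EXPIRY_MS;
  }

  public static void main(String[] args) {
    long created = monotonicNow();
    System.out.println(isOldTempRecord(created));                     // fresh record
    System.out.println(isOldTempRecord(created - TMP_EXPIRY_MS - 1)); // stale record
  }
}
{code}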

In *StateStoreFileSystemImpl.java*,
{code:java}
  protected boolean rename(String src, String dst) {
    try {
      if (fs instanceof DistributedFileSystem) {
        ...
      } else {
        // Replace should be atomic but not available
        if (fs.exists(new Path(dst))) {
          fs.delete(new Path(dst), false);
        }
        return fs.rename(new Path(src), new Path(dst));
      }
{code}
It would be better to pass {{true}} as the second parameter of the delete 
operation, meaning all files under the given directory are deleted recursively. 
The same fix applies to {{StateStoreFileSystemImpl#remove}}.
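For readers unfamiliar with the flag, {{fs.delete(path, true)}} removes the path and everything beneath it. The equivalent behaviour on a local file system can be sketched as follows (the helper name is hypothetical, using {{java.nio.file}} rather than the Hadoop API):
{code:java}
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class RecursiveDeleteSketch {
  // Delete root and all of its children, deepest entries first,
  // mirroring the semantics of FileSystem#delete(path, true).
  static void deleteRecursively(Path root) throws IOException {
    if (!Files.exists(root)) {
      return;
    }
    try (Stream<Path> walk = Files.walk(root)) {
      walk.sorted(Comparator.reverseOrder()).forEach(p -> {
        try {
          Files.delete(p);
        } catch (IOException e) {
          throw new UncheckedIOException(e);
        }
      });
    }
  }
}
{code}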



> RBF: Improve State Store FS implementation
> ------------------------------------------
>
>                 Key: HDFS-12773
>                 URL: https://issues.apache.org/jira/browse/HDFS-12773
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Íñigo Goiri
>            Assignee: Íñigo Goiri
>            Priority: Major
>         Attachments: HDFS-12773.000.patch, HDFS-12773.001.patch, 
> HDFS-12773.002.patch, HDFS-12773.003.patch, HDFS-12773.004.patch, 
> HDFS-12773.005.patch, HDFS-12773.006.patch
>
>
> HDFS-10630 introduced a filesystem implementation of the State Store for unit 
> tests. However, this implementation doesn't handle multiple writers 
> concurrently.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
