LiJie20190102 commented on PR #14042:
URL: 
https://github.com/apache/dolphinscheduler/pull/14042#issuecomment-1543285917

   @Radeity 
   --“First, it's ineffective when users do not use DS api to upload and 
download resources.”
   If many executing tasks are reading (i.e. downloading) the same resource 
from DS at the same time, and the resource is then uploaded again through the 
DS web interface, problems can occur. I think this situation is very common.
   
   --“Second, i think this error happens because when executing 
fs.copyFromLocalFile, resource will be overwritten, old block will be set 
invalid or removed, and then throw ReplicaNotFoundException. For HDFS, maybe we 
can try to download again when handling that exception.”
   If many scheduled tasks are reading the same resource file while the file 
is being updated, I would like each task to obtain either the new or the old 
version of the resource. At the very least, a task should not fail with an 
exception claiming the file itself is broken.
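   For HDFS, the retry idea could be sketched roughly like this. This is only 
an illustration: the real code would catch Hadoop's 
`ReplicaNotFoundException` around `fs.copyToLocalFile(...)`; here a plain 
`IOException` stands in so the sketch is self-contained, and all names are 
hypothetical.

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class RetryingDownload {

    // Retry the download a few times when the replica went stale because the
    // file was just overwritten. maxAttempts must be >= 1.
    static <T> T withRetries(Callable<T> download, int maxAttempts) throws Exception {
        IOException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return download.call();
            } catch (IOException e) {
                // stale replica: the resource was overwritten mid-read, try again
                last = e;
            }
        }
        throw last;
    }
}
```

A worker would wrap its `fs.copyToLocalFile(...)` call in `withRetries`, so a 
re-upload racing with the download degrades to a short retry instead of a 
task failure.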
   
   BTW, I have currently only implemented the distributed lock for HDFS. If 
other resource storages do not have this issue, the resource lock can be 
ignored for them.
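   The intended lock semantics could be sketched as below. This is a 
single-JVM analogue using `ReentrantReadWriteLock`, not the actual PR code: a 
real distributed version would back the same read/write semantics with 
ZooKeeper or an HDFS lock file, keyed by the resource path. All class and 
method names here are illustrative.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Downloads take a shared (read) lock so many tasks can read concurrently;
// an upload takes the exclusive (write) lock and waits for in-flight
// downloads to drain, so readers always see a complete old or new file.
public class ResourceLocks {

    private static final ConcurrentMap<String, ReentrantReadWriteLock> LOCKS =
            new ConcurrentHashMap<>();

    static ReentrantReadWriteLock forResource(String resourcePath) {
        return LOCKS.computeIfAbsent(resourcePath, p -> new ReentrantReadWriteLock());
    }

    static void download(String resourcePath, Runnable copyToLocal) {
        ReentrantReadWriteLock lock = forResource(resourcePath);
        lock.readLock().lock();          // many tasks may download concurrently
        try {
            copyToLocal.run();           // e.g. fs.copyToLocalFile(...)
        } finally {
            lock.readLock().unlock();
        }
    }

    static void upload(String resourcePath, Runnable copyFromLocal) {
        ReentrantReadWriteLock lock = forResource(resourcePath);
        lock.writeLock().lock();         // blocks until all downloads finish
        try {
            copyFromLocal.run();         // e.g. fs.copyFromLocalFile(...)
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```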
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
