[
https://issues.apache.org/jira/browse/YARN-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15158680#comment-15158680
]
Steve Loughran commented on YARN-4705:
--------------------------------------
That's what confuses me. After a scan hits an empty file or a failed parse, is
the file loaded again on the next scan round, or is it removed from the scan
list? Really, a failure to parse the JSON and an empty file should be treated
the same: try again later if the file size has increased (I found that to be a
better metric when dealing with cached filesystem data in the Spark history
server). If the FS is buffering after a flush, then it may only save the last
block, so the JSON won't parse fully at first, which is why that path needs to
retry too.
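
A minimal sketch of that size-gated retry, assuming a hypothetical scanner
helper and illustrative names throughout; this is not the actual ATS parse
pipeline code:

{code:java}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SizeGatedRescan {
  // Last observed length per file; the field name is illustrative only.
  private final Map<Path, Long> lastSeenLength = new HashMap<>();

  /**
   * Decide whether a previously empty or unparseable file is worth another
   * parse attempt: only retry once its length has grown since the last scan.
   */
  public boolean shouldRetry(FileSystem fs, Path path) throws IOException {
    FileStatus status = fs.getFileStatus(path);
    long current = status.getLen();
    Long previous = lastSeenLength.put(path, current);
    // First sighting, or the file grew: a retry may now succeed.
    return previous == null || current > previous;
  }
}
{code}

Keying the retry on observed length growth means a file that stays at zero
bytes isn't re-parsed on every scan round.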
> ATS 1.5 parse pipeline to consider handling open() events recoverably
> ---------------------------------------------------------------------
>
> Key: YARN-4705
> URL: https://issues.apache.org/jira/browse/YARN-4705
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: timelineserver
> Affects Versions: 2.8.0
> Reporter: Steve Loughran
> Priority: Minor
>
> During one of my own timeline test runs, I've been seeing a stack trace
> warning that the CRC check failed in {{FileSystem.open()}}; something the FS
> was ignoring.
> Even though it's swallowed (and probably not the cause of my test failure),
> looking at the code in {{LogInfo.parsePath()}} shows that it considers a
> failure to open a file as unrecoverable.
> On some filesystems this may not be the case, e.g. a file that is open for
> writing may not be available for reading; checksums may be a similar issue.
> Perhaps a failure at {{open()}} should be viewed as recoverable while the app
> is still running? (A sketch of that behaviour follows below.)
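
A minimal sketch of the proposed handling, assuming a hypothetical
{{ParseResult}} outcome type and an {{appStillRunning}} flag supplied by the
caller; none of these names exist in the ATS code:

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RecoverableOpen {
  /** Outcome of an open/parse attempt; illustrative, not the ATS API. */
  public enum ParseResult { PARSED, RETRY_LATER, FAILED }

  public ParseResult tryParse(FileSystem fs, Path path,
      boolean appStillRunning) {
    try (FSDataInputStream in = fs.open(path)) {
      // ... hand the stream to the JSON parser here ...
      return ParseResult.PARSED;
    } catch (IOException e) {
      // While the app is running the file may still be mid-write (or its
      // checksum data may lag the content), so treat the failure as
      // transient and let the next scan round try again.
      return appStillRunning ? ParseResult.RETRY_LATER : ParseResult.FAILED;
    }
  }
}
{code}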
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)