[https://issues.apache.org/jira/browse/HADOOP-16644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16946781#comment-16946781]
Steve Loughran commented on HADOOP-16644:
-----------------------------------------
{code}
2019-10-08 11:50:29,774 [IPC Server handler 4 on 55983] WARN localizer.ResourceLocalizationService (ResourceLocalizationService.java:processHeartbeat(1150)) - { s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst, 1570531828143, FILE, null } failed: Resource s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst changed on src filesystem (expected 1570531828143, was 1570531828000
java.io.IOException: Resource s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst changed on src filesystem (expected 1570531828143, was 1570531828000
    at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:273)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:67)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:414)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:411)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:411)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:248)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:241)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:229)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
{code}
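The exception comes from the timestamp comparison at the top of the trace, FSDownload.verifyAndCopy: the localizer records the resource's modification time at submission, then re-checks it on the NodeManager before copying. The sketch below is a hypothetical simplification of that check, not the actual Hadoop source; the class name TimestampCheck and method verifyTimestamp are illustrative only. Note that the two values in the log differ only in the millisecond component.

```java
import java.io.IOException;

// Illustrative simplification of the timestamp check performed by
// org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy. Class and method
// names here are made up for the sketch.
public class TimestampCheck {

    /** Fail if the file's current modification time differs from the
     *  timestamp recorded when the resource request was submitted. */
    static void verifyTimestamp(String path, long expected, long actual)
            throws IOException {
        if (actual != expected) {
            throw new IOException("Resource " + path
                + " changed on src filesystem (expected " + expected
                + ", was " + actual + ")");
        }
    }

    public static void main(String[] args) {
        try {
            // The values from the log above: 1570531828143 recorded at submit
            // time, 1570531828000 returned by the later getFileStatus probe.
            verifyTimestamp(
                "s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst",
                1570531828143L, 1570531828000L);
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Under the hypothesis below, the expected value came from the S3Guard listing while the later probe returned a different time, so the strict equality check fails even though the file content is unchanged.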
> Intermittent failure of ITestS3ATerasortOnS3A: timestamp differences
> --------------------------------------------------------------------
>
> Key: HADOOP-16644
> URL: https://issues.apache.org/jira/browse/HADOOP-16644
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3, test
> Affects Versions: 3.3.0
> Environment: -Dparallel-tests -DtestsThreadCount=8
> -Dfailsafe.runOrder=balanced -Ds3guard -Ddynamo -Dscale
> h2. Hypothesis:
> The timestamp of the source file is being picked up from S3Guard, but when
> the NM does a getFileStatus call, a HEAD check is made - and this (due to the
> overloaded test system) is out of sync with the listing. S3Guard is updated,
> the corrected date is returned, and the localisation fails.
> Reporter: Steve Loughran
> Priority: Major
>
> Terasort with the directory committer is failing in resource localisation -
> the _partition.lst file has a different timestamp from that expected.
> Happens under loaded integration tests (threads = 8; not standalone);
> non-auth S3Guard.
> {code}
> 2019-10-08 11:50:29,774 [IPC Server handler 4 on 55983] WARN localizer.ResourceLocalizationService (ResourceLocalizationService.java:processHeartbeat(1150)) - { s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst, 1570531828143, FILE, null } failed: Resource s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst changed on src filesystem (expected 1570531828143, was 1570531828000
> java.io.IOException: Resource s3a://hwdev-steve-ireland-new/terasort-directory/sortout/_partition.lst changed on src filesystem (expected 1570531828143, was 1570531828000
> {code}
--
This message was sent by Atlassian Jira
(v8.3.4#803005)