[ 
https://issues.apache.org/jira/browse/HADOOP-17414?focusedWorklogId=534255&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-534255
 ]

ASF GitHub Bot logged work on HADOOP-17414:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 11/Jan/21 14:15
            Start Date: 11/Jan/21 14:15
    Worklog Time Spent: 10m 
      Work Description: steveloughran commented on a change in pull request 
#2530:
URL: https://github.com/apache/hadoop/pull/2530#discussion_r555075093



##########
File path: 
hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
##########
@@ -1873,11 +1873,9 @@
 
 <property>
   <name>fs.s3a.committer.magic.enabled</name>
-  <value>false</value>
+  <value>true</value>

Review comment:
I will add it to the release notes. This *does not* enable the committer. All 
it does is say "this store has the consistency needed for the committer". 
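
   For reference, a rough sketch of the switch that actually selects the magic 
committer (the `fs.s3a.committer.name` option from the S3A committer docs; it is 
not part of this diff and is shown here only for illustration):

   ```xml
   <!-- Illustrative only: this change just flips the default of the
        "store is consistent enough" flag; selecting the committer is a
        separate option. -->
   <property>
     <name>fs.s3a.committer.name</name>
     <value>magic</value>
   </property>
   <property>
     <name>fs.s3a.committer.magic.enabled</name>
     <value>true</value>
   </property>
   ```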
   
   Now, we could do a bit more in a separate patch, as there are a few more 
settings I'd like to change. In what we ship in Cloudera, fs.s3a.buffer.dir is 
set to something like `${env.LOCAL_DIR:${hadoop.tmp.dir}}/s3a` so that 
incomplete buffered writes (staging committer data and blocks for normal 
file/magic uploads) are always cleaned up. I could make the core-default 
settings for this committer different.
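
   A rough sketch of the kind of entry meant here, reusing the value quoted 
above (whether core-default should ship with it is left to that separate patch):

   ```xml
   <property>
     <name>fs.s3a.buffer.dir</name>
     <value>${env.LOCAL_DIR:${hadoop.tmp.dir}}/s3a</value>
     <description>Local directory for buffering blocks and staging committer
       data; pointing it at a per-container local dir means incomplete
       buffered writes are cleaned up along with the container.</description>
   </property>
   ```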




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 534255)
    Time Spent: 5h 10m  (was: 5h)

> Magic committer files don't have the count of bytes written collected by spark
> ------------------------------------------------------------------------------
>
>                 Key: HADOOP-17414
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17414
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> The Spark statistics tracking doesn't correctly assess the size of the 
> uploaded files, as it only calls getFileStatus on the zero-byte marker 
> objects, not on the yet-to-manifest files. Which, given they don't exist 
> yet, isn't easy to do.
> Solution: 
> * Add getXAttr and listXAttr API calls to S3AFileSystem
> * Return all S3 object headers as XAttr attributes prefixed "header.", both 
> custom and standard ones (e.g. header.Content-Length).
> The setXAttr call isn't implemented, so for correctness the FS doesn't
> declare its support for the API in hasPathCapability().
> The magic commit file write sets the custom header 
> x-hadoop-s3a-magic-data-length on the marker file to the length of the 
> final data.
> A matching patch in Spark will look for the XAttr
> "header.x-hadoop-s3a-magic-data-length" when the file
> being probed for output data is zero bytes long. 
> As a result, the job tracking statistics will report the
> bytes written but not yet manifested.
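
As an illustration of the consumer side described above, a hedged Java sketch 
(the class and method names are invented for the example; only 
FileSystem.getXAttr, getFileStatus and the header/XAttr names come from the 
description):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * Illustrative only: probe a zero-byte magic-committer marker file for the
 * length of the data which has been uploaded but not yet manifested.
 */
public final class MagicMarkerLength {

  /** XAttr name per HADOOP-17414: "header." prefix plus the custom header. */
  private static final String MAGIC_LEN_XATTR =
      "header.x-hadoop-s3a-magic-data-length";

  public static long lengthOf(FileSystem fs, Path file) throws IOException {
    long reported = fs.getFileStatus(file).getLen();
    if (reported > 0) {
      // Normal file: the file status length is the real length.
      return reported;
    }
    // Zero-byte file: it may be a magic marker, so look for the header XAttr.
    // Assumption: a missing attribute comes back null/empty; callers may
    // instead need to handle an IOException depending on the FS behaviour.
    byte[] value = fs.getXAttr(file, MAGIC_LEN_XATTR);
    if (value == null || value.length == 0) {
      return reported;
    }
    // Assumption: the header value is the decimal length as an ASCII string.
    return Long.parseLong(new String(value, StandardCharsets.US_ASCII));
  }
}
```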



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
