[ https://issues.apache.org/jira/browse/MAPREDUCE-7341?focusedWorklogId=740926&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-740926 ]

ASF GitHub Bot logged work on MAPREDUCE-7341:
---------------------------------------------

                Author: ASF GitHub Bot
            Created on: 14/Mar/22 14:57
            Start Date: 14/Mar/22 14:57
    Worklog Time Spent: 10m 
      Work Description: steveloughran commented on pull request #2971:
URL: https://github.com/apache/hadoop/pull/2971#issuecomment-1066904248


   OK.
   
   I'm going to say "sorry, no" to the idea of using diff to validate JSON 
files, though it is worth thinking a bit about dest file validation.
   
   JSON is there to be parsed: the bundled diagnostics and iostats change, and 
the file paths will vary between local, abfs and gcs.
   
   The way to validate it is to read it in and make assertions on it.
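   
   For example, a minimal sketch of that approach: parse the JSON with Jackson 
and assert on stable fields, never on the raw text. The field names 
"committer" and "filenames" here are assumptions modeled on the S3A _SUCCESS 
schema, not the exact format:

       import com.fasterxml.jackson.databind.JsonNode;
       import com.fasterxml.jackson.databind.ObjectMapper;
       import java.io.File;

       public class SuccessFileCheck {
         public static void main(String[] args) throws Exception {
           // Parse the _SUCCESS file into a tree; never diff the raw text,
           // as diagnostics, iostatistics and paths vary per store.
           ObjectMapper mapper = new ObjectMapper();
           JsonNode success = mapper.readTree(new File(args[0]));

           // Assert only on fields expected to be stable across stores.
           // "committer" and "filenames" are assumed field names.
           if (success.path("committer").asText().isEmpty()) {
             throw new AssertionError("no committer recorded in _SUCCESS");
           }
           if (!success.path("filenames").isArray()) {
             throw new AssertionError("no filenames list in _SUCCESS");
           }
         }
       }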
   
   Alongside this PR, I have a private fork of the Google GCS connector which 
subclasses all the tests and runs them against Google Cloud,
   
   and an end-to-end test through Spark standalone:
   https://github.com/hortonworks-spark/cloud-integration
   
   These tests verify the committer works for DataFrame writes, and for Spark 
SQL with ORC, Parquet and CSV:
   
https://github.com/hortonworks-spark/cloud-integration/blob/master/cloud-examples/src/test/scala/com/cloudera/spark/cloud/abfs/commit/AbfsCommitDataframeSuite.scala#L83
   
https://github.com/hortonworks-spark/cloud-integration/tree/master/cloud-examples/src/test/scala/org/apache/spark/sql/hive/orc/abfs
   
https://github.com/hortonworks-spark/cloud-integration/tree/master/cloud-examples/src/test/scala/org/apache/spark/sql/hive/orc/gs
   
   These tests load the success file (and its truncated list of generated 
files) and validate it against the filesystem:
   
https://github.com/hortonworks-spark/cloud-integration/blob/master/cloud-examples/src/main/scala/com/cloudera/spark/cloud/s3/S3AOperations.scala#L54
   
   This is all an evolution of the existing suites for the S3A committers, 
which is where the success file came from.
   
   I would rather do the detailed tests there, as they are full integration 
tests. It is fairly tricky to get them building, however; a full compile takes 
an hour+, and needs to be repeated every morning (-SNAPSHOT artifacts, you see).
   
   What I can do in the Hadoop tests is add a test which loads a success file, 
validates it against the output, and checks that there are no unknown files 
there.
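   
   Roughly, such a test could load the success file and cross-check it against 
a recursive listing of the destination. A sketch, again assuming a "filenames" 
field and skipping the marker file itself; since the manifest's file list may 
be truncated, a real test would have to bound this check accordingly:

       import com.fasterxml.jackson.databind.JsonNode;
       import com.fasterxml.jackson.databind.ObjectMapper;
       import org.apache.hadoop.fs.FileSystem;
       import org.apache.hadoop.fs.LocatedFileStatus;
       import org.apache.hadoop.fs.Path;
       import org.apache.hadoop.fs.RemoteIterator;
       import java.util.HashSet;
       import java.util.Set;

       public class OutputValidation {
         public static void validate(FileSystem fs, Path dest)
             throws Exception {
           // Collect the paths declared in the _SUCCESS file.
           ObjectMapper mapper = new ObjectMapper();
           JsonNode success =
               mapper.readTree(fs.open(new Path(dest, "_SUCCESS")));
           Set<String> declared = new HashSet<>();
           success.path("filenames").forEach(f -> declared.add(f.asText()));

           // Walk the destination tree: every file found must be declared.
           RemoteIterator<LocatedFileStatus> files = fs.listFiles(dest, true);
           while (files.hasNext()) {
             Path p = files.next().getPath();
             if (p.getName().equals("_SUCCESS")) {
               continue;
             }
             if (!declared.contains(p.toUri().getPath())) {
               throw new AssertionError("unknown file in output: " + p);
             }
           }
         }
       }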
   
   I'd love some suggestions for improvements to the Spark ones too. They are 
a mix of my own tests and some I moved from the Apache Spark SQL suites and 
reworked to be targetable at different filesystems. One thing I don't test 
there is writing data over existing files in a complex partition tree... I 
should do that, which I can do after this patch is in...
   




Issue Time Tracking
-------------------

    Worklog Id:     (was: 740926)
    Time Spent: 32h  (was: 31h 50m)

> Add a task-manifest output committer for Azure and GCS
> ------------------------------------------------------
>
>                 Key: MAPREDUCE-7341
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7341
>             Project: Hadoop Map/Reduce
>          Issue Type: New Feature
>          Components: client
>    Affects Versions: 3.3.1
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 32h
>  Remaining Estimate: 0h
>
> Add a task-manifest output committer for Azure and GCS
> The S3A committers are very popular in Spark on S3, as they are both correct 
> and fast.
> The classic FileOutputCommitter v1 and v2 algorithms are all that is 
> available for Azure ABFS and Google GCS, and they have limitations. 
> The v2 algorithm isn't safe in the presence of failed task attempt commits, 
> so we recommend the v1 algorithm for Azure. But that is slow, because it 
> sequentially lists and then renames files and directories one by one; the 
> latencies of list and rename are what make it slow.
> Google GCS lacks the atomic directory rename required for v1 correctness;
> v2 can be used (which doesn't have the job commit performance limitations),
> but it's not safe.
> Proposed
> * Add a new FileOutputFormat committer which uses an intermediate manifest to
>   pass the list of files created by a task attempt to the job committer.
> * Job committer to parallelise reading these task manifests and submit all the
>   rename operations into a pool of worker threads (also: mkdir, directory
>   deletions on cleanup); see the sketch after this list.
> * Use the committer plugin mechanism added for s3a to make this the default 
> committer for ABFS
>   (i.e. no need to make any changes to FileOutputCommitter)
> * Add lots of IOStatistics instrumentation + logging of operations in the 
> JobCommit
>   for visibility of where delays are occurring.
> * Reuse the S3A committer _SUCCESS JSON structure to publish IOStats & other 
> data
>   for testing/support.  
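>
> A minimal sketch of that parallelised job commit, assuming a task manifest
> reduces to a list of (source, destination) renames; the names Entry and
> commit are illustrative, and the real manifest format, pool sizing and
> error handling are not shown:
>
>     import org.apache.hadoop.fs.FileSystem;
>     import org.apache.hadoop.fs.Path;
>     import java.util.ArrayList;
>     import java.util.List;
>     import java.util.concurrent.ExecutorService;
>     import java.util.concurrent.Executors;
>     import java.util.concurrent.Future;
>
>     public class ParallelRename {
>       /** One (source, destination) pair read from a task manifest. */
>       public record Entry(Path src, Path dst) {}
>
>       public static void commit(FileSystem fs, List<Entry> entries)
>           throws Exception {
>         // Rename through a worker pool rather than sequentially;
>         // this is where the speedup over the v1 algorithm comes from.
>         ExecutorService pool = Executors.newFixedThreadPool(16);
>         try {
>           List<Future<?>> futures = new ArrayList<>();
>           for (Entry e : entries) {
>             futures.add(pool.submit(() -> {
>               if (!fs.rename(e.src(), e.dst())) {
>                 throw new IllegalStateException("rename failed: " + e.src());
>               }
>               return null;
>             }));
>           }
>           for (Future<?> f : futures) {
>             f.get(); // propagate the first failure
>           }
>         } finally {
>           pool.shutdown();
>         }
>       }
>     }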
> This committer will be faster than the v1 algorithm because of the 
> parallelisation, and because a manifest written by create-and-rename is 
> exclusive to a single task attempt, it delivers the isolation which the v2 
> committer lacks.
> This is not an attempt to do an iceberg/hudi/delta-lake style manifest-only 
> format
> for describing the contents of a table; the final output is still a directory 
> tree
> which must be scanned during query planning.
> As such the format is still suboptimal for cloud storage, but at least we 
> will have faster job execution during the commit phases.
> Note: this will also work on HDFS, where again it should be faster than
> the v1 committer. However the target is very much Spark with ABFS and GCS; 
> there are no plans to worry about MR, as that simplifies the challenge of 
> dealing with job restart (i.e. you don't have to).


