[ https://issues.apache.org/jira/browse/MAPREDUCE-7341?focusedWorklogId=736750&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-736750 ]
ASF GitHub Bot logged work on MAPREDUCE-7341: --------------------------------------------- Author: ASF GitHub Bot Created on: 04/Mar/22 16:00 Start Date: 04/Mar/22 16:00 Worklog Time Spent: 10m Work Description: steveloughran commented on a change in pull request #2971: URL: https://github.com/apache/hadoop/pull/2971#discussion_r819694675 ########## File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/committer/manifest/ManifestCommitterConstants.java ########## @@ -0,0 +1,282 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.mapreduce.lib.output.committer.manifest; + +import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceStability; + +/** + * Public constants for the manifest committer. + * This includes all configuration options and their default values. + */ +@InterfaceAudience.Public +@InterfaceStability.Unstable +public final class ManifestCommitterConstants { + + /** + * Suffix to use in manifest files in the job attempt dir. + * Value: {@value}. 
+ */ + public static final String MANIFEST_SUFFIX = "-manifest.json"; + + /** + * Prefix for summary files in the report dir. + * Value: {@value}. + */ + public static final String SUMMARY_FILENAME_PREFIX = "summary-"; + + /** + * Format string used to build a summary file from a Job ID. + */ + public static final String SUMMARY_FILENAME_FORMAT = + SUMMARY_FILENAME_PREFIX + "%s.json"; + + /** + * Suffix to use for temp files before renaming them. + * Value: {@value}. + */ + public static final String TMP_SUFFIX = ".tmp"; + + /** + * Initial number of all app attempts. + * This is fixed in YARN; for Spark jobs the + * same number "0" is used. + */ + public static final int INITIAL_APP_ATTEMPT_ID = 0; + + /** + * Format string for building a job dir. + * Value: {@value}. + */ + public static final String JOB_DIR_FORMAT_STR = "manifest_%s"; + + /** + * Format string for building a job attempt dir. + * This uses the job attempt number so previous versions + * can be found trivially. + * Value: {@value}. + */ + public static final String JOB_ATTEMPT_DIR_FORMAT_STR = "%d"; + + /** + * Name of directory under job attempt dir for manifests. + */ + public static final String JOB_TASK_MANIFEST_SUBDIR = "manifests"; + + /** + * Name of directory under job attempt dir for task attempts. + */ + public static final String JOB_TASK_ATTEMPT_SUBDIR = "tasks"; + + + /** + * Committer classname as recorded in the committer _SUCCESS file. + */ + public static final String MANIFEST_COMMITTER_CLASSNAME = + "org.apache.hadoop.mapreduce.lib.output.committer.manifest.ManifestCommitter"; + + /** + * Marker file to create on success: {@value}. + */ + public static final String SUCCESS_MARKER = "_SUCCESS"; + + /** Default job marker option: {@value}. */ + public static final boolean DEFAULT_CREATE_SUCCESSFUL_JOB_DIR_MARKER = true; + + /** + * The limit to the number of committed objects tracked during + * job commits and saved to the _SUCCESS file. + * Value: {@value}. 
+ */ + public static final int SUCCESS_MARKER_FILE_LIMIT = 100; + + /** + * The UUID for jobs: {@value}. + * This was historically created in Spark 1.x's SQL queries, + * but "went away". + * It has been restored in recent Spark releases. + * If found, it is used instead of the MR job attempt ID. + */ + public static final String SPARK_WRITE_UUID = "spark.sql.sources.writeJobUUID"; + + /** + * String to use as source of the job ID. + * This SHOULD be kept in sync with that of + * {@code AbstractS3ACommitter.JobUUIDSource}. + * Value: {@value}. + */ + public static final String JOB_ID_SOURCE_MAPREDUCE = "JobID"; + + /** + * Prefix to use for config options: {@value}. + */ + public static final String OPT_PREFIX = "mapreduce.manifest.committer."; + + /** + * Rather than delete in cleanup, should the working directory + * be moved to the trash directory? + * Potentially faster on some stores. + * Value: {@value}. + */ + public static final String OPT_CLEANUP_MOVE_TO_TRASH = Review comment: The problem here is that an ABFS delete, when you use OAuth, can take so long that the FS operation times out; something to do with directory permission checks all the way up the tree. See [HADOOP-17691](https://issues.apache.org/jira/browse/HADOOP-17691) "Abfs directory delete times out on large directory tree w/ Oauth: OperationTimedOut". Supporting trash means that if this problem surfaces we have a fallback. Note: because we default to deleting each task dir independently (parallelized), the risk of this is lower than it has been on those FileOutputCommitter jobs. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
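As a quick illustration of how the naming constants in the patch compose via String.format: the constant values below are copied from the diff above, but the helper methods and the example job/task-attempt IDs are hypothetical sketches, not committer API.

```java
// Standalone sketch of how the manifest committer's naming constants compose.
// Constant values are copied from the patch; helper names are illustrative only.
public class ManifestNamingSketch {
    static final String MANIFEST_SUFFIX = "-manifest.json";
    static final String SUMMARY_FILENAME_PREFIX = "summary-";
    static final String SUMMARY_FILENAME_FORMAT =
        SUMMARY_FILENAME_PREFIX + "%s.json";
    static final String JOB_DIR_FORMAT_STR = "manifest_%s";

    /** Summary file name for a job, as saved to the report dir. */
    static String summaryFile(String jobId) {
        return String.format(SUMMARY_FILENAME_FORMAT, jobId);
    }

    /** Job directory name under the output path. */
    static String jobDir(String jobId) {
        return String.format(JOB_DIR_FORMAT_STR, jobId);
    }

    /** Manifest file name a task attempt writes under the "manifests" subdir. */
    static String taskManifest(String taskAttemptId) {
        return taskAttemptId + MANIFEST_SUFFIX;
    }

    public static void main(String[] args) {
        System.out.println(summaryFile("job_0001"));  // summary-job_0001.json
        System.out.println(jobDir("job_0001"));       // manifest_job_0001
        System.out.println(taskManifest("attempt_0001_m_000000_0"));
        // attempt_0001_m_000000_0-manifest.json
    }
}
```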
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking ------------------- Worklog Id: (was: 736750) Time Spent: 23.5h (was: 23h 20m) > Add a task-manifest output committer for Azure and GCS > ------------------------------------------------------ > > Key: MAPREDUCE-7341 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7341 > Project: Hadoop Map/Reduce > Issue Type: New Feature > Components: client > Affects Versions: 3.3.1 > Reporter: Steve Loughran > Assignee: Steve Loughran > Priority: Major > Labels: pull-request-available > Time Spent: 23.5h > Remaining Estimate: 0h > > Add a task-manifest output committer for Azure and GCS > The S3A committers are very popular in Spark on S3, as they are both correct > and fast. > The classic FileOutputCommitter v1 and v2 algorithms are all that is > available for Azure ABFS and Google GCS, and they have limitations. > The v2 algorithm isn't safe in the presence of failed task attempt commits, > so we > recommend the v1 algorithm for Azure. But that is slow because it > sequentially lists > then renames files and directories, one-by-one. The latencies of list > and rename make things slow. > Google GCS lacks the atomic directory rename required for v1 correctness; > v2 can be used (which doesn't have the job commit performance limitations), > but it's not safe. > Proposed > * Add a new FileOutputFormat committer which uses an intermediate manifest to > pass the list of files created by a TA to the job committer. > * Job committer to parallelise reading these task manifests and submit all the > rename operations into a pool of worker threads. (also: mkdir, directory > deletions on cleanup) > * Use the committer plugin mechanism added for s3a to make this the default > committer for ABFS > (i.e. 
no need to make any changes to FileOutputCommitter) > * Add lots of IOStatistics instrumentation + logging of operations in the > JobCommit > for visibility of where delays are occurring. > * Reuse the S3A committer _SUCCESS JSON structure to publish IOStats & other > data > for testing/support. > This committer will be faster than the v1 algorithm because of the > parallelisation and because, > as a manifest written by create-and-rename is exclusive to a single > task > attempt, it delivers the isolation which the v2 committer lacks. > This is not an attempt to do an iceberg/hudi/delta-lake style manifest-only > format > for describing the contents of a table; the final output is still a directory > tree > which must be scanned during query planning. > As such the format is still suboptimal for cloud storage, but at least we > will have > faster job execution during the commit phases. > > Note: this will also work on HDFS, where again, it should be faster than > the v1 committer. However, the target is very much Spark with ABFS and GCS; no > plans to worry about MR, as that simplifies the challenge of dealing with job > restart (i.e. you don't have to). -- This message was sent by Atlassian Jira (v8.20.1#820001) --------------------------------------------------------------------- To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org
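The proposed job-commit flow (read per-task manifests, push every file rename into a pool of worker threads instead of renaming sequentially as FileOutputCommitter v1 does) can be sketched with plain java.util.concurrent against the local filesystem. Everything below is an illustrative mock under those assumptions, not the committer's actual code: manifests are modelled as simple lists of paths.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Stream;

/** Local-FS mock of a manifest-style job commit: every file listed in every
 *  task manifest is renamed into the destination dir by a worker pool. */
public class ParallelRenameSketch {

    public static void commit(Path destDir, List<List<Path>> taskManifests,
                              int threads)
            throws IOException, InterruptedException, ExecutionException {
        Files.createDirectories(destDir);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<Path>> renames = new ArrayList<>();
            for (List<Path> manifest : taskManifests) {
                for (Path src : manifest) {
                    // one rename per file; file renames need no atomic dir rename
                    renames.add(pool.submit(() ->
                        Files.move(src, destDir.resolve(src.getFileName()))));
                }
            }
            for (Future<Path> f : renames) {
                f.get();  // propagate any rename failure to fail the job commit
            }
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        Path work = Files.createTempDirectory("job-attempt");
        Path dest = Files.createTempDirectory("dest");
        List<List<Path>> manifests = new ArrayList<>();
        for (int task = 0; task < 3; task++) {
            List<Path> manifest = new ArrayList<>();
            for (int i = 0; i < 4; i++) {
                manifest.add(Files.createFile(
                    work.resolve("task" + task + "-part-" + i)));
            }
            manifests.add(manifest);
        }
        commit(dest, manifests, 4);
        try (Stream<Path> files = Files.list(dest)) {
            System.out.println(files.count()); // 12
        }
    }
}
```

Waiting on every Future before returning is the point: a single failed rename surfaces as an ExecutionException and fails the job commit, rather than leaving a partially renamed output silently in place.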