Victsm commented on a change in pull request #29855:
URL: https://github.com/apache/spark/pull/29855#discussion_r502932243
##########
File path: common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/ErrorHandler.java
##########
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.network.shuffle;
+
+import java.net.ConnectException;
+
+/**
+ * Plugs into {@link RetryingBlockFetcher} to further control when an exception should be retried
+ * and logged.
+ * Note: {@link RetryingBlockFetcher} will delegate the exception to this handler only when
+ * - remaining retries < max retries
+ * - exception is an IOException
+ */
+
+public interface ErrorHandler {
+
+  boolean shouldRetryError(Throwable t);
+
+  default boolean shouldLogError(Throwable t) {
+    return true;
+  }
+
+  /**
+   * A no-op error handler instance.
+   */
+  ErrorHandler NOOP_ERROR_HANDLER = t -> true;
+
+  /**
+   * The error handler for pushing shuffle blocks to remote shuffle services.
+   */
+  class BlockPushErrorHandler implements ErrorHandler {
+    /**
+     * String constant used for generating exception messages indicating a block to be merged
+     * arrives too late on the server side, and also for later checking such exceptions on the
+     * client side. When we get a block push failure because the block arrives too late, we
+     * will not retry pushing the block nor log the exception on the client side.
+     */
+    public static final String TOO_LATE_MESSAGE_SUFFIX =
+      "received after merged shuffle is finalized";
+
+    /**
+     * String constant used for generating exception messages indicating the server couldn't
+     * append a block after all available attempts due to collision with other blocks belonging

Review comment:
   I don't quite get it. Here, collision refers to multiple blocks belonging to the same shuffle partition getting pushed at the same time to the same shuffle service. Since the shuffle service needs to completely append one block before handling another, we get a collision. Right now, all blocks belonging to one shuffle partition get pushed to the same shuffle service, so there won't be collisions between different servers. Even if we potentially allow multiple shuffle services to handle one shuffle partition in the future, a collision is still something that happens on one server, not between servers.

   Did you mean block duplication instead, i.e., the same block getting pushed multiple times? For now, with one shuffle service always handling one shuffle partition, duplication can be handled properly. If we allow multiple shuffle services to handle one partition down the road, the Spark driver should divide blocks based on disjoint map id subranges so that each shuffle service only handles a disjoint subset of blocks for a given shuffle partition.
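To make the single-writer constraint described above concrete, here is a minimal, hypothetical sketch. The names (`PartitionMergeState`, `tryStartAppend`, `finishAppend`) are invented for illustration and are not the shuffle service code in this PR; the sketch only shows the rule being discussed: a merged shuffle partition lets one block stream append at a time, and a second concurrent push for the same partition is reported back as a collision.

```java
import java.util.concurrent.atomic.AtomicReference;

class PartitionMergeState {
  // Stream id of the block currently being appended to this merged partition, or null if idle.
  private final AtomicReference<String> currentStream = new AtomicReference<>(null);

  /** Returns true if the caller won the right to append; false signals a collision. */
  boolean tryStartAppend(String streamId) {
    return currentStream.compareAndSet(null, streamId);
  }

  /** Called once the block identified by streamId has been fully appended or abandoned. */
  void finishAppend(String streamId) {
    currentStream.compareAndSet(streamId, null);
  }
}
```

Because all blocks for a given shuffle partition are currently pushed to a single shuffle service, this state lives on exactly one server, which is why the collision is a per-server condition rather than a cross-server one.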
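For the hypothetical future case where several shuffle services merge one shuffle partition, the driver-side split into disjoint map-id subranges mentioned in the comment could look like the sketch below. `MapIdRanges.split` is an invented helper, not anything in this PR; it just shows one way to hand each service a contiguous, non-overlapping range of map ids.

```java
import java.util.ArrayList;
import java.util.List;

final class MapIdRanges {
  /** Splits map ids [0, numMaps) into numServices contiguous, disjoint [start, end) ranges. */
  static List<int[]> split(int numMaps, int numServices) {
    List<int[]> ranges = new ArrayList<>();
    int base = numMaps / numServices;
    int extra = numMaps % numServices;
    int start = 0;
    for (int i = 0; i < numServices; i++) {
      int size = base + (i < extra ? 1 : 0);
      ranges.add(new int[] {start, start + size});
      start += size;
    }
    return ranges;
  }
}
```

For example, `split(10, 3)` yields [0, 4), [4, 7), and [7, 10), so no two services ever receive a block for the same map task of a given shuffle partition.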

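Going back to the `ErrorHandler` interface quoted in the diff above: the hunk is truncated before `BlockPushErrorHandler`'s method bodies, but a client-side handler built on `TOO_LATE_MESSAGE_SUFFIX` could plausibly look like the following sketch. `TooLateAwarePushErrorHandler` is an invented name and the behavior shown is an assumption, not the PR's actual implementation.

```java
package org.apache.spark.network.shuffle;

class TooLateAwarePushErrorHandler implements ErrorHandler {

  @Override
  public boolean shouldRetryError(Throwable t) {
    // A push rejected because the merged shuffle was already finalized cannot succeed on a
    // later attempt, so skip retrying it; other errors fall back to the normal retry path.
    return t.getMessage() == null
        || !t.getMessage().contains(
            ErrorHandler.BlockPushErrorHandler.TOO_LATE_MESSAGE_SUFFIX);
  }

  @Override
  public boolean shouldLogError(Throwable t) {
    // Too-late pushes are expected near the end of a stage, so do not log them as errors.
    return shouldRetryError(t);
  }
}
```

Per the Note in the interface Javadoc, `RetryingBlockFetcher` only delegates to such a handler when retries remain and the exception is an `IOException`, using `shouldRetryError` to decide on another attempt and `shouldLogError` to decide whether to log it.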