SparkQA commented on pull request #28647: URL: https://github.com/apache/spark/pull/28647#issuecomment-728757580
**[Test build #131201 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/131201/testReport)** for PR 28647 at commit [`4b55575`](https://github.com/apache/spark/commit/4b555750488a5c5c77077dbb0aa98514eb04b03f).

 * This patch **fails due to an unknown error code, -9**.
 * This patch merges cleanly.
 * This patch adds the following public classes _(experimental)_:
   * `public static class NoOpMergedShuffleFileManager implements MergedShuffleFileManager`
   * `public class RemoteBlockPushResolver implements MergedShuffleFileManager`
   * `static class PushBlockStreamCallback implements StreamCallbackWithID`
   * `public static class AppShuffleId`
   * `public static class AppShufflePartitionInfo`
   * `trait HasMaxBlockSizeInMB extends Params`
   * `>>> class VectorAccumulatorParam(AccumulatorParam):`
   * `fully qualified classname of key Writable class (e.g. \"org.apache.hadoop.io.Text\")`
   * `fully qualified classname of key Writable class (e.g. \"org.apache.hadoop.io.Text\")`
   * `fully qualified classname of key Writable class (e.g. \"org.apache.hadoop.io.Text\")`
   * `fully qualified classname of key Writable class (e.g. \"org.apache.hadoop.io.Text\")`
   * `class HasMaxBlockSizeInMB(Params):`
   * `trait SQLConfHelper`
   * `class Analyzer(override val catalogManager: CatalogManager)`
   * `case class UnresolvedTableOrView(`
   * `case class UnresolvedPartitionSpec(`
   * `case class ResolvedPartitionSpec(`
   * `case class ElementAt(`
   * `case class GetArrayItem(`
   * `case class GetMapValue(`
   * `case class Elt(`
   * `trait OffsetWindowFunction extends WindowFunction`
   * `class AstBuilder extends SqlBaseBaseVisitor[AnyRef] with SQLConfHelper with Logging`
   * `abstract class AbstractSqlParser extends ParserInterface with SQLConfHelper with Logging`
   * `class CatalystSqlParser extends AbstractSqlParser`
   * `case class AnalyzeTable(`
   * `case class AnalyzeColumn(`
   * `case class AlterTableAddPartition(`
   * `case class AlterTableDropPartition(`
   * `case class LoadData(`
   * `case class ShowCreateTable(child: LogicalPlan, asSerde: Boolean = false) extends Command`
   * `abstract class Rule[TreeType <: TreeNode[_]] extends SQLConfHelper with Logging`
   * `implicit class PartitionSpecsHelper(partSpecs: Seq[PartitionSpec])`
   * `class SparkPlanner(val session: SparkSession, val experimentalMethods: ExperimentalMethods)`
   * `class SparkSqlParser extends AbstractSqlParser`
   * `class SparkSqlAstBuilder extends AstBuilder`
   * `case class CoalesceShufflePartitions(session: SparkSession) extends Rule[SparkPlan]`
   * `class FindDataSourceTable(sparkSession: SparkSession) extends Rule[LogicalPlan]`
   * `class FallBackFileSourceV2(sparkSession: SparkSession) extends Rule[LogicalPlan]`
   * `class ResolveSQLOnFile(sparkSession: SparkSession) extends Rule[LogicalPlan]`
   * `case class PreprocessTableCreation(sparkSession: SparkSession) extends Rule[LogicalPlan]`
   * `case class AlterTableAddPartitionExec(`
   * `case class AlterTableDropPartitionExec(`
   * `case class DropTableExec(`
   * `class V2SessionCatalog(catalog: SessionCatalog)`
   * `case class PlanDynamicPruningFilters(sparkSession: SparkSession)`
   * `class HDFSBackedReadStateStore(val version: Long, map: MapType)`
   * `trait ReadStateStore`
   * `trait StateStore extends ReadStateStore`
   * `class WrappedReadStateStore(store: StateStore) extends ReadStateStore`
   * `abstract class BaseStateStoreRDD[T: ClassTag, U: ClassTag](`
   * `class ReadStateStoreRDD[T: ClassTag, U: ClassTag](`
   * `case class PlanSubqueries(sparkSession: SparkSession) extends Rule[SparkPlan]`
   * `class VariableSubstitution extends SQLConfHelper`
   * `abstract class JdbcDialect extends Serializable with Logging`
   * `class ResolveHiveSerdeTable(session: SparkSession) extends Rule[LogicalPlan]`
   * `class DetermineTableStats(session: SparkSession) extends Rule[LogicalPlan]`

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
