SparkQA commented on pull request #32031:
URL: https://github.com/apache/spark/pull/32031#issuecomment-826065128
**[Test build #137877 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/137877/testReport)** for PR 32031 at commit [`10945f2`](https://github.com/apache/spark/commit/10945f2d845b9f044acd65d40a278ced51acc2b9).

 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds the following public classes _(experimental)_:
  * `class MiscellaneousProcessDetails(`
  * `case class SparkListenerMiscellaneousProcessAdded(time: Long, processId: String,`
  * ` case class MiscellaneousProcessAdded(`
  * `class ImmutableBitSet(val numBits: Int, val bitsToSet: Int*) extends BitSet(numBits) `
  * `public class VectorizedBLAS extends F2jBLAS `
  * `class PandasOnSparkFrameMethods(object):`
  * `class PandasOnSparkSeriesMethods(object):`
  * `class IndexOpsMixin(object, metaclass=ABCMeta):`
  * `class CategoricalAccessor(object):`
  * ` however, expected types are [(<class 'float'>, <class 'int'>)].`
  * `class OptionError(AttributeError, KeyError):`
  * `class DatetimeMethods(object):`
  * `class DataError(Exception):`
  * `class SparkPandasIndexingError(Exception):`
  * `class SparkPandasNotImplementedError(NotImplementedError):`
  * `class PandasNotImplementedError(NotImplementedError):`
  * ` new_class = type("NameType", (NameTypeHolder,), `
  * ` new_class = type("NameType", (NameTypeHolder,), `
  * `class DataFrame(Frame, Generic[T]):`
  * ` [defaultdict(<class 'list'>, `
  * `defaultdict(<class 'list'>, `
  * `class CachedDataFrame(DataFrame):`
  * `class Frame(object, metaclass=ABCMeta):`
  * `class GroupBy(object, metaclass=ABCMeta):`
  * `class DataFrameGroupBy(GroupBy):`
  * `class SeriesGroupBy(GroupBy):`
  * `class Index(IndexOpsMixin):`
  * `class CategoricalIndex(Index):`
  * `class DatetimeIndex(Index):`
  * `class MultiIndex(Index):`
  * ` a single :class:`Index` (or subclass thereof).`
  * `class NumericIndex(Index):`
  * `class IntegerIndex(NumericIndex):`
  * `class Int64Index(IntegerIndex):`
  * `class Float64Index(NumericIndex):`
  * `class IndexerLike(object):`
  * `class AtIndexer(IndexerLike):`
  * `class iAtIndexer(IndexerLike):`
  * `class LocIndexerLike(IndexerLike, metaclass=ABCMeta):`
  * `class LocIndexer(LocIndexerLike):`
  * `class iLocIndexer(LocIndexerLike):`
  * `class InternalFrame(object):`
  * `class _MissingPandasLikeDataFrame(object):`
  * `class MissingPandasLikeDataFrameGroupBy(object):`
  * `class MissingPandasLikeSeriesGroupBy(object):`
  * `class MissingPandasLikeIndex(object):`
  * `class MissingPandasLikeDatetimeIndex(MissingPandasLikeIndex):`
  * `class MissingPandasLikeCategoricalIndex(MissingPandasLikeIndex):`
  * `class MissingPandasLikeMultiIndex(object):`
  * `class MissingPandasLikeSeries(object):`
  * `class MissingPandasLikeExpanding(object):`
  * `class MissingPandasLikeRolling(object):`
  * `class MissingPandasLikeExpandingGroupby(object):`
  * `class MissingPandasLikeRollingGroupby(object):`
  * `class PythonModelWrapper(object):`
  * `class PandasOnSparkPlotAccessor(PandasObject):`
  * `class PandasOnSparkBarPlot(PandasBarPlot, TopNPlotBase):`
  * `class PandasOnSparkBoxPlot(PandasBoxPlot, BoxPlotBase):`
  * `class PandasOnSparkHistPlot(PandasHistPlot, HistogramPlotBase):`
  * `class PandasOnSparkPiePlot(PandasPiePlot, TopNPlotBase):`
  * `class PandasOnSparkAreaPlot(PandasAreaPlot, SampledPlotBase):`
  * `class PandasOnSparkLinePlot(PandasLinePlot, SampledPlotBase):`
  * `class PandasOnSparkBarhPlot(PandasBarhPlot, TopNPlotBase):`
  * `class PandasOnSparkScatterPlot(PandasScatterPlot, TopNPlotBase):`
  * `class PandasOnSparkKdePlot(PandasKdePlot, KdePlotBase):`
  * ` new_class = type("NameType", (NameTypeHolder,), `
  * ` new_class = param.type if isinstance(param, np.dtype) else param`
  * `class Series(Frame, IndexOpsMixin, Generic[T]):`
  * ` dictionary is a ``dict`` subclass that defines ``__missing__`` (i.e.`
  * ` defaultdict(<class 'list'>, `
  * `class SparkIndexOpsMethods(object, metaclass=ABCMeta):`
  * `class SparkSeriesMethods(SparkIndexOpsMethods):`
  * `class SparkIndexMethods(SparkIndexOpsMethods):`
  * `class SparkFrameMethods(object):`
  * `class CachedSparkFrameMethods(SparkFrameMethods):`
  * `class SQLProcessor(object):`
  * `class StringMethods(object):`
  * `class SeriesType(Generic[T]):`
  * `class DataFrameType(object):`
  * `class ScalarType(object):`
  * `class UnknownType(object):`
  * `class NameTypeHolder(object):`
  * ` The returned type class indicates both dtypes (a pandas only dtype object`
  * `class PandasOnSparkUsageLogger(object):`
  * `class RollingAndExpanding(object):`
  * `class Rolling(RollingAndExpanding):`
  * `class RollingGroupby(Rolling):`
  * `class Expanding(RollingAndExpanding):`
  * `class ExpandingGroupby(Expanding):`
  * `trait FunctionRegistryBase[T] `
  * `trait SimpleFunctionRegistryBase[T] extends FunctionRegistryBase[T] with Logging `
  * `trait EmptyFunctionRegistryBase[T] extends FunctionRegistryBase[T] `
  * `trait FunctionRegistry extends FunctionRegistryBase[Expression] `
  * `trait TableFunctionRegistry extends FunctionRegistryBase[LogicalPlan] `
  * `class NoSuchFunctionException(`
  * `case class ResolveTableValuedFunctions(catalog: SessionCatalog) extends Rule[LogicalPlan] `
  * ` case class CombinedTypeCoercionRule(rules: Seq[TypeCoercionRule]) extends TypeCoercionRule `
  * `abstract class QuaternaryExpression extends Expression with QuaternaryLike[Expression] `
  * `abstract class Covariance(val left: Expression, val right: Expression, nullOnDivideByZero: Boolean)`
  * `case class KnownFloatingPointNormalized(child: Expression) extends TaggingExpression `
  * `trait BaseGroupingSets extends Expression with CodegenFallback `
  * `case class Cube(`
  * `trait SimpleHigherOrderFunction extends HigherOrderFunction with BinaryLike[Expression] `
  * `case class Acos(child: Expression) extends UnaryMathExpression(math.acos, "ACOS") `
  * `case class Asin(child: Expression) extends UnaryMathExpression(math.asin, "ASIN") `
  * `case class Atan(child: Expression) extends UnaryMathExpression(math.atan, "ATAN") `
  * `case class Cbrt(child: Expression) extends UnaryMathExpression(math.cbrt, "CBRT") `
  * `case class Cos(child: Expression) extends UnaryMathExpression(math.cos, "COS") `
  * `case class Cosh(child: Expression) extends UnaryMathExpression(math.cosh, "COSH") `
  * `case class Log10(child: Expression) extends UnaryLogExpression(StrictMath.log10, "LOG10") `
  * `case class Signum(child: Expression) extends UnaryMathExpression(math.signum, "SIGNUM") `
  * `case class Sin(child: Expression) extends UnaryMathExpression(math.sin, "SIN") `
  * `case class Sinh(child: Expression) extends UnaryMathExpression(math.sinh, "SINH") `
  * `case class Sqrt(child: Expression) extends UnaryMathExpression(math.sqrt, "SQRT") `
  * `case class Tan(child: Expression) extends UnaryMathExpression(math.tan, "TAN") `
  * `case class Tanh(child: Expression) extends UnaryMathExpression(math.tanh, "TANH") `
  * `trait AnalysisOnlyCommand extends Command `
  * `case class DomainJoin(domainAttrs: Seq[Attribute], child: LogicalPlan) extends UnaryNode `
  * `case class DeleteAction(condition: Option[Expression]) extends MergeAction `
  * `case class UpdateStarAction(condition: Option[Expression]) extends MergeAction `
  * `case class InsertStarAction(condition: Option[Expression]) extends MergeAction `
  * `case class RefreshTable(child: LogicalPlan) extends UnaryCommand `
  * `case class CommentOnNamespace(child: LogicalPlan, comment: String) extends UnaryCommand `
  * `case class CommentOnTable(child: LogicalPlan, comment: String) extends UnaryCommand `
  * `case class RefreshFunction(child: LogicalPlan) extends UnaryCommand `
  * `case class DescribeFunction(child: LogicalPlan, isExtended: Boolean) extends UnaryCommand `
  * `case class RecoverPartitions(child: LogicalPlan) extends UnaryCommand `
  * `case class RuleId(id: Int) `
  * `abstract class TreeNode[BaseType <: TreeNode[BaseType]] extends Product with TreePatternBits `
  * `trait QuaternaryLike[T <: TreeNode[T]] `
  * `trait TreePatternBits `
  * ` implicit class MetadataColumnHelper(attr: Attribute) `
  * `public class OrcArrayColumnVector extends OrcColumnVector `
  * `public class OrcAtomicColumnVector extends OrcColumnVector `
  * `public abstract class OrcColumnVector extends org.apache.spark.sql.vectorized.ColumnVector `
  * `class OrcColumnVectorUtils `
  * `public class OrcMapColumnVector extends OrcColumnVector `
  * `public class OrcStructColumnVector extends OrcColumnVector `
  * `trait DataWritingCommand extends UnaryCommand `
  * `case class SetCommand(kv: Option[(String, Option[String])])`
  * `case class ResetCommand(config: Option[String]) extends LeafRunnableCommand with IgnoreCachedData `
  * `trait RunnableCommand extends Command `
  * `case class AddJarCommand(path: String) extends LeafRunnableCommand `
  * `case class AddFileCommand(path: String) extends LeafRunnableCommand `
  * `case class AddArchiveCommand(path: String) extends LeafRunnableCommand `
  * `case class ListFilesCommand(files: Seq[String] = Seq.empty[String]) extends LeafRunnableCommand `
  * `case class ListJarsCommand(jars: Seq[String] = Seq.empty[String]) extends LeafRunnableCommand `
  * `case class ListArchivesCommand(archives: Seq[String] = Seq.empty[String])`
  * `abstract class DescribeCommandBase extends LeafRunnableCommand `
  * `trait BaseCacheTableExec extends LeafV2CommandExec `
  * `sealed trait V1FallbackWriters extends LeafV2CommandExec with SupportsV1Write `
  * `case class WriteToDataSourceV2(`
  * `case class LocalLimitExec(limit: Int, child: SparkPlan) extends BaseLimitExec `
  * `abstract class CustomSumMetric extends CustomMetric `
  * `abstract class CustomAvgMetric extends CustomMetric `
  * `case class WriteToMicroBatchDataSource(`
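One pattern worth calling out in the list above is the family of unary math expressions (`Acos`, `Sqrt`, `Tanh`, ...), each declared as a one-line case class over a shared base. The sketch below is a minimal, standalone illustration of that shape only; it is not Spark's actual `UnaryMathExpression` API, and the simplified `child: Double` and `eval()` are assumptions made so the example runs on its own.

```scala
// Standalone sketch of the one-liner pattern seen above for the new unary math
// expressions. NOT Spark's API: `child` is a plain Double here (Spark's is an
// Expression tree node) and eval() is simplified for illustration.
abstract class UnaryMathExprSketch(f: Double => Double, val prettyName: String) {
  def child: Double                    // stand-in for Spark's child expression
  def eval(): Double = f(child)        // apply the wrapped math function
}

case class AcosSketch(child: Double) extends UnaryMathExprSketch(math.acos, "ACOS")
case class SqrtSketch(child: Double) extends UnaryMathExprSketch(math.sqrt, "SQRT")

object UnaryMathExprSketchDemo extends App {
  println(SqrtSketch(9.0).eval())      // 3.0
  println(AcosSketch(1.0).prettyName)  // ACOS
}
```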
-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
