GitHub user SparkQA commented on the issue:
https://github.com/apache/spark/pull/20937
**[Test build #89938 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/89938/testReport)** for PR 20937 at commit [`e0cebf4`](https://github.com/apache/spark/commit/e0cebf4aa8bdec4d27ad9cd8d4296ebbb8ed9269).
* This patch passes all tests.
* This patch merges cleanly.
* This patch adds the following public classes _(experimental)_:
* `class HasCollectSubModels(Params):`
* `class Summarizer(object):`
* `class SummaryBuilder(JavaWrapper):`
* `class CrossValidator(Estimator, ValidatorParams, HasParallelism, HasCollectSubModels,`
* `class TrainValidationSplit(Estimator, ValidatorParams, HasParallelism, HasCollectSubModels,`
* `case class Reverse(child: Expression) extends UnaryExpression with ImplicitCastInputTypes `
* `case class ArrayJoin(`
* `case class ArrayMin(child: Expression) extends UnaryExpression with ImplicitCastInputTypes `
* `case class ArrayMax(child: Expression) extends UnaryExpression with ImplicitCastInputTypes `
* `case class ArrayPosition(left: Expression, right: Expression)`
* `case class ElementAt(left: Expression, right: Expression) extends GetMapValueUtil `
* `case class Concat(children: Seq[Expression]) extends Expression `
* `case class Flatten(child: Expression) extends UnaryExpression `
* `abstract class GetMapValueUtil extends BinaryExpression with ImplicitCastInputTypes `
* `case class GetMapValue(child: Expression, key: Expression)`
* `case class MonthsBetween(`
* `trait QueryPlanConstraints extends ConstraintHelper `
* `trait ConstraintHelper `
* `class ArrayDataIndexedSeq[T](arrayData: ArrayData, dataType: DataType) extends IndexedSeq[T] `
* ` .doc("The class used to write checkpoint files atomically. This class must be a subclass " +`
* `case class CachedRDDBuilder(`
* `case class InMemoryRelation(`
* `trait CheckpointFileManager `
* ` sealed trait RenameHelperMethods `
* ` abstract class CancellableFSDataOutputStream(protected val underlyingStream: OutputStream)`
* ` sealed class RenameBasedFSDataOutputStream(`
* `class FileSystemBasedCheckpointFileManager(path: Path, hadoopConf: Configuration)`
* `class FileContextBasedCheckpointFileManager(path: Path, hadoopConf: Configuration)`
* `case class WriteToContinuousDataSource(`
* `case class WriteToContinuousDataSourceExec(writer: StreamWriter, query: SparkPlan)`
* `abstract class MemoryStreamBase[A : Encoder](sqlContext: SQLContext) extends BaseStreamingSource `
* `class ContinuousMemoryStream[A : Encoder](id: Int, sqlContext: SQLContext)`
* ` case class GetRecord(offset: ContinuousMemoryStreamPartitionOffset)`
* `class ContinuousMemoryStreamDataReaderFactory(`
* `class ContinuousMemoryStreamDataReader(`
* `case class ContinuousMemoryStreamOffset(partitionNums: Map[Int, Int])`
* `case class ContinuousMemoryStreamPartitionOffset(partition: Int, numProcessed: Int)`
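
Several of the Catalyst expressions listed above (`Reverse`, `ArrayMin`, `ArrayMax`, `ArrayPosition`, `ElementAt`, `Flatten`) back new collection functions in Spark SQL. A minimal sketch of exercising them end to end, assuming a build that includes this commit and registers them under the SQL names shown; the object, app, and column names are illustrative only:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative driver; the object and app names here are hypothetical.
object CollectionExpressionsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("collection-expressions-sketch")
      .getOrCreate()

    // Each SQL function below is backed by one of the expressions listed above.
    spark.sql(
      """SELECT
        |  reverse(array(1, 2, 3))               AS reversed,  -- Reverse
        |  array_min(array(3, 1, 2))             AS min_elem,  -- ArrayMin
        |  array_max(array(3, 1, 2))             AS max_elem,  -- ArrayMax
        |  array_position(array(3, 1, 2), 1)     AS pos_of_1,  -- ArrayPosition (1-based)
        |  element_at(map('a', 1, 'b', 2), 'b')  AS elem_b,    -- ElementAt
        |  flatten(array(array(1, 2), array(3))) AS flattened  -- Flatten
        |""".stripMargin).show()

    spark.stop()
  }
}
```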