GitHub user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/11632#issuecomment-201127188
  
    **[Test build #54134 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/54134/consoleFull)** for PR 11632 at commit [`249526b`](https://github.com/apache/spark/commit/249526b8ce54578e6f092512a473477ba8c8d67a).
     * This patch passes all tests.
     * This patch merges cleanly.
     * This patch adds the following public classes _(experimental)_:
      * `public class JavaChiSqSelectorExample `
      * `public class JavaCorrelationsExample `
      * `public class JavaElementwiseProductExample `
      * `public class JavaHypothesisTestingExample `
      * `public class JavaHypothesisTestingKolmogorovSmirnovTestExample `
      * `public class JavaKernelDensityEstimationExample `
      * `public class JavaStratifiedSamplingExample `
      * `public class JavaSummaryStatisticsExample `
      * `  class MultilayerPerceptronClassificationModelWriter(`
      * `class TypeConverters(object):`
      * `    probabilityCol = Param(Params._dummy(), "probabilityCol", "Column name for predicted class conditional probabilities. Note: Not all models output well-calibrated probability estimates! These probabilities should be treated as confidences, not precise probabilities.", typeConverter=TypeConverters.toString)`
      * `    thresholds = Param(Params._dummy(), "thresholds", "Thresholds in multi-class classification to adjust the probability of predicting each class. Array must have length equal to the number of classes, with values >= 0. The class with largest value p/t is predicted, where p is the original probability of that class and t is the class' threshold.", typeConverter=TypeConverters.toListFloat)`
      * `public final class XXH64 `
      * `abstract class HashExpression[E] extends Expression `
      * `abstract class InterpretedHashFunction `
      * `case class Murmur3Hash(children: Seq[Expression], seed: Int) extends HashExpression[Int] `
      * `case class XxHash64(children: Seq[Expression], seed: Long) extends HashExpression[Long] `
      * `      final class GeneratedIterator extends org.apache.spark.sql.execution.BufferedRowIterator `
      * `class FileStreamSink(`
      * `class StreamFileCatalog(sqlContext: SQLContext, path: Path) extends FileCatalog with Logging `
      * `  class HDFSBackedStateStore(val version: Long, mapToUpdate: MapType)`
      * `case class StateStoreId(checkpointLocation: String, operatorId: Long, partitionId: Int)`
      * `trait StateStore `
      * `trait StateStoreProvider `
      * `case class ValueAdded(key: UnsafeRow, value: UnsafeRow) extends StoreUpdate`
      * `case class ValueUpdated(key: UnsafeRow, value: UnsafeRow) extends StoreUpdate`
      * `case class KeyRemoved(key: UnsafeRow) extends StoreUpdate`
      * `class StateStoreRDD[T: ClassTag, U: ClassTag](`
      * `  implicit class StateStoreOps[T: ClassTag](dataRDD: RDD[T]) `
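
    The `thresholds` parameter documentation above describes a concrete decision rule: the predicted class is the one with the largest ratio p/t, where p is the class probability and t is that class's threshold. A minimal Python sketch of that rule for illustration only (the function name is hypothetical, and this is not Spark's actual implementation):

    ```python
    def predict_with_thresholds(probabilities, thresholds):
        """Return the index of the class maximizing p/t, per the rule
        described in the thresholds Param docstring above."""
        scaled = [p / t for p, t in zip(probabilities, thresholds)]
        return scaled.index(max(scaled))

    # Class 1 wins despite a lower raw probability, because its
    # threshold is smaller: ratios are [0.6/0.5, 0.4/0.2] = [1.2, 2.0].
    print(predict_with_thresholds([0.6, 0.4], [0.5, 0.2]))  # -> 1
    ```

    Lowering a class's threshold thus makes that class easier to predict, which is why the docstring requires one threshold per class.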

