SparkQA commented on issue #27549: [SPARK-30803][DOCS] Fix the home page link for Scala API document
URL: https://github.com/apache/spark/pull/27549#issuecomment-586600467
 
 
   **[Test build #118479 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/118479/testReport)** for PR 27549 at commit [`2a5a571`](https://github.com/apache/spark/commit/2a5a57178abcf69942af3a89c8f09a5600ebb6fe).
    * This patch passes all tests.
    * This patch merges cleanly.
    * This patch adds the following public classes _(experimental)_:
     * `[Tokenization](http://en.wikipedia.org/wiki/Lexical_analysis#Tokenization) is the process of taking text (such as a sentence) and breaking it into individual terms (usually words). A simple [Tokenizer](api/scala/org/apache/spark/ml/feature/Tokenizer.html) class provides this functionality. The example below shows how to split sentences into sequences of words.`
     * `[PCA](http://en.wikipedia.org/wiki/Principal_component_analysis) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. A [PCA](api/scala/org/apache/spark/ml/feature/PCA.html) class trains a model to project vectors to a low-dimensional space using PCA. The example below shows how to project 5-dimensional feature vectors into 3-dimensional principal components.`
     * `[Polynomial expansion](http://en.wikipedia.org/wiki/Polynomial_expansion) is the process of expanding your features into a polynomial space, which is formulated by an n-degree combination of original dimensions. A [PolynomialExpansion](api/scala/org/apache/spark/ml/feature/PolynomialExpansion.html) class provides this functionality. The example below shows how to expand your features into a 3-degree polynomial space.`
     * `* *(Breaking change)* The apply and copy methods for the case class [BoostingStrategy](api/scala/org/apache/spark/mllib/tree/configuration/BoostingStrategy.html) have been changed because of a modification to the case class fields. This could be an issue for users who use BoostingStrategy to set GBT parameters.`
     * `* *(Breaking change)* The return value of [LDA.run](api/scala/org/apache/spark/mllib/clustering/LDA.html) has changed. It now returns an abstract class LDAModel instead of the concrete class DistributedLDAModel. The object of type LDAModel can still be cast to the appropriate concrete type, which depends on the optimization algorithm.`
     * `A [Converter](api/scala/org/apache/spark/api/python/Converter.html) trait is provided`
     * `In Scala, you have to extend the class ForeachWriter ([docs](api/scala/org/apache/spark/sql/ForeachWriter.html)).`
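The tokenization item above describes splitting text into word terms. As a minimal illustration of the idea outside Spark (the lowercasing and regex are illustrative choices here, not the exact behavior of Spark's `Tokenizer`), a plain-Python sketch might look like:

```python
import re

def tokenize(sentence):
    """Split a sentence into lowercase word tokens (a minimal sketch)."""
    return re.findall(r"[a-z0-9']+", sentence.lower())

print(tokenize("Hi I heard about Spark"))
# -> ['hi', 'i', 'heard', 'about', 'spark']
```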
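The PCA item describes projecting 5-dimensional feature vectors onto 3 principal components. A hedged NumPy sketch of that projection (centering the data and taking the top right singular vectors; this is not Spark's `PCA` estimator, and the toy data is invented):

```python
import numpy as np

# Toy data: four 5-dimensional feature vectors (one per row).
X = np.array([
    [2.0, 0.0, 3.0, 4.0, 5.0],
    [4.0, 0.0, 0.0, 6.0, 7.0],
    [6.0, 0.0, 8.0, 1.0, 9.0],
    [1.0, 0.0, 2.0, 3.0, 4.0],
])

# Center the columns, then take the top-3 right singular vectors
# of the centered matrix as the principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
projected = Xc @ Vt[:3].T  # each 5-d row is now a 3-d vector

print(projected.shape)
# -> (4, 3)
```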
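The polynomial-expansion item describes expanding features into all degree-n combinations of the original dimensions. The sketch below generates those product terms with the standard library (the term ordering here is illustrative and differs from Spark's `PolynomialExpansion` output):

```python
from itertools import combinations_with_replacement
from math import prod

def polynomial_expand(features, degree):
    """Expand features into all products of 1..degree original dimensions."""
    out = []
    for d in range(1, degree + 1):
        for combo in combinations_with_replacement(features, d):
            out.append(prod(combo))
    return out

# Degree-2 expansion of [x, y] yields x, y, x*x, x*y, y*y.
print(polynomial_expand([2.0, 3.0], 2))
# -> [2.0, 3.0, 4.0, 6.0, 9.0]
```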

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
