spark git commit: [SPARK-13590][ML][DOC] Document spark.ml LiR, LoR and AFTSurvivalRegression behavior difference

2016-06-07 Thread yliang
Repository: spark Updated Branches: refs/heads/master 890baaca5 -> 6ecedf39b [SPARK-13590][ML][DOC] Document spark.ml LiR, LoR and AFTSurvivalRegression behavior difference ## What changes were proposed in this pull request? When fitting ```LinearRegressionModel``` (by the "l-bfgs" solver) and

spark git commit: [SPARK-13590][ML][DOC] Document spark.ml LiR, LoR and AFTSurvivalRegression behavior difference

2016-06-07 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.0 9e16f23e7 -> e21a9ddef [SPARK-13590][ML][DOC] Document spark.ml LiR, LoR and AFTSurvivalRegression behavior difference ## What changes were proposed in this pull request? When fitting ```LinearRegressionModel``` (by the "l-bfgs" solver)

spark git commit: [SPARK-15738][PYSPARK][ML] Adding Pyspark ml RFormula __str__ method similar to Scala API

2016-06-10 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.0 8b6742a37 -> 80b8711b3 [SPARK-15738][PYSPARK][ML] Adding Pyspark ml RFormula __str__ method similar to Scala API ## What changes were proposed in this pull request? Adding __str__ to RFormula and model that will show the set formula

spark git commit: [SPARK-15946][MLLIB] Conversion between old/new vector columns in a DataFrame (Python)

2016-06-17 Thread yliang
Repository: spark Updated Branches: refs/heads/master af2a4b082 -> edb23f9e4 [SPARK-15946][MLLIB] Conversion between old/new vector columns in a DataFrame (Python) ## What changes were proposed in this pull request? This PR implements python wrappers for #13662 to convert old/new vector

spark git commit: [PYSPARK] add picklable SparseMatrix in pyspark.ml.common

2016-07-24 Thread yliang
Repository: spark Updated Branches: refs/heads/master cc1d2dcb6 -> 37bed97de [PYSPARK] add picklable SparseMatrix in pyspark.ml.common ## What changes were proposed in this pull request? add a `SparseMatrix` class which supports pickling. ## How was this patch tested? Existing test. Author:

spark git commit: [SPARK-16558][EXAMPLES][MLLIB] examples/mllib/LDAExample should use MLVector instead of MLlib Vector

2016-08-02 Thread yliang
Repository: spark Updated Branches: refs/heads/master d9e0919d3 -> dd8514fa2 [SPARK-16558][EXAMPLES][MLLIB] examples/mllib/LDAExample should use MLVector instead of MLlib Vector ## What changes were proposed in this pull request? mllib.LDAExample uses ML pipeline and MLlib LDA algorithm.

spark git commit: [SPARK-16851][ML] Incorrect thresholds length in 'setThresholds()' evokes Exception

2016-08-02 Thread yliang
Repository: spark Updated Branches: refs/heads/master a1ff72e1c -> d9e0919d3 [SPARK-16851][ML] Incorrect thresholds length in 'setThresholds()' evokes Exception ## What changes were proposed in this pull request? Add a length check for the thresholds in method `setThresholds()` of

spark git commit: [SPARK-16558][EXAMPLES][MLLIB] examples/mllib/LDAExample should use MLVector instead of MLlib Vector

2016-08-02 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.0 9d9956e8f -> c5516ab60 [SPARK-16558][EXAMPLES][MLLIB] examples/mllib/LDAExample should use MLVector instead of MLlib Vector ## What changes were proposed in this pull request? mllib.LDAExample uses ML pipeline and MLlib LDA

spark git commit: [MINOR][ML] Rename TreeEnsembleModels to TreeEnsembleModel for PySpark

2016-08-12 Thread yliang
Repository: spark Updated Branches: refs/heads/master ac84fb64d -> ccc6dc0f4 [MINOR][ML] Rename TreeEnsembleModels to TreeEnsembleModel for PySpark ## What changes were proposed in this pull request? Fix the typo of ```TreeEnsembleModels``` for PySpark; it should be ```TreeEnsembleModel```

spark git commit: [SPARK-16242][MLLIB][PYSPARK] Conversion between old/new matrix columns in a DataFrame (Python)

2016-06-28 Thread yliang
Repository: spark Updated Branches: refs/heads/master f6b497fcd -> e158478a9 [SPARK-16242][MLLIB][PYSPARK] Conversion between old/new matrix columns in a DataFrame (Python) ## What changes were proposed in this pull request? This PR implements python wrappers for #13888 to convert old/new

spark git commit: [SPARK-16242][MLLIB][PYSPARK] Conversion between old/new matrix columns in a DataFrame (Python)

2016-06-28 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.0 af70ad028 -> b349237e4 [SPARK-16242][MLLIB][PYSPARK] Conversion between old/new matrix columns in a DataFrame (Python) ## What changes were proposed in this pull request? This PR implements python wrappers for #13888 to convert

spark git commit: [SPARK-16249][ML] Change visibility of Object ml.clustering.LDA to public for loading

2016-07-06 Thread yliang
Repository: spark Updated Branches: refs/heads/master 5f342049c -> 5497242c7 [SPARK-16249][ML] Change visibility of Object ml.clustering.LDA to public for loading ## What changes were proposed in this pull request? jira: https://issues.apache.org/jira/browse/SPARK-16249 Change visibility of

spark git commit: [SPARK-16249][ML] Change visibility of Object ml.clustering.LDA to public for loading

2016-07-06 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.0 521fc7186 -> 25006c8bc [SPARK-16249][ML] Change visibility of Object ml.clustering.LDA to public for loading ## What changes were proposed in this pull request? jira: https://issues.apache.org/jira/browse/SPARK-16249 Change visibility

spark git commit: [SPARK-16307][ML] Add test to verify the predicted variances of a DT on toy data

2016-07-06 Thread yliang
Repository: spark Updated Branches: refs/heads/master 7e28fabdf -> 909c6d812 [SPARK-16307][ML] Add test to verify the predicted variances of a DT on toy data ## What changes were proposed in this pull request? The current tests assume that `impurity.calculate()` returns the variance

spark git commit: [SPARK-16933][ML] Fix AFTAggregator in AFTSurvivalRegression serializes unnecessary data.

2016-08-09 Thread yliang
Repository: spark Updated Branches: refs/heads/master 511f52f84 -> 182e11904 [SPARK-16933][ML] Fix AFTAggregator in AFTSurvivalRegression serializes unnecessary data. ## What changes were proposed in this pull request? Similar to ```LeastSquaresAggregator``` in #14109, ```AFTAggregator```

spark git commit: [SPARK-16241][ML] model loading backward compatibility for ml NaiveBayes

2016-06-30 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.0 c8a7c2305 -> 1d274455c [SPARK-16241][ML] model loading backward compatibility for ml NaiveBayes ## What changes were proposed in this pull request? model loading backward compatibility for ml NaiveBayes ## How was this patch tested?

spark git commit: [SPARK-16241][ML] model loading backward compatibility for ml NaiveBayes

2016-06-30 Thread yliang
Repository: spark Updated Branches: refs/heads/master 2c3d96134 -> b30a2dc7c [SPARK-16241][ML] model loading backward compatibility for ml NaiveBayes ## What changes were proposed in this pull request? model loading backward compatibility for ml NaiveBayes ## How was this patch tested?

spark git commit: [SPARK-16260][ML][EXAMPLE] PySpark ML Example Improvements and Cleanup

2016-07-04 Thread yliang
Repository: spark Updated Branches: refs/heads/master 262833397 -> a539b724c [SPARK-16260][ML][EXAMPLE] PySpark ML Example Improvements and Cleanup ## What changes were proposed in this pull request? 1) Remove unused import in Scala example; 2) Move spark session import outside example

spark git commit: [SPARK-16260][ML][EXAMPLE] PySpark ML Example Improvements and Cleanup

2016-07-04 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.0 0c6fd03fa -> 3ecee573c [SPARK-16260][ML][EXAMPLE] PySpark ML Example Improvements and Cleanup ## What changes were proposed in this pull request? 1) Remove unused import in Scala example; 2) Move spark session import outside example

spark git commit: [SPARK-16934][ML][MLLIB] Update LogisticCostAggregator serialization code to make it consistent with LinearRegression

2016-08-15 Thread yliang
Repository: spark Updated Branches: refs/heads/master ddf0d1e3f -> 3d8bfe7a3 [SPARK-16934][ML][MLLIB] Update LogisticCostAggregator serialization code to make it consistent with LinearRegression ## What changes were proposed in this pull request? Update LogisticCostAggregator serialization

spark git commit: [SPARK-19155][ML] Make family case insensitive in GLM

2017-01-23 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.1 8daf10e3f -> 1e07a7192 [SPARK-19155][ML] Make family case insensitive in GLM ## What changes were proposed in this pull request? This is a supplement to PR #16516 which did not make the value from `getFamily` case insensitive. Current

spark git commit: [SPARK-19155][ML] Make family case insensitive in GLM

2017-01-23 Thread yliang
Repository: spark Updated Branches: refs/heads/master de6ad3dfa -> f067acefa [SPARK-19155][ML] Make family case insensitive in GLM ## What changes were proposed in this pull request? This is a supplement to PR #16516 which did not make the value from `getFamily` case insensitive. Current

spark git commit: [SPARK-19313][ML][MLLIB] GaussianMixture should limit the number of features

2017-01-25 Thread yliang
Repository: spark Updated Branches: refs/heads/master 76db394f2 -> 0e821ec6f [SPARK-19313][ML][MLLIB] GaussianMixture should limit the number of features ## What changes were proposed in this pull request? The following test will fail on current master: scala test("gmm fails on high

spark git commit: [SPARK-19155][ML] MLlib GeneralizedLinearRegression family and link should be case insensitive

2017-01-21 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.1 6f0ad575d -> 8daf10e3f [SPARK-19155][ML] MLlib GeneralizedLinearRegression family and link should be case insensitive ## What changes were proposed in this pull request? MLlib ```GeneralizedLinearRegression``` ```family``` and ```link```

spark git commit: [SPARK-19155][ML] MLlib GeneralizedLinearRegression family and link should be case insensitive

2017-01-21 Thread yliang
Repository: spark Updated Branches: refs/heads/master aa014eb74 -> 3dcad9fab [SPARK-19155][ML] MLlib GeneralizedLinearRegression family and link should be case insensitive ## What changes were proposed in this pull request? MLlib ```GeneralizedLinearRegression``` ```family``` and ```link```

spark git commit: [SPARK-19155][ML] MLlib GeneralizedLinearRegression family and link should be case insensitive

2017-01-21 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.0 4c2065d0a -> 886f73737 [SPARK-19155][ML] MLlib GeneralizedLinearRegression family and link should be case insensitive ## What changes were proposed in this pull request? MLlib ```GeneralizedLinearRegression``` ```family``` and ```link```

spark git commit: [SPARK-19291][SPARKR][ML] spark.gaussianMixture supports output log-likelihood.

2017-01-21 Thread yliang
Repository: spark Updated Branches: refs/heads/master 3dcad9fab -> 0c589e371 [SPARK-19291][SPARKR][ML] spark.gaussianMixture supports output log-likelihood. ## What changes were proposed in this pull request? ```spark.gaussianMixture``` supports output total log-likelihood for the model like

spark git commit: [SPARK-18929][ML] Add Tweedie distribution in GLM

2017-01-26 Thread yliang
Repository: spark Updated Branches: refs/heads/master 90817a6cd -> 4172ff80d [SPARK-18929][ML] Add Tweedie distribution in GLM ## What changes were proposed in this pull request? I propose to add the full Tweedie family into the GeneralizedLinearRegression model. The Tweedie family is

spark git commit: [SPARK-18285][SPARKR] SparkR approxQuantile supports input multiple columns

2017-02-17 Thread yliang
Repository: spark Updated Branches: refs/heads/master 1a3f5f8c5 -> b40659838 [SPARK-18285][SPARKR] SparkR approxQuantile supports input multiple columns ## What changes were proposed in this pull request? SparkR ```approxQuantile``` supports input multiple columns. ## How was this patch

spark git commit: [SPARK-18080][ML][PYTHON] Python API & Examples for Locality Sensitive Hashing

2017-02-15 Thread yliang
Repository: spark Updated Branches: refs/heads/master 21b4ba2d6 -> 08c1972a0 [SPARK-18080][ML][PYTHON] Python API & Examples for Locality Sensitive Hashing ## What changes were proposed in this pull request? This pull request includes python API and examples for LSH. The API changes was

spark git commit: [SPARK-14272][ML] Add Loglikelihood in GaussianMixtureSummary

2017-01-19 Thread yliang
Repository: spark Updated Branches: refs/heads/master 2e6256002 -> 8ccca9170 [SPARK-14272][ML] Add Loglikelihood in GaussianMixtureSummary ## What changes were proposed in this pull request? add loglikelihood in GMM.summary ## How was this patch tested? added tests Author: Zheng RuiFeng

spark git commit: [SPARK-19066][SPARKR] SparkR LDA doesn't set optimizer correctly

2017-01-16 Thread yliang
Repository: spark Updated Branches: refs/heads/master e635cbb6e -> 12c8c2160 [SPARK-19066][SPARKR] SparkR LDA doesn't set optimizer correctly ## What changes were proposed in this pull request? spark.lda passes the optimizer "em" or "online" as a string to the backend. However, LDAWrapper

spark git commit: [MINOR][YARN] Move YarnSchedulerBackendSuite to resource-managers/yarn directory.

2017-01-17 Thread yliang
Repository: spark Updated Branches: refs/heads/master 18ee55dd5 -> 84f0b645b [MINOR][YARN] Move YarnSchedulerBackendSuite to resource-managers/yarn directory. ## What changes were proposed in this pull request? #16092 moves YARN resource manager related code to resource-managers/yarn

spark git commit: [SPARK-17141][ML] MinMaxScaler should retain NaN values.

2016-08-19 Thread yliang
Repository: spark Updated Branches: refs/heads/master 5377fc623 -> 864be9359 [SPARK-17141][ML] MinMaxScaler should retain NaN values. ## What changes were proposed in this pull request? In the existing code, ```MinMaxScaler``` handles ```NaN``` values indeterminately. * If a column has identity

spark git commit: [SPARK-15018][PYSPARK][ML] Improve handling of PySpark Pipeline when used without stages

2016-08-20 Thread yliang
Repository: spark Updated Branches: refs/heads/master 45d40d9f6 -> 39f328ba3 [SPARK-15018][PYSPARK][ML] Improve handling of PySpark Pipeline when used without stages ## What changes were proposed in this pull request? When fitting a PySpark Pipeline without the `stages` param set, a

spark git commit: [SPARK-16961][FOLLOW-UP][SPARKR] More robust test case for spark.gaussianMixture.

2016-08-21 Thread yliang
Repository: spark Updated Branches: refs/heads/master 61ef74f22 -> 7f08a60b6 [SPARK-16961][FOLLOW-UP][SPARKR] More robust test case for spark.gaussianMixture. ## What changes were proposed in this pull request? #14551 fixed an off-by-one bug in ```randomizeInPlace``` and some test failure

spark git commit: [MINOR][ML][DOC] Document default value for GeneralizedLinearRegression.linkPower

2017-02-25 Thread yliang
Repository: spark Updated Branches: refs/heads/master 410392ed7 -> 6ab60542e [MINOR][ML][DOC] Document default value for GeneralizedLinearRegression.linkPower Add Scaladoc for GeneralizedLinearRegression.linkPower default value Follow-up to https://github.com/apache/spark/pull/16344

spark git commit: [SPARK-19734][PYTHON][ML] Correct OneHotEncoder doc string to say dropLast

2017-03-01 Thread yliang
Repository: spark Updated Branches: refs/heads/master 3bd8ddf7c -> d2a879762 [SPARK-19734][PYTHON][ML] Correct OneHotEncoder doc string to say dropLast ## What changes were proposed in this pull request? Updates the doc string to match up with the code i.e. say dropLast instead of

spark git commit: [MINOR][ML] Fix comments in LSH Examples and Python API

2017-03-01 Thread yliang
Repository: spark Updated Branches: refs/heads/master de2b53df4 -> 3bd8ddf7c [MINOR][ML] Fix comments in LSH Examples and Python API ## What changes were proposed in this pull request? Remove `org.apache.spark.examples.` in the examples. Add a slash in one of the Python docs. ## How was this patch tested?

spark git commit: [SPARK-17090][FOLLOW-UP][ML] Add expert param support to SharedParamsCodeGen

2016-08-22 Thread yliang
Repository: spark Updated Branches: refs/heads/master 6d93f9e02 -> 37f0ab70d [SPARK-17090][FOLLOW-UP][ML] Add expert param support to SharedParamsCodeGen ## What changes were proposed in this pull request? Add expert param support to SharedParamsCodeGen, where aggregationDepth is an expert param

spark git commit: [SPARK-17197][ML][PYSPARK] PySpark LiR/LoR supports tree aggregation level configurable.

2016-08-25 Thread yliang
Repository: spark Updated Branches: refs/heads/master e0b20f9f2 -> 6b8cb1fe5 [SPARK-17197][ML][PYSPARK] PySpark LiR/LoR supports tree aggregation level configurable. ## What changes were proposed in this pull request? [SPARK-17090](https://issues.apache.org/jira/browse/SPARK-17090) makes

spark git commit: [MINOR][ML] Correct weights doc of MultilayerPerceptronClassificationModel.

2016-09-06 Thread yliang
Repository: spark Updated Branches: refs/heads/master 6f13aa7df -> 39d538ddd [MINOR][ML] Correct weights doc of MultilayerPerceptronClassificationModel. ## What changes were proposed in this pull request? ```weights``` of ```MultilayerPerceptronClassificationModel``` should be the output

spark git commit: [MINOR][ML][MLLIB] Remove workaround for breeze sparse matrix.

2016-09-04 Thread yliang
Repository: spark Updated Branches: refs/heads/master cdeb97a8c -> 1b001b520 [MINOR][ML][MLLIB] Remove workaround for breeze sparse matrix. ## What changes were proposed in this pull request? Since we have updated the breeze version to 0.12, we should remove the workaround for the bug of breeze sparse

spark git commit: [SPARK-17464][SPARKR][ML] SparkR spark.als argument reg should be 0.1 by default.

2016-09-09 Thread yliang
Repository: spark Updated Branches: refs/heads/master 65b814bf5 -> 2ed601217 [SPARK-17464][SPARKR][ML] SparkR spark.als argument reg should be 0.1 by default. ## What changes were proposed in this pull request? SparkR ```spark.als``` argument ```reg``` should be 0.1 by default, which needs

spark git commit: [SPARK-17456][CORE] Utility for parsing Spark versions

2016-09-09 Thread yliang
Repository: spark Updated Branches: refs/heads/master 92ce8d484 -> 65b814bf5 [SPARK-17456][CORE] Utility for parsing Spark versions ## What changes were proposed in this pull request? This patch adds methods for extracting major and minor versions as Int types in Scala from a Spark version

spark git commit: [SPARK-15509][FOLLOW-UP][ML][SPARKR] R MLlib algorithms should support input columns "features" and "label"

2016-09-10 Thread yliang
Repository: spark Updated Branches: refs/heads/master 1fec3ce4e -> bcdd259c3 [SPARK-15509][FOLLOW-UP][ML][SPARKR] R MLlib algorithms should support input columns "features" and "label" ## What changes were proposed in this pull request? #13584 resolved the issue of features and label columns

spark git commit: [MINOR][SPARKR] Add sparkr-vignettes.html to gitignore.

2016-09-24 Thread yliang
Repository: spark Updated Branches: refs/heads/master 248916f55 -> 7945daed1 [MINOR][SPARKR] Add sparkr-vignettes.html to gitignore. ## What changes were proposed in this pull request? Add ```sparkr-vignettes.html``` to ```.gitignore```. ## How was this patch tested? No need test. Author:

spark git commit: [SPARK-14077][ML] Refactor NaiveBayes to support weighted instances

2016-09-30 Thread yliang
Repository: spark Updated Branches: refs/heads/master 74ac1c438 -> 1fad55968 [SPARK-14077][ML] Refactor NaiveBayes to support weighted instances ## What changes were proposed in this pull request? 1. support weighted data; 2. use Dataset/DataFrame instead of RDD; 3. make mllib a wrapper to call

spark git commit: [MINOR][ML] Avoid 2D array flatten in NB training.

2016-10-06 Thread yliang
Repository: spark Updated Branches: refs/heads/master b678e465a -> 7aeb20be7 [MINOR][ML] Avoid 2D array flatten in NB training. ## What changes were proposed in this pull request? Avoid 2D array flatten in ```NaiveBayes``` training, since flatten method might be expensive (It will create

spark git commit: [SPARK-17744][ML] Parity check between the ml and mllib test suites for NB

2016-10-04 Thread yliang
Repository: spark Updated Branches: refs/heads/master 7d5160883 -> c17f97183 [SPARK-17744][ML] Parity check between the ml and mllib test suites for NB ## What changes were proposed in this pull request? 1. parity check and add missing test suites for ml's NB; 2. remove some unused imports ##

spark git commit: [SPARK-17792][ML] L-BFGS solver for linear regression does not accept general numeric label column types

2016-10-06 Thread yliang
Repository: spark Updated Branches: refs/heads/master 49d11d499 -> 3713bb199 [SPARK-17792][ML] L-BFGS solver for linear regression does not accept general numeric label column types ## What changes were proposed in this pull request? Before, we computed `instances` in LinearRegression in

spark git commit: [SPARK-17792][ML] L-BFGS solver for linear regression does not accept general numeric label column types

2016-10-06 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.0 b1a9c41e8 -> 594a2cf6f [SPARK-17792][ML] L-BFGS solver for linear regression does not accept general numeric label column types ## What changes were proposed in this pull request? Before, we computed `instances` in LinearRegression

spark git commit: [SPARK-17585][PYSPARK][CORE] PySpark SparkContext.addFile supports adding files recursively

2016-09-21 Thread yliang
Repository: spark Updated Branches: refs/heads/master 61876a427 -> d3b886976 [SPARK-17585][PYSPARK][CORE] PySpark SparkContext.addFile supports adding files recursively ## What changes were proposed in this pull request? Users would like to add a directory as a dependency in some cases; they

spark git commit: [SPARK-17315][FOLLOW-UP][SPARKR][ML] Fix print of Kolmogorov-Smirnov test summary

2016-09-21 Thread yliang
Repository: spark Updated Branches: refs/heads/master c133907c5 -> 6902edab7 [SPARK-17315][FOLLOW-UP][SPARKR][ML] Fix print of Kolmogorov-Smirnov test summary ## What changes were proposed in this pull request? #14881 added Kolmogorov-Smirnov Test wrapper to SparkR. I found that

spark git commit: [SPARK-17281][ML][MLLIB] Add treeAggregateDepth parameter for AFTSurvivalRegression

2016-09-22 Thread yliang
Repository: spark Updated Branches: refs/heads/master 646f38346 -> 72d9fba26 [SPARK-17281][ML][MLLIB] Add treeAggregateDepth parameter for AFTSurvivalRegression ## What changes were proposed in this pull request? Add treeAggregateDepth parameter for AFTSurvivalRegression to keep consistent

spark git commit: [MINOR][DOC] Fix wrong ml.feature.Normalizer document.

2016-08-24 Thread yliang
Repository: spark Updated Branches: refs/heads/master 92c0eaf34 -> 45b786aca [MINOR][DOC] Fix wrong ml.feature.Normalizer document. ## What changes were proposed in this pull request? The ```ml.feature.Normalizer``` examples illustrate L1 norm rather than L2, we should correct corresponding

spark git commit: [SPARK-16356][FOLLOW-UP][ML] Enforce ML test of exception for local/distributed Dataset.

2016-09-29 Thread yliang
Repository: spark Updated Branches: refs/heads/master 37eb9184f -> a19a1bb59 [SPARK-16356][FOLLOW-UP][ML] Enforce ML test of exception for local/distributed Dataset. ## What changes were proposed in this pull request? #14035 added ```testImplicits``` to ML unit tests and promoted

[2/2] spark git commit: [SPARK-16356][ML] Add testImplicits for ML unit tests and promote toDF()

2016-09-26 Thread yliang
[SPARK-16356][ML] Add testImplicits for ML unit tests and promote toDF() ## What changes were proposed in this pull request? This was suggested in https://github.com/apache/spark/commit/101663f1ae222a919fc40510aa4f2bad22d1be6f#commitcomment-17114968. This PR adds `testImplicits` to

[1/2] spark git commit: [SPARK-16356][ML] Add testImplicits for ML unit tests and promote toDF()

2016-09-26 Thread yliang
Repository: spark Updated Branches: refs/heads/master 50b89d05b -> f234b7cd7 http://git-wip-us.apache.org/repos/asf/spark/blob/f234b7cd/mllib/src/test/scala/org/apache/spark/ml/feature/StringIndexerSuite.scala -- diff --git

spark git commit: [SPARK-17577][FOLLOW-UP][SPARKR] SparkR spark.addFile supports adding directory recursively

2016-09-26 Thread yliang
Repository: spark Updated Branches: refs/heads/master 00be16df6 -> 93c743f1a [SPARK-17577][FOLLOW-UP][SPARKR] SparkR spark.addFile supports adding directory recursively ## What changes were proposed in this pull request? #15140 exposed ```JavaSparkContext.addFile(path: String, recursive:

spark git commit: [SPARK-17138][ML][MLLIB] Add Python API for multinomial logistic regression

2016-09-27 Thread yliang
Repository: spark Updated Branches: refs/heads/master 85b0a1575 -> 7f16affa2 [SPARK-17138][ML][MLLIB] Add Python API for multinomial logistic regression ## What changes were proposed in this pull request? Add Python API for multinomial logistic regression. - add `family` param in python api.

spark git commit: [SPARK-17704][ML][MLLIB] ChiSqSelector performance improvement.

2016-09-29 Thread yliang
Repository: spark Updated Branches: refs/heads/master a19a1bb59 -> f7082ac12 [SPARK-17704][ML][MLLIB] ChiSqSelector performance improvement. ## What changes were proposed in this pull request? Several performance improvements for ```ChiSqSelector```: 1. Keep ```selectedFeatures``` ordered

spark git commit: [SPARK-14077][ML][FOLLOW-UP] Revert change for NB Model's Load to maintain compatibility with the model stored before 2.0

2016-09-30 Thread yliang
Repository: spark Updated Branches: refs/heads/master 1fad55968 -> 8e491af52 [SPARK-14077][ML][FOLLOW-UP] Revert change for NB Model's Load to maintain compatibility with the model stored before 2.0 ## What changes were proposed in this pull request? Revert change for NB Model's Load to

spark git commit: [SPARK-17748][FOLLOW-UP][ML] Reorg variables of WeightedLeastSquares.

2016-10-26 Thread yliang
Repository: spark Updated Branches: refs/heads/master 4bee95407 -> 312ea3f7f [SPARK-17748][FOLLOW-UP][ML] Reorg variables of WeightedLeastSquares. ## What changes were proposed in this pull request? This is follow-up work of #15394. Reorg some variables of ```WeightedLeastSquares``` and fix

spark git commit: [SPARK-18291][SPARKR][ML] SparkR glm predict should output original label when family = binomial.

2016-11-07 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.1 df40ee2b4 -> 6b332909f [SPARK-18291][SPARKR][ML] SparkR glm predict should output original label when family = binomial. ## What changes were proposed in this pull request? SparkR ```spark.glm``` predict should output original label

spark git commit: [SPARK-18276][ML] ML models should copy the training summary and set parent

2016-11-05 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.1 e9f1d4aaa -> c42301f1e [SPARK-18276][ML] ML models should copy the training summary and set parent ## What changes were proposed in this pull request? Only some of the models which contain a training summary currently set the

spark git commit: [SPARK-18276][ML] ML models should copy the training summary and set parent

2016-11-05 Thread yliang
Repository: spark Updated Branches: refs/heads/master 15d392688 -> 23ce0d1e9 [SPARK-18276][ML] ML models should copy the training summary and set parent ## What changes were proposed in this pull request? Only some of the models which contain a training summary currently set the summaries

spark git commit: [SPARK-18210][ML] Pipeline.copy does not create an instance with the same UID

2016-11-06 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.1 dcbf3fd4b -> d2f2cf68a [SPARK-18210][ML] Pipeline.copy does not create an instance with the same UID ## What changes were proposed in this pull request? Motivation: `org.apache.spark.ml.Pipeline.copy(extra: ParamMap)` does not create

spark git commit: [SPARK-18210][ML] Pipeline.copy does not create an instance with the same UID

2016-11-06 Thread yliang
Repository: spark Updated Branches: refs/heads/master 340f09d10 -> b89d0556d [SPARK-18210][ML] Pipeline.copy does not create an instance with the same UID ## What changes were proposed in this pull request? Motivation: `org.apache.spark.ml.Pipeline.copy(extra: ParamMap)` does not create an

spark git commit: [SPARK-18291][SPARKR][ML] SparkR glm predict should output original label when family = binomial.

2016-11-07 Thread yliang
Repository: spark Updated Branches: refs/heads/master a814eeac6 -> daa975f4b [SPARK-18291][SPARKR][ML] SparkR glm predict should output original label when family = binomial. ## What changes were proposed in this pull request? SparkR ```spark.glm``` predict should output original label when

spark git commit: [SPARK-18401][SPARKR][ML] SparkR random forest should support output original label.

2016-11-10 Thread yliang
Repository: spark Updated Branches: refs/heads/master a3356343c -> 5ddf69470 [SPARK-18401][SPARKR][ML] SparkR random forest should support output original label. ## What changes were proposed in this pull request? SparkR ```spark.randomForest``` classification prediction should output

spark git commit: [SPARK-18401][SPARKR][ML] SparkR random forest should support output original label.

2016-11-10 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.1 064d4315f -> 51dca6143 [SPARK-18401][SPARKR][ML] SparkR random forest should support output original label. ## What changes were proposed in this pull request? SparkR ```spark.randomForest``` classification prediction should output

spark git commit: [SPARK-14634][ML] Add BisectingKMeansSummary

2016-10-14 Thread yliang
Repository: spark Updated Branches: refs/heads/master 1db8feab8 -> a1b136d05 [SPARK-14634][ML] Add BisectingKMeansSummary ## What changes were proposed in this pull request? Add BisectingKMeansSummary ## How was this patch tested? unit test Author: Zheng RuiFeng

spark git commit: [SPARK-15402][ML][PYSPARK] PySpark ml.evaluation should support save/load

2016-10-14 Thread yliang
Repository: spark Updated Branches: refs/heads/master 2fb12b0a3 -> 1db8feab8 [SPARK-15402][ML][PYSPARK] PySpark ml.evaluation should support save/load ## What changes were proposed in this pull request? Since ```ml.evaluation``` has supported save/load at Scala side, supporting it at Python

spark git commit: [SPARK-17748][ML] One pass solver for Weighted Least Squares with ElasticNet

2016-10-25 Thread yliang
Repository: spark Updated Branches: refs/heads/master 483c37c58 -> 78d740a08 [SPARK-17748][ML] One pass solver for Weighted Least Squares with ElasticNet ## What changes were proposed in this pull request? 1. Make a pluggable solver interface for `WeightedLeastSquares` 2. Add a `QuasiNewton`

spark git commit: [SPARK-14634][ML][FOLLOWUP] Delete superfluous line in BisectingKMeans

2016-10-25 Thread yliang
Repository: spark Updated Branches: refs/heads/master 6f31833db -> 38cdd6ccd [SPARK-14634][ML][FOLLOWUP] Delete superfluous line in BisectingKMeans ## What changes were proposed in this pull request? As commented by jkbradley in https://github.com/apache/spark/pull/12394,

spark git commit: [SPARK-17748][FOLLOW-UP][ML] Fix build error for Scala 2.10.

2016-10-25 Thread yliang
Repository: spark Updated Branches: refs/heads/master 38cdd6ccd -> ac8ff920f [SPARK-17748][FOLLOW-UP][ML] Fix build error for Scala 2.10. ## What changes were proposed in this pull request? #15394 introduced build error for Scala 2.10, this PR fix it. ## How was this patch tested? Existing

spark git commit: [SPARK-17986][ML] SQLTransformer should remove temporary tables

2016-10-22 Thread yliang
Repository: spark Updated Branches: refs/heads/master 01b26a064 -> ab3363e9f [SPARK-17986][ML] SQLTransformer should remove temporary tables ## What changes were proposed in this pull request? A call to the method `SQLTransformer.transform` previously would create a temporary table and

spark git commit: [SPARK-17986][ML] SQLTransformer should remove temporary tables

2016-10-22 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.0 a0c03c925 -> b959dab32 [SPARK-17986][ML] SQLTransformer should remove temporary tables ## What changes were proposed in this pull request? A call to the method `SQLTransformer.transform` previously would create a temporary table and

spark git commit: [SPARK-14077][ML][FOLLOW-UP] Minor refactor and cleanup for NaiveBayes

2016-11-12 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.1 893355143 -> b2ba83d10 [SPARK-14077][ML][FOLLOW-UP] Minor refactor and cleanup for NaiveBayes ## What changes were proposed in this pull request? * Refactor out ```trainWithLabelCheck``` and make ```mllib.NaiveBayes``` call into it. *

spark git commit: [SPARK-14077][ML][FOLLOW-UP] Minor refactor and cleanup for NaiveBayes

2016-11-12 Thread yliang
Repository: spark Updated Branches: refs/heads/master bc41d997e -> 22cb3a060 [SPARK-14077][ML][FOLLOW-UP] Minor refactor and cleanup for NaiveBayes ## What changes were proposed in this pull request? * Refactor out ```trainWithLabelCheck``` and make ```mllib.NaiveBayes``` call into it. *

spark git commit: [SPARK-18501][ML][SPARKR] Fix spark.glm errors when fitting on collinear data

2016-11-22 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.1 3be2d1e0b -> fc5fee83e [SPARK-18501][ML][SPARKR] Fix spark.glm errors when fitting on collinear data ## What changes were proposed in this pull request? * Fix SparkR ```spark.glm``` errors when fitting on collinear data, since

spark git commit: [SPARK-18501][ML][SPARKR] Fix spark.glm errors when fitting on collinear data

2016-11-22 Thread yliang
Repository: spark Updated Branches: refs/heads/master d0212eb0f -> 982b82e32 [SPARK-18501][ML][SPARKR] Fix spark.glm errors when fitting on collinear data ## What changes were proposed in this pull request? * Fix SparkR ```spark.glm``` errors when fitting on collinear data, since ```standard

spark git commit: [SPARK-18444][SPARKR] SparkR running in yarn-cluster mode should not download Spark package.

2016-11-22 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.1 aaa2a173a -> c70214075 [SPARK-18444][SPARKR] SparkR running in yarn-cluster mode should not download Spark package. ## What changes were proposed in this pull request? When running SparkR job in yarn-cluster mode, it will download

spark git commit: [SPARK-18444][SPARKR] SparkR running in yarn-cluster mode should not download Spark package.

2016-11-22 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.0 9dad3a7b0 -> a37238b06 [SPARK-18444][SPARKR] SparkR running in yarn-cluster mode should not download Spark package. ## What changes were proposed in this pull request? When running SparkR job in yarn-cluster mode, it will download

spark git commit: [SPARK-18520][ML] Add missing setXXXCol methods for BisectingKMeansModel and GaussianMixtureModel

2016-11-24 Thread yliang
Repository: spark Updated Branches: refs/heads/master 223fa218e -> 2dfabec38 [SPARK-18520][ML] Add missing setXXXCol methods for BisectingKMeansModel and GaussianMixtureModel ## What changes were proposed in this pull request? add `setFeaturesCol` and `setPredictionCol` for BiKModel and

spark git commit: [SPARK-18520][ML] Add missing setXXXCol methods for BisectingKMeansModel and GaussianMixtureModel

2016-11-24 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.1 27d81d000 -> 04ec74f12 [SPARK-18520][ML] Add missing setXXXCol methods for BisectingKMeansModel and GaussianMixtureModel ## What changes were proposed in this pull request? add `setFeaturesCol` and `setPredictionCol` for BiKModel and

spark git commit: [SPARK-18444][SPARKR] SparkR running in yarn-cluster mode should not download Spark package.

2016-11-22 Thread yliang
Repository: spark Updated Branches: refs/heads/master ebeb0830a -> acb971577 [SPARK-18444][SPARKR] SparkR running in yarn-cluster mode should not download Spark package. ## What changes were proposed in this pull request? When running SparkR job in yarn-cluster mode, it will download Spark

spark git commit: [SPARK-18438][SPARKR][ML] spark.mlp should support RFormula.

2016-11-16 Thread yliang
Repository: spark Updated Branches: refs/heads/master 4ac9759f8 -> 95eb06bd7 [SPARK-18438][SPARKR][ML] spark.mlp should support RFormula. ## What changes were proposed in this pull request? ```spark.mlp``` should support ```RFormula``` like other ML algorithm wrappers. BTW, I did some cleanup

spark git commit: [SPARK-18438][SPARKR][ML] spark.mlp should support RFormula.

2016-11-16 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.1 436ae201f -> 7b57e480d [SPARK-18438][SPARKR][ML] spark.mlp should support RFormula. ## What changes were proposed in this pull request? ```spark.mlp``` should support ```RFormula``` like other ML algorithm wrappers. BTW, I did some

spark git commit: [SPARK-18434][ML] Add missing ParamValidations for ML algos

2016-11-16 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.1 820847008 -> 6b6eb4e52 [SPARK-18434][ML] Add missing ParamValidations for ML algos ## What changes were proposed in this pull request? Add missing ParamValidations for ML algos ## How was this patch tested? existing tests Author:

spark git commit: [SPARK-18434][ML] Add missing ParamValidations for ML algos

2016-11-16 Thread yliang
Repository: spark Updated Branches: refs/heads/master 241e04bc0 -> c68f1a38a [SPARK-18434][ML] Add missing ParamValidations for ML algos ## What changes were proposed in this pull request? Add missing ParamValidations for ML algos ## How was this patch tested? existing tests Author: Zheng

spark git commit: [SPARK-18412][SPARKR][ML] Fix exception for some SparkR ML algorithms training on libsvm data

2016-11-13 Thread yliang
Repository: spark Updated Branches: refs/heads/master b91a51bb2 -> 07be232ea [SPARK-18412][SPARKR][ML] Fix exception for some SparkR ML algorithms training on libsvm data ## What changes were proposed in this pull request? * Fix the following exceptions which throws when

spark git commit: [SPARK-18412][SPARKR][ML] Fix exception for some SparkR ML algorithms training on libsvm data

2016-11-13 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.1 0c69224ed -> 8fc6455c0 [SPARK-18412][SPARKR][ML] Fix exception for some SparkR ML algorithms training on libsvm data ## What changes were proposed in this pull request? * Fix the following exceptions which throws when

spark git commit: [SPARK-18282][ML][PYSPARK] Add python clustering summaries for GMM and BKM

2016-11-21 Thread yliang
Repository: spark Updated Branches: refs/heads/master 658547974 -> e811fbf9e [SPARK-18282][ML][PYSPARK] Add python clustering summaries for GMM and BKM ## What changes were proposed in this pull request? Add model summary APIs for `GaussianMixtureModel` and `BisectingKMeansModel` in

spark git commit: [SPARK-18282][ML][PYSPARK] Add python clustering summaries for GMM and BKM

2016-11-21 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.1 fb4e6359d -> 31002e4a7 [SPARK-18282][ML][PYSPARK] Add python clustering summaries for GMM and BKM ## What changes were proposed in this pull request? Add model summary APIs for `GaussianMixtureModel` and `BisectingKMeansModel` in

spark git commit: [SPARK-18109][ML] Add instrumentation to GMM

2016-10-28 Thread yliang
Repository: spark Updated Branches: refs/heads/master ab5f938bc -> 569788a55 [SPARK-18109][ML] Add instrumentation to GMM ## What changes were proposed in this pull request? Add instrumentation to GMM ## How was this patch tested? Test in spark-shell Author: Zheng RuiFeng

spark git commit: [SPARK-18133][EXAMPLES][ML] Python ML Pipeline Example has syntax e…

2016-10-28 Thread yliang
Repository: spark Updated Branches: refs/heads/master 569788a55 -> e9746f87d [SPARK-18133][EXAMPLES][ML] Python ML Pipeline Example has syntax e… ## What changes were proposed in this pull request? In Python 3, there is only one integer type (i.e., int), which mostly behaves like the long

spark git commit: [SPARK-18177][ML][PYSPARK] Add missing 'subsamplingRate' of pyspark GBTClassifier

2016-11-03 Thread yliang
Repository: spark Updated Branches: refs/heads/master 0ea5d5b24 -> 9dc9f9a5d [SPARK-18177][ML][PYSPARK] Add missing 'subsamplingRate' of pyspark GBTClassifier ## What changes were proposed in this pull request? Add missing 'subsamplingRate' of pyspark GBTClassifier ## How was this patch

spark git commit: [SPARK-18177][ML][PYSPARK] Add missing 'subsamplingRate' of pyspark GBTClassifier

2016-11-03 Thread yliang
Repository: spark Updated Branches: refs/heads/branch-2.1 71104c9c9 -> 99891e56e [SPARK-18177][ML][PYSPARK] Add missing 'subsamplingRate' of pyspark GBTClassifier ## What changes were proposed in this pull request? Add missing 'subsamplingRate' of pyspark GBTClassifier ## How was this patch
