[ https://issues.apache.org/jira/browse/SPARK-13959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15280013#comment-15280013 ]

Nick Pentreath edited comment on SPARK-13959 at 5/11/16 11:56 AM:
------------------------------------------------------------------

All potential {{ML}} binary-compatibility errors relate to the {{DataFrame}} -> {{Dataset}} change:
{code}
[info] spark-mllib: found 4 potential binary incompatibilities while checking 
against org.apache.spark:spark-mllib_2.11:1.6.0  (filtered 151)
[error]  * method 
transform(org.apache.spark.sql.DataFrame)org.apache.spark.sql.DataFrame in 
class org.apache.spark.ml.UnaryTransformer's type is different in current 
version, where it is (org.apache.spark.sql.Dataset)org.apache.spark.sql.Dataset 
instead of (org.apache.spark.sql.DataFrame)org.apache.spark.sql.DataFrame
[error]    filter with: 
ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.UnaryTransformer.transform")
[error]  * method 
train(org.apache.spark.sql.DataFrame)org.apache.spark.ml.classification.DecisionTreeClassificationModel
 in class org.apache.spark.ml.classification.DecisionTreeClassifier's type is 
different in current version, where it is 
(org.apache.spark.sql.Dataset)org.apache.spark.ml.PredictionModel instead of 
(org.apache.spark.sql.DataFrame)org.apache.spark.ml.classification.DecisionTreeClassificationModel
[error]    filter with: 
ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.classification.DecisionTreeClassifier.train")
[error]  * method 
train(org.apache.spark.sql.DataFrame)org.apache.spark.ml.classification.LogisticRegressionModel
 in class org.apache.spark.ml.classification.LogisticRegression's type is 
different in current version, where it is 
(org.apache.spark.sql.Dataset)org.apache.spark.ml.PredictionModel instead of 
(org.apache.spark.sql.DataFrame)org.apache.spark.ml.classification.LogisticRegressionModel
[error]    filter with: 
ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.classification.LogisticRegression.train")
[error]  * method 
train(org.apache.spark.sql.DataFrame)org.apache.spark.ml.regression.DecisionTreeRegressionModel
 in class org.apache.spark.ml.regression.DecisionTreeRegressor's type is 
different in current version, where it is 
(org.apache.spark.sql.Dataset)org.apache.spark.ml.PredictionModel instead of 
(org.apache.spark.sql.DataFrame)org.apache.spark.ml.regression.DecisionTreeRegressionModel
[error]    filter with: 
ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.regression.DecisionTreeRegressor.train")
{code}
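For reference, filters like the ones suggested above are registered in the Spark build's {{project/MimaExcludes.scala}}. A rough sketch of the shape of such an entry follows; the surrounding version-matching structure is from memory of the Spark build and may differ in detail, so treat it as illustrative rather than the actual file contents:

```scala
// Sketch of registering the suggested MiMa filters in
// project/MimaExcludes.scala (structure is approximate).
import com.typesafe.tools.mima.core._

object MimaExcludes {
  def excludes(version: String): Seq[Problem => Boolean] = version match {
    case v if v.startsWith("2.0") =>
      Seq(
        // SPARK-13948: DataFrame -> Dataset signature changes in ML
        ProblemFilters.exclude[IncompatibleMethTypeProblem](
          "org.apache.spark.ml.UnaryTransformer.transform"),
        ProblemFilters.exclude[IncompatibleMethTypeProblem](
          "org.apache.spark.ml.classification.DecisionTreeClassifier.train"),
        ProblemFilters.exclude[IncompatibleMethTypeProblem](
          "org.apache.spark.ml.classification.LogisticRegression.train"),
        ProblemFilters.exclude[IncompatibleMethTypeProblem](
          "org.apache.spark.ml.regression.DecisionTreeRegressor.train")
      )
    case _ => Seq()
  }
}
```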

It is odd that the errors above for {{DecisionTreeClassifier.train}}, {{DecisionTreeRegressor.train}} and {{LogisticRegression.train}} report a return type of {{PredictionModel}} in the "current version". In any case, I verified that the return types of those methods have not changed.
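A plausible explanation for the {{PredictionModel}} return type in the report is type erasure: when a subclass narrows a generic return type, the compiler emits a synthetic bridge method whose signature uses the erased supertype, and a bytecode-level checker such as MiMa sees that erased signature. The following is my own minimal self-contained demo of the mechanism (not Spark code; the class names merely mirror the ones in the report):

```java
import java.lang.reflect.Method;
import java.util.Arrays;

// Toy stand-ins for the Spark hierarchy in the MiMa report.
abstract class PredictionModel {}
class TreeModel extends PredictionModel {}

abstract class Predictor<M extends PredictionModel> {
    abstract M train(String data);
}

class TreePredictor extends Predictor<TreeModel> {
    // Narrowed return type: the compiler also emits a bridge method
    // `PredictionModel train(String)` into TreePredictor's bytecode.
    @Override
    TreeModel train(String data) { return new TreeModel(); }
}

public class ErasureDemo {
    public static void main(String[] args) {
        // Both the declared method and the compiler-generated bridge
        // are present; the bridge's return type is the erased supertype.
        String[] returns = Arrays.stream(TreePredictor.class.getDeclaredMethods())
            .filter(m -> m.getName().equals("train"))
            .map(m -> m.getReturnType().getSimpleName())
            .sorted()
            .toArray(String[]::new);
        System.out.println(String.join(", ", returns));
        // prints: PredictionModel, TreeModel
    }
}
```

So a tool reading raw class files can legitimately print the erased {{PredictionModel}} signature even though the source-level return types are unchanged.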


> Audit MiMa excludes added in SPARK-13948 to make sure none are unintended 
> incompatibilities
> -------------------------------------------------------------------------------------------
>
>                 Key: SPARK-13959
>                 URL: https://issues.apache.org/jira/browse/SPARK-13959
>             Project: Spark
>          Issue Type: Task
>          Components: Build
>            Reporter: Josh Rosen
>            Priority: Critical
>
> The patch for SPARK-13948 added a number of MiMa excludes for cases which 
> were missed due to our old way of programatically generating excludes in 
> GenerateMIMAIgnore.
> Before Spark 2.0, we need to audit the additional excludes that I added to 
> make sure that none represent unintentional incompatibilities which should be 
> fixed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
