GitHub user JoshRosen commented on the pull request:

    https://github.com/apache/spark/pull/11774#issuecomment-197619692
  
    Here were the errors that I ignored (we should follow up on these later):
    
    ```
    [error]  * method initializeLogIfNecessary(Boolean)Unit in trait org.apache.spark.Logging is present only in current version
    [error]    filter with: ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.Logging.initializeLogIfNecessary")
    [error]  * deprecated method lookupTimeout(org.apache.spark.SparkConf)scala.concurrent.duration.FiniteDuration in object org.apache.spark.util.RpcUtils does not have a correspondent in current version
    [error]    filter with: ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.util.RpcUtils.lookupTimeout")
    [error]  * deprecated method askTimeout(org.apache.spark.SparkConf)scala.concurrent.duration.FiniteDuration in object org.apache.spark.util.RpcUtils does not have a correspondent in current version
    [error]    filter with: ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.util.RpcUtils.askTimeout")
    [error]  * method logEvent()Boolean in trait org.apache.spark.scheduler.SparkListenerEvent is present only in current version
    [error]    filter with: ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.scheduler.SparkListenerEvent.logEvent")
    [info] spark-mllib: found 4 potential binary incompatibilities while checking against org.apache.spark:spark-mllib_2.11:1.6.0  (filtered 151)
    [error]  * method transform(org.apache.spark.sql.DataFrame)org.apache.spark.sql.DataFrame in class org.apache.spark.ml.UnaryTransformer's type is different in current version, where it is (org.apache.spark.sql.Dataset)org.apache.spark.sql.Dataset instead of (org.apache.spark.sql.DataFrame)org.apache.spark.sql.DataFrame
    [error]    filter with: ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.UnaryTransformer.transform")
    [error]  * method train(org.apache.spark.sql.DataFrame)org.apache.spark.ml.classification.DecisionTreeClassificationModel in class org.apache.spark.ml.classification.DecisionTreeClassifier's type is different in current version, where it is (org.apache.spark.sql.Dataset)org.apache.spark.ml.PredictionModel instead of (org.apache.spark.sql.DataFrame)org.apache.spark.ml.classification.DecisionTreeClassificationModel
    [error]    filter with: ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.classification.DecisionTreeClassifier.train")
    [error]  * method train(org.apache.spark.sql.DataFrame)org.apache.spark.ml.classification.LogisticRegressionModel in class org.apache.spark.ml.classification.LogisticRegression's type is different in current version, where it is (org.apache.spark.sql.Dataset)org.apache.spark.ml.PredictionModel instead of (org.apache.spark.sql.DataFrame)org.apache.spark.ml.classification.LogisticRegressionModel
    [error]    filter with: ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.classification.LogisticRegression.train")
    [error]  * method train(org.apache.spark.sql.DataFrame)org.apache.spark.ml.regression.DecisionTreeRegressionModel in class org.apache.spark.ml.regression.DecisionTreeRegressor's type is different in current version, where it is (org.apache.spark.sql.Dataset)org.apache.spark.ml.PredictionModel instead of (org.apache.spark.sql.DataFrame)org.apache.spark.ml.regression.DecisionTreeRegressionModel
    [error]    filter with: ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.regression.DecisionTreeRegressor.train")
    [info] spark-sql: found 10 potential binary incompatibilities while checking against org.apache.spark:spark-sql_2.11:1.6.0  (filtered 658)
    [error]  * method toDF()org.apache.spark.sql.DataFrame in class org.apache.spark.sql.Dataset has a different result type in current version, where it is org.apache.spark.sql.Dataset rather than org.apache.spark.sql.DataFrame
    [error]    filter with: ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.Dataset.toDF")
    [error]  * method groupBy(org.apache.spark.api.java.function.MapFunction,org.apache.spark.sql.Encoder)org.apache.spark.sql.GroupedDataset in class org.apache.spark.sql.Dataset in current version does not have a correspondent with same parameter signature among (java.lang.String,scala.collection.Seq)org.apache.spark.sql.GroupedData, (java.lang.String,Array[java.lang.String])org.apache.spark.sql.GroupedData
    [error]    filter with: ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.sql.Dataset.groupBy")
    [error]  * method groupBy(scala.collection.Seq)org.apache.spark.sql.GroupedDataset in class org.apache.spark.sql.Dataset has a different result type in current version, where it is org.apache.spark.sql.GroupedData rather than org.apache.spark.sql.GroupedDataset
    [error]    filter with: ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.Dataset.groupBy")
    [error]  * method groupBy(scala.Function1,org.apache.spark.sql.Encoder)org.apache.spark.sql.GroupedDataset in class org.apache.spark.sql.Dataset in current version does not have a correspondent with same parameter signature among (java.lang.String,scala.collection.Seq)org.apache.spark.sql.GroupedData, (java.lang.String,Array[java.lang.String])org.apache.spark.sql.GroupedData
    [error]    filter with: ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.sql.Dataset.groupBy")
    [error]  * method groupBy(Array[org.apache.spark.sql.Column])org.apache.spark.sql.GroupedDataset in class org.apache.spark.sql.Dataset has a different result type in current version, where it is org.apache.spark.sql.GroupedData rather than org.apache.spark.sql.GroupedDataset
    [error]    filter with: ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.Dataset.groupBy")
    [error]  * method select(scala.collection.Seq)org.apache.spark.sql.DataFrame in class org.apache.spark.sql.Dataset has a different result type in current version, where it is org.apache.spark.sql.Dataset rather than org.apache.spark.sql.DataFrame
    [error]    filter with: ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.Dataset.select")
    [error]  * method select(Array[org.apache.spark.sql.Column])org.apache.spark.sql.DataFrame in class org.apache.spark.sql.Dataset has a different result type in current version, where it is org.apache.spark.sql.Dataset rather than org.apache.spark.sql.DataFrame
    [error]    filter with: ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.Dataset.select")
    [error]  * method toDS()org.apache.spark.sql.Dataset in class org.apache.spark.sql.Dataset does not have a correspondent in current version
    [error]    filter with: ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.Dataset.toDS")
    [error]  * abstract method newInstance(java.lang.String,org.apache.spark.sql.types.StructType,org.apache.hadoop.mapreduce.TaskAttemptContext)org.apache.spark.sql.sources.OutputWriter in class org.apache.spark.sql.sources.OutputWriterFactory does not have a correspondent in current version
    [error]    filter with: ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.sql.sources.OutputWriterFactory.newInstance")
    [error]  * abstract method newInstance(java.lang.String,scala.Option,org.apache.spark.sql.types.StructType,org.apache.hadoop.mapreduce.TaskAttemptContext)org.apache.spark.sql.sources.OutputWriter in class org.apache.spark.sql.sources.OutputWriterFactory is present only in current version
    [error]    filter with: ProblemFilters.exclude[ReversedMissingMethodProblem]("org.apache.spark.sql.sources.OutputWriterFactory.newInstance")
    ```
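
    Each `filter with:` line above is a ready-made MiMa exclusion; in Spark these are normally collected in `project/MimaExcludes.scala`. A minimal sketch of how this batch could be recorded there follows — the surrounding object and value names are illustrative, not the exact patch that was (or should be) merged:

    ```scala
    // Sketch only: groups the exclusions suggested by the MiMa output above.
    // The object/value names mirror the convention in project/MimaExcludes.scala
    // but are assumptions, not the actual committed change.
    import com.typesafe.tools.mima.core._
    import com.typesafe.tools.mima.core.ProblemFilters._

    object FollowUpExcludes {
      lazy val excludes: Seq[ProblemFilter] = Seq(
        // spark-core
        exclude[ReversedMissingMethodProblem]("org.apache.spark.Logging.initializeLogIfNecessary"),
        exclude[DirectMissingMethodProblem]("org.apache.spark.util.RpcUtils.lookupTimeout"),
        exclude[DirectMissingMethodProblem]("org.apache.spark.util.RpcUtils.askTimeout"),
        exclude[ReversedMissingMethodProblem]("org.apache.spark.scheduler.SparkListenerEvent.logEvent"),
        // spark-mllib: DataFrame -> Dataset signature changes
        exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.UnaryTransformer.transform"),
        exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.classification.DecisionTreeClassifier.train"),
        exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.classification.LogisticRegression.train"),
        exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.regression.DecisionTreeRegressor.train"),
        // spark-sql: Dataset/DataFrame unification fallout
        exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.Dataset.toDF"),
        exclude[IncompatibleMethTypeProblem]("org.apache.spark.sql.Dataset.groupBy"),
        exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.Dataset.groupBy"),
        exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.Dataset.select"),
        exclude[DirectMissingMethodProblem]("org.apache.spark.sql.Dataset.toDS"),
        exclude[DirectMissingMethodProblem]("org.apache.spark.sql.sources.OutputWriterFactory.newInstance"),
        exclude[ReversedMissingMethodProblem]("org.apache.spark.sql.sources.OutputWriterFactory.newInstance")
      )
    }
    ```

    Note that MiMa filters match on the fully qualified member name, so a single `exclude[IncompatibleMethTypeProblem]("org.apache.spark.sql.Dataset.groupBy")` covers all the `groupBy` overloads reported with that problem type; the duplicated filter suggestions in the log collapse accordingly.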

