Github user JoshRosen commented on the pull request:

    https://github.com/apache/spark/pull/4011#issuecomment-69673239
  
    This fixed the PySpark unit tests, but the build is still failing due to a MiMa issue (which we hadn't noticed because the PySpark failures were preventing those checks from running):
    
    ```
    [error]  * method localAccums()scala.collection.mutable.Map in object org.apache.spark.Accumulators has now a different result type; was: scala.collection.mutable.Map, is now: java.lang.ThreadLocal
    [error]    filter with: ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.Accumulators.localAccums")
    [info] spark-streaming-zeromq: found 0 potential binary incompatibilities
    [info] spark-streaming-kafka: found 0 potential binary incompatibilities (filtered 3)
    [info] spark-streaming-mqtt: found 0 potential binary incompatibilities
    [info] spark-streaming-twitter: found 0 potential binary incompatibilities
    [info] spark-streaming-flume: found 0 potential binary incompatibilities (filtered 2)
    [info] spark-streaming: found 0 potential binary incompatibilities (filtered 5)
    [info] spark-mllib: found 0 potential binary incompatibilities (filtered 45)
    java.lang.RuntimeException: spark-core: Binary compatibility check failed!
        at scala.sys.package$.error(package.scala:27)
        at com.typesafe.tools.mima.plugin.SbtMima$.reportErrors(SbtMima.scala:64)
        at com.typesafe.tools.mima.plugin.MimaPlugin$$anonfun$mimaReportSettings$3.apply(MimaPlugin.scala:23)
        at com.typesafe.tools.mima.plugin.MimaPlugin$$anonfun$mimaReportSettings$3.apply(MimaPlugin.scala:23)
        at scala.Function5$$anonfun$tupled$1.apply(Function5.scala:35)
        at scala.Function5$$anonfun$tupled$1.apply(Function5.scala:34)
        at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
        at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:42)
        at sbt.std.Transform$$anon$4.work(System.scala:64)
        at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:237)
        at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:237)
        at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:18)
        at sbt.Execute.work(Execute.scala:244)
        at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:237)
        at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:237)
        at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:160)
        at sbt.CompletionService$$anon$2.call(CompletionService.scala:30)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
    [error] (core/*:mimaReportBinaryIssues) spark-core: Binary compatibility check failed!
    [error] Total time: 43 s, completed Jan 12, 2015 4:11:07 PM
    ```
    
    Fixing this should just be a matter of backporting an exclude from another branch, since the method in question was not public (the false positive is due to how MiMa handles public fields in `private[spark]` classes).
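
    For reference, an exclude like the one MiMa suggests is typically registered in `project/MimaExcludes.scala`; the fragment below is an illustrative sketch of what the backported entry would look like, not the exact contents of that file on any branch:

    ```scala
    import com.typesafe.tools.mima.core._

    // Illustrative fragment of project/MimaExcludes.scala: suppress the
    // localAccums result-type change reported above. Accumulators is
    // private[spark], so the change is not a user-facing break.
    object MimaExcludes {
      val excludes: Seq[ProblemFilter] = Seq(
        ProblemFilters.exclude[IncompatibleResultTypeProblem](
          "org.apache.spark.Accumulators.localAccums")
      )
    }
    ```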
    
    I'm going to merge this now, then push a hotfix commit to address the MiMa issue.
