No, of course not, but I was guessing the project imports some native libs (to communicate with Mesos) that... could miserably crash the JVM.
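Something like this is what I have in mind -- a minimal sketch, assuming the stock Mesos Java bindings (mesos.jar) plus a native libmesos.so on the box; the class and env var come from the Mesos bindings, the rest is illustrative:

    // Sketch only: the Mesos Java API is a thin JNI wrapper around libmesos.so,
    // so a mismatch between mesos.jar and the native library can take down the
    // whole JVM instead of throwing a Java exception.
    import org.apache.mesos.MesosNativeLibrary

    object NativeLoadCheck {
      def main(args: Array[String]): Unit = {
        // Loads libmesos.so, honoring MESOS_NATIVE_LIBRARY / java.library.path.
        // A bad jar/native pairing can core dump right here or on a later call.
        MesosNativeLibrary.load()
        println("libmesos loaded without killing the JVM")
      }
    }

If that load (or any later JNI call) hits an ABI mismatch, you can get a core dump like the one in Steven's gist rather than a clean stack trace.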
Anyway, so you're telling us that with this Oracle JDK version you don't have any issues running Spark on Mesos 0.18.0. That's interesting, because AFAIR my last test (done at night, which means hazy and unreliable memory) used that particular version as well.

Just to make things clear, Sean: you're using Spark 0.9.1 on Mesos 0.18.0 with Hadoop 2.x (x >= 2), with no modification other than specifying which Hadoop version to build against when running make-distribution.sh?

Thanks for your help,

Andy

On Thu, Apr 17, 2014 at 9:11 PM, Sean Owen <so...@cloudera.com> wrote:

> I don't know if it's anything you or the project is missing... that's
> just a JDK bug.
> FWIW I am on 1.7.0_51 and have not seen anything like that.
>
> I don't think it's a protobuf issue -- you don't crash the JVM with
> simple version incompatibilities :)
> --
> Sean Owen | Director, Data Science | London
>
>
> On Thu, Apr 17, 2014 at 7:29 PM, Steven Cox <s...@renci.org> wrote:
> > So I tried a fix found on the list...
> >
> > "The issue was due to a mesos version mismatch, as I am using the latest
> > mesos 0.17.0, but spark uses 0.13.0. Fixed by updating SparkBuild.scala
> > to the latest version."
> >
> > I changed this line in SparkBuild.scala
> >
> >     "org.apache.mesos" % "mesos" % "0.13.0",
> >
> > to
> >
> >     "org.apache.mesos" % "mesos" % "0.18.0",
> >
> > ...ran make-distribution.sh, then repackaged and redeployed the tar.gz
> > to HDFS.
> >
> > It still core dumps like this:
> > https://gist.github.com/stevencox/11002498
> >
> > In this environment:
> > Ubuntu 13.10
> > Mesos 0.18.0
> > Spark 0.9.1
> > JDK 1.7.0_45
> > Scala 2.10.1
> >
> > What am I missing?
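PS for anyone digging this out of the archives later: the change Steven quotes lives in the libraryDependencies list of project/SparkBuild.scala. A rough sketch of the relevant block -- only the mesos line is the real change, the neighboring entry and its version are placeholders from memory:

    // project/SparkBuild.scala (sketch; exact surrounding entries vary by release)
    libraryDependencies ++= Seq(
      // Bump the Java bindings to match the cluster's Mesos version...
      "org.apache.mesos"    % "mesos"         % "0.18.0",  // was "0.13.0"
      // ...but the native libmesos.so installed on the slaves, and its protobuf,
      // must agree with what mesos.jar was built against (placeholder version).
      "com.google.protobuf" % "protobuf-java" % "2.5.0"
    )

The jar version only matters insofar as it matches the libmesos.so actually installed on the slaves; bumping one side without the other leaves the JNI boundary mismatched.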