[ https://issues.apache.org/jira/browse/MESOS-1069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13922981#comment-13922981 ]

Bernardo Gomez Palacio commented on MESOS-1069:
-----------------------------------------------

[~vinodkone] you need to compile both the .so and the .jar to make it work 
with Spark. I think asking users to migrate to 0.17.0 is a fair solution. 
In my case it was not that simple, and I had to stick with the previous 
version due to the way we handle things internally.

If you want a deeper answer, the truth is that Spark is incapable of 
isolating the protobuf version it uses to communicate with Mesos from the 
one it uses for HDFS. If it could, it would have worked fine with Mesos 
0.16.0 as is, but the Spark build clobbers older protobuf versions and 
sticks with just one. This behavior, the inability to run two versions of 
the same library side by side, is common in applications that run on the JVM.
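
To make the clobbering concrete, here is a minimal sketch that prints which 
jar actually supplied protobuf at runtime. The class name 
com.google.protobuf.Message is real; ProtobufVersionCheck is just a 
hypothetical diagnostic helper, not part of Mesos or Spark:

    // ProtobufVersionCheck.java -- hypothetical diagnostic helper.
    public class ProtobufVersionCheck {
        public static void main(String[] args) throws Exception {
            // A JVM classloader resolves each fully-qualified class name
            // exactly once, so even with protobuf 2.4.x and 2.5.0 both on
            // the classpath, only one jar ever supplies this class.
            Class<?> clazz = Class.forName("com.google.protobuf.Message");
            System.out.println("protobuf loaded from: "
                    + clazz.getProtectionDomain().getCodeSource().getLocation());
        }
    }

Run it with both protobuf jars on the classpath, e.g. 
java -cp protobuf-java-2.4.1.jar:protobuf-java-2.5.0.jar:. ProtobufVersionCheck, 
and it prints a single location, whichever jar comes first on the classpath, 
which is exactly why a Spark build ends up pinned to one protobuf version.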

> Branch for 0.16.0 with Protobufs 2.5.0
> --------------------------------------
>
>                 Key: MESOS-1069
>                 URL: https://issues.apache.org/jira/browse/MESOS-1069
>             Project: Mesos
>          Issue Type: Improvement
>    Affects Versions: 0.16.0
>            Reporter: Bernardo Gomez Palacio
>            Assignee: Bernardo Gomez Palacio
>
> If you deploy Mesos 0.16.0 on a Hadoop 2.x cluster with Spark 0.9.0+, you 
> will start getting stack dumps similar to 
> [dump.log|https://gist.github.com/berngp/c16c56516cb40d9a78fe].
> Without going into much detail, the hadoop-client.jar for 2.x now requires
> Protobuf 2.5.0, and therefore Spark requires the same protobuf version. Due
> to this you will need to run Spark on Mesos with support for Protobuf 2.5.0.
> If you want to use Mesos 0.16.0, which is the latest stable release, you will
> need support for Protobuf 2.5.0 on Mesos 0.16.0.



