Oh, interesting. I've never heard of that sort of architecture. I'm not
sure exactly how the JNI bindings do the native library discovery, but
I know the MESOS_NATIVE_JAVA_LIBRARY env var has always been the documented
discovery method, so I'd definitely provide it if I were you.
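For anyone following along, setting that variable is just an environment
export before launching the driver. A minimal sketch (the library path and
the driver jar/class are hypothetical; adjust to your install):

```shell
# Point the JNI bindings at the Mesos native library explicitly.
# /usr/local/lib/libmesos.so is a hypothetical path; use wherever
# your Mesos install actually put the library.
export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so

# Then launch the driver as usual, e.g. (hypothetical jar and class):
# java -cp my-driver-assembly.jar com.example.MyMesosFramework
echo "MESOS_NATIVE_JAVA_LIBRARY=$MESOS_NATIVE_JAVA_LIBRARY"
```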
Nope, there is no "distribution" and no spark-submit at the start of my
process. But I found the problem: the behavior when loading the Mesos native
dependency changed, and the static initialization block inside
org.apache.mesos.MesosSchedulerDriver needed a specific reference to
libmesos-1.0.0.so.
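A common workaround when a loader insists on a versioned filename like
libmesos-1.0.0.so is to give it that exact name via a symlink. This is a
sketch, not from the thread; it runs in a throwaway directory standing in
for the real library dir (e.g. /usr/local/lib), and the touch merely fakes
the installed library:

```shell
# Throwaway directory as a stand-in for the real library location.
libdir=$(mktemp -d)

# Fake the installed, unversioned library file for the demo.
touch "$libdir/libmesos.so"

# Create the versioned name the loader asks for as a symlink to it.
ln -s "$libdir/libmesos.so" "$libdir/libmesos-1.0.0.so"
ls -l "$libdir/libmesos-1.0.0.so"
```

Alternatively, pointing MESOS_NATIVE_JAVA_LIBRARY directly at the versioned
file avoids touching the filesystem layout at all.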
What do you mean your driver has all the dependencies packaged? What are
"all the dependencies"? Is the distribution you use to launch your driver
built with -Pmesos?
On Tue, Jan 10, 2017 at 12:18 PM, Olivier Girardot <
o.girar...@lateral-thoughts.com> wrote:
Hi Michael,
I did so, but it's not exactly the problem. You see, my driver has all the
dependencies packaged, and only the executors fetch the tgz via
spark.executor.uri. The strange thing is that I see in my classpath the
org.apache.mesos:mesos-1.0.0-shaded-protobuf dependency packaged in the
Just build with -Pmesos
http://spark.apache.org/docs/latest/building-spark.html#building-with-mesos-support
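From that page, the mesos profile has to be active at build time or the
Mesos scheduler classes are simply left out of the assembly. A sketch of the
invocation, run from the Spark source root (check the linked docs for the
exact flags for your version):

```shell
# Build Spark with the Mesos scheduler backend included; without -Pmesos
# the org.apache.spark.scheduler.cluster.mesos classes are omitted.
SPARK_BUILD='./build/mvn -Pmesos -DskipTests clean package'
echo "$SPARK_BUILD"

# For a runnable distribution tarball the docs use make-distribution, e.g.:
# ./dev/make-distribution.sh --tgz -Pmesos
```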
On Tue, Jan 10, 2017 at 8:56 AM, Olivier Girardot <
o.girar...@lateral-thoughts.com> wrote:
I had the same problem; I added spark-mesos as a dependency and now I get:
[2017-01-10 17:45:16,575] {bash_operator.py:77} INFO - Exception in thread
"main" java.lang.NoClassDefFoundError: Could not initialize class
org.apache.mesos.MesosSchedulerDriver
[2017-01-10 17:45:16,576] {bash_operator.py:77}
Glad that you found it.
On Mon, Jan 9, 2017 at 3:29 PM, Richard Siebeling
wrote:
Probably found it: it turns out that Mesos should be explicitly added while
building Spark. I assumed I could use the old build command that I used for
building Spark 2.0.0... I didn't see the two lines added in the
documentation...
Maybe these kinds of changes could be added in the changelog under
Hi,
I'm setting up Apache Spark 2.1.0 on Mesos and I am getting a "Could not
parse Master URL: 'mesos://xx.xx.xxx.xxx:5050'" error.
Mesos is running fine (both the master and the slave; it's a single-machine
configuration).
I really don't understand why this is happening since the same