[
https://issues.apache.org/jira/browse/MESOS-6914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256218#comment-16256218
]
Damien Gerard commented on MESOS-6914:
--------------------------------------
Logs:
{noformat}
Nov 17 09:30:25 ***** mesos-slave[63839]: I1117 09:30:25.006713 63839 systemd.cpp:326] Started systemd slice `mesos_executors.slice`
Nov 17 09:30:25 ***** mesos-slave[63839]: I1117 09:30:25.007498 63839 resolver.cpp:69] Creating default secret resolver
Nov 17 09:30:25 ***** mesos-slave[63839]: I1117 09:30:25.100229 63839 containerizer.cpp:246] Using isolation: disk/du,docker/runtime,filesystem/linux,namespaces/pid,volume/sandbox_path,network/cni,volume/image,environment_secret
Nov 17 09:30:25 ***** mesos-slave[63839]: I1117 09:30:25.106936 63839 linux_launcher.cpp:150] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher
Nov 17 09:30:25 ***** mesos-slave[63839]: E1117 09:30:25.112092 63839 shell.hpp:107] Command 'hadoop version 2>&1' failed; this is the output:
Nov 17 09:30:25 ***** mesos-slave[63839]: sh: 1: hadoop: not found
Nov 17 09:30:25 ***** mesos-slave[63839]: I1117 09:30:25.112385 63839 fetcher.cpp:69] Skipping URI fetcher plugin 'hadoop' as it could not be created: Failed to create HDFS client: Failed to execute 'hadoop version 2>&1'; the command was either not found or exited with a non-zero exit status: 127
Nov 17 09:30:25 ***** mesos-slave[63839]: I1117 09:30:25.112963 63839 provisioner.cpp:255] Using default backend 'overlay'
{noformat}
So basically, this hadoop/HDFS client check has no place here.
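The failing probe in the logs above can be reproduced outside the agent. The sketch below mimics what shell.hpp / fetcher.cpp report: the agent shells out to `<cmd> version`, and a non-zero exit status means the URI fetcher plugin is skipped (127 is the shell's "command not found" status). The `probe` function name and its output text are illustrative, not Mesos code:

```shell
# Sketch of the agent's HDFS-client probe: run `<cmd> version`,
# capture the exit status, and treat non-zero as "plugin unavailable".
probe() {
  sh -c "$1 version 2>&1" >/dev/null
  status=$?
  if [ "$status" -ne 0 ]; then
    echo "Skipping URI fetcher plugin '$1': exit status $status"
  else
    echo "Plugin '$1' available"
  fi
}

probe hadoop   # on a host without a hadoop binary, reports exit status 127
```

This matches the log: the agent logs the failure at error level but still starts, only disabling the hdfs:// fetcher.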
> Command 'hadoop version 2>&1' failed
> ------------------------------------
>
> Key: MESOS-6914
> URL: https://issues.apache.org/jira/browse/MESOS-6914
> Project: Mesos
> Issue Type: Bug
> Reporter: yangjunfeng
>
> I am new to Spark on Mesos.
> When I run spark-shell on Mesos, I get the error below:
> Command 'hadoop version 2>&1' failed; this is the output:
> sh: hadoop: command not found
> Failed to fetch
> 'hdfs://188.188.0.189:9000/usr/yjf/spark-2.1.0-bin-hadoop2.7.tgz': Failed to
> create HDFS client: Failed to execute 'hadoop version 2>&1'; the command was
> either not found or exited with a non-zero exit status: 127
> Failed to synchronize with agent (it's probably exited)
> How can I fix this problem?
> Thanks a lot!
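For the reporter's case, two common workarounds (not taken from this thread, so treat them as suggestions): make a Hadoop client visible on the mesos-agent's PATH so the hdfs:// fetcher can be created, or serve the Spark tarball over HTTP so no HDFS client is needed. The install path and URL below are placeholders:

```shell
# Option 1: put the hadoop CLI on the PATH of the mesos-agent process
# (the install path is an assumption; adjust to your Hadoop layout).
export PATH="/opt/hadoop/bin:$PATH"
hadoop version   # should now succeed instead of exiting with status 127

# Option 2: avoid HDFS entirely by fetching Spark over HTTP,
# e.g. in spark-defaults.conf (URL is a placeholder):
#   spark.executor.uri  http://<your-webserver>/spark-2.1.0-bin-hadoop2.7.tgz
```

Note that the PATH change must apply to the mesos-agent service itself (e.g. its systemd unit environment), not just an interactive shell.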
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)