Hello!

There is no Kerberos and no other security in my cluster; it is on an
internal network.

The %hive and %sh interpreters work: I can create and drop tables, run pwd,
etc. So the problem is in the integration with Spark.

In /usr/spark/conf/spark-env.sh on the master node I set/unset, in turn,
MASTER=spark://localhost:7077, MASTER=spark://192.168.58.10:7077, and
MASTER=spark://127.0.0.1:7077 (note: shell assignments take no spaces around
=). On the slaves I set/unset MASTER=spark://192.168.58.10:7077 in different
combinations.
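
For what it's worth, in Spark 1.x standalone mode the master's bind address is
normally controlled by SPARK_MASTER_IP in spark-env.sh, while MASTER is what
clients read. A minimal sketch of what I mean (assuming the master host is
192.168.58.10):

```shell
# /usr/spark/conf/spark-env.sh on the master node (sketch).
# SPARK_MASTER_IP tells the standalone master which address to bind;
# pinning it to the IPv4 address avoids hostname/IPv6 resolution surprises.
SPARK_MASTER_IP=192.168.58.10
SPARK_MASTER_PORT=7077
```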

Zeppelin is installed on the same machine as the Spark master. So, in
zeppelin-env.sh I set/unset, in turn, MASTER=spark://localhost:7077,
MASTER=spark://192.168.58.10:7077, and MASTER=spark://127.0.0.1:7077.
Yes, I can connect to the Spark master UI at 192.168.58.10 and see
URL: spark://192.168.58.10:7077
REST URL: spark://192.168.58.10:6066 (cluster mode)
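
For reference, this is the zeppelin-env.sh fragment I mean (a sketch; the only
line that actually changes between attempts is the MASTER export, and the
install path is my own, so adjust it to yours):

```shell
# conf/zeppelin-env.sh (sketch; paths from my install)
# Point Zeppelin's Spark interpreter at the standalone master.
export MASTER=spark://192.168.58.10:7077
export SPARK_HOME=/usr/spark
export HADOOP_CONF_DIR=/etc/hadoop/conf
```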

Does the TCP socket family matter? On my laptop, in pseudo-distributed mode,
all connections are IPv4 (tcp), and /etc/hosts contains only IPv4 lines.
On the cluster, Spark uses IPv6 (tcp6) for reasons unknown to me, and
/etc/hosts there contains IPv6 lines.
Right now, I am trying to make Spark use IPv4.

I switched Spark to IPv4 via -Djava.net.preferIPv4Stack=true
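
In case it helps, this is where I put the flag (a sketch; I am assuming
SPARK_DAEMON_JAVA_OPTS is the right hook for the standalone master/worker
daemons, and ZEPPELIN_JAVA_OPTS for the Zeppelin side):

```shell
# spark-env.sh: make the standalone daemons prefer IPv4 sockets
export SPARK_DAEMON_JAVA_OPTS="-Djava.net.preferIPv4Stack=true"

# zeppelin-env.sh: pass the same flag to Zeppelin's JVMs
export ZEPPELIN_JAVA_OPTS="-Djava.net.preferIPv4Stack=true"
```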

It seems that Zeppelin uses / answers on the following ports on the master
server (found via ps axu | grep zeppelin, then netstat -natp | grep <PID> for
each PID):
41303
46971
59007
35781
53637
34860
59793
46971
50676
50677

44341
50805
50803
50802

60886
43345
48415
48417
10000
48416
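
The exact pipeline, for anyone who wants to reproduce the list (a sketch;
assumes procps and net-tools as shipped with CentOS 6, and needs root so that
netstat can show process names):

```shell
# Print the unique local addresses/ports owned by Zeppelin-related processes.
# Field 7 of `netstat -natp` is "PID/program", field 4 the local address.
for pid in $(pgrep -f zeppelin); do
  netstat -natp 2>/dev/null | awk -v pid="$pid" '$7 ~ "^"pid"/" {print $4}'
done | sort -u
```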

Best regards

P.S. I put the exact address from the Spark master page,
MASTER=spark://192.168.58.10:7077, into zeppelin-env.sh and into the Spark
interpreter configuration in the web UI.

Earlier, I got a Java error stack trace in the web UI. Now I have started to
receive "Scheduler already terminated".

On Tue, Nov 24, 2015 at 12:56 PM, moon soo Lee <m...@apache.org> wrote:

> Thanks for sharing the problem.
>
> Based on your log file, it looks like somehow your spark master address is
> not well configured.
>
> Can you confirm that you have also set 'master' property in Interpreter
> menu on GUI, at spark section?
>
> If it is not, you can connect Spark Master UI with your web browser and
> see the first line, "Spark Master at spark://....". That value should be in
> 'master' property in Interpreter menu on GUI, at spark section.
>
> Hope this helps
>
> Best,
> moon
>
> On Tue, Nov 24, 2015 at 3:07 AM Timur Shenkao <t...@timshenkao.su> wrote:
>
>> Hi!
>>
>> A new error appears: TTransportException.
>> I use CentOS 6.7 + Spark 1.5.2 Standalone + Cloudera Hadoop 5.4.8 on the
>> same cluster. I can't use Mesos or Spark on YARN.
>> I built Zeppelin 0.6.0 like this:
>> mvn clean package -DskipTests -Pspark-1.5 -Phadoop-2.6 -Pyarn -Ppyspark
>> -Pbuild-distr
>>
>> I constantly get errors like
>> ERROR [2015-11-23 18:14:33,404] ({pool-1-thread-4} Job.java[run]:183) -
>> Job failed
>> org.apache.zeppelin.interpreter.InterpreterException:
>> org.apache.thrift.transport.TTransportException
>>     at
>> org.apache.zeppelin.interpreter.remote.RemoteInterpreter.interpret(RemoteInterpreter.java:237)
>>
>>
>> or
>>
>> ERROR [2015-11-23 18:07:26,535] ({Thread-11}
>> RemoteInterpreterEventPoller.java[run]:72) - Can't get
>> RemoteInterpreterEvent
>> org.apache.thrift.transport.TTransportException
>>
>> I changed several parameters in zeppelin-env.sh and in the Spark configs,
>> but whatever I do, these errors keep coming. At the same time, when I use a
>> local Zeppelin with Hadoop in pseudo-distributed mode + Spark standalone
>> (master and workers on the same machine), everything works.
>>
>> What configuration (memory, network, CPU cores) is required for Zeppelin
>> to work?
>>
>> I run H2O on this cluster, and it works.
>> Spark Master config:
>> SPARK_MASTER_WEBUI_PORT=18080
>> HADOOP_CONF_DIR=/etc/hadoop/conf
>> SPARK_HOME=/usr/spark
>>
>> Spark Worker config:
>>    export HADOOP_CONF_DIR=/etc/hadoop/conf
>>    export MASTER=spark://192.168.58.10:7077
>>    export SPARK_HOME=/usr/spark
>>
>>    SPARK_WORKER_INSTANCES=1
>>    SPARK_WORKER_CORES=4
>>    SPARK_WORKER_MEMORY=32G
>>
>>
>> I attach the Spark configs, plus the Zeppelin configs & logs for local
>> mode, and the Zeppelin configs & logs from when I defined the Spark
>> master's IP address explicitly.
>> Thank you.
>>
>
