It works, thanks very much
Zhanfeng Huo
From: Yanbo Liang
Date: 2014-10-28 18:50
To: Zhanfeng Huo
CC: user
Subject: Re: SparkSql OutOfMemoryError
Try to increase the driver memory.
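For readers of the archive: in Spark 1.x the driver heap is usually raised either with the --driver-memory flag to spark-submit or with spark.driver.memory in conf/spark-defaults.conf. A sketch, with an illustrative 4g value (my.App and my-app.jar are placeholder names):

```
# On the command line:
spark-submit --driver-memory 4g --class my.App my-app.jar

# Or in conf/spark-defaults.conf:
spark.driver.memory 4g
```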
2014-10-28 17:33 GMT+08:00 Zhanfeng Huo :
Hi, friends:
I use Spark SQL (Spark 1.1) to operate on data in Hive 0.12 and hit the following error:
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
14/10/28 14:42:55 ERROR ActorSystemImpl: Uncaught fatal error from thread
[sparkDriver-akka.actor.default-dispatcher-36] shutting down ActorSystem
[sparkDriver]
java.lang.OutOfMemoryError: Java heap space
Zhanfeng Huo
Thank you very much.
It is helpful for end users.
Zhanfeng Huo
From: Patrick Wendell
Date: 2014-09-15 10:19
To: Zhanfeng Huo
CC: user
Subject: Re: spark-1.1.0 with make-distribution.sh problem
Yeah that issue has been fixed by adding better docs, it just didn't make it in
time for the release.
Resolved with:
./make-distribution.sh --name spark-hadoop-2.3.0 --tgz --with-tachyon -Pyarn
-Phadoop-2.3 -Dhadoop.version=2.3.0 -Phive -DskipTests
This code is a bit misleading
Zhanfeng Huo
From: Zhanfeng Huo
Date: 2014-09-12 14:13
To: user
Subject: spark-1.1.0 with make-distribution.sh
++ mvn help:evaluate -Dexpression=hadoop.version -Pyarn -Phive --skip-java-test
--with-tachyon --tgz -Pyarn.version=2.3.0 -Phadoop.version=2.3.0
++ grep -v INFO
++ tail -n 1
+ SPARK_HADOOP_VERSION=' -X,--debug Produce execution debug output'
Best Regards
Zhanfeng Huo
Thanks for your help.
It works after setting SPARK_HISTORY_OPTS.
Zhanfeng Huo
From: Andrew Or
Date: 2014-09-04 07:52
To: Marcelo Vanzin
CC: Zhanfeng Huo; user
Subject: Re: How can I start history-server with kerberos HDFS ?
Hi Zhanfeng,
You will need to set these through SPARK_HISTORY_OPTS
[Caused by GSSException: No valid credentials provided (Mechanism level: Failed
to find any Kerberos tgt)]; Host Details :
#history-server
spark.history.kerberos.enabled true
spark.history.kerberos.principal test/spark@test
spark.history.kerberos.keytab /home/test/test_spark.keytab
spark.eventLog.enabled true
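For later readers: as noted above, the history server in this version picks these settings up from SPARK_HISTORY_OPTS rather than from spark-defaults.conf. A sketch using the values from this thread, in conf/spark-env.sh:

```
# conf/spark-env.sh
export SPARK_HISTORY_OPTS="-Dspark.history.kerberos.enabled=true \
  -Dspark.history.kerberos.principal=test/spark@test \
  -Dspark.history.kerberos.keytab=/home/test/test_spark.keytab"
```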
Zhanfeng Huo
Thank you.
Zhanfeng Huo
From: Andrew Or
Date: 2014-09-02 08:21
To: Zhanfeng Huo
CC: user
Subject: Re: Can value in spark-defaults.conf support system variables?
No, not currently.
2014-09-01 2:53 GMT-07:00 Zhanfeng Huo :
Hi, all:
Can a value in spark-defaults.conf support system variables,
such as "mess = ${user.home}/${user.name}"?
Best Regards
Zhanfeng Huo
for ((key, value) <- sysProps) {
  System.setProperty(key, value)
}
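Since spark-defaults.conf performs no such substitution, one workaround is to expand the placeholders in application code before building the SparkConf. A minimal sketch in plain Scala (the PlaceholderExpand object and its expand helper are hypothetical, not Spark API):

```scala
import scala.util.matching.Regex

// Hypothetical helper: expand ${some.property} placeholders in a config
// value using JVM system properties, leaving unknown keys untouched.
object PlaceholderExpand {
  private val Placeholder: Regex = """\$\{([^}]+)\}""".r

  def expand(value: String): String =
    Placeholder.replaceAllIn(value, m =>
      // quoteReplacement so "$" or "\" in a property value is taken literally
      Regex.quoteReplacement(sys.props.getOrElse(m.group(1), m.matched)))
}
```

The expanded string can then be passed to SparkConf.set before submitting the job.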
Best Regards
Zhanfeng Huo
From: Akhil Das
Date: 2014-08-21 14:36
To: Darin McBeath
CC: Spark User Group
Subject: Re: How to pass env variables from master to executors within
spark-shell
One approach would be to set these environment variables.
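For context, since the reply above is cut off in the archive: one documented route in Spark 1.x is the spark.executorEnv.* prefix, which forwards a variable to every executor. A sketch with an illustrative variable name and value:

```
# conf/spark-defaults.conf (or SparkConf.setExecutorEnv(...) in code)
spark.executorEnv.MY_VAR my-value
```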
That helps a lot.
Thanks.
Zhanfeng Huo
From: Davies Liu
Date: 2014-08-18 14:31
To: ryaminal
CC: u...@spark.incubator.apache.org
Subject: Re: application as a service
Another option is using Tachyon to cache the RDD, then the cache can
be shared by different applications. See how to use
Thank you Eugen Cepoi, I will try it now.
Zhanfeng Huo
From: Eugen Cepoi
Date: 2014-08-17 23:34
To: Zhanfeng Huo
CC: user
Subject: Re: application as a service
Hi,
You can achieve it by running a spray service for example that has access to
the RDD in question. When starting the app you
the app can access its RDD.
How can I achieve this requirement?
Thanks.
Zhanfeng Huo
Thank you, Ron. That helps a lot.
I want to debug Spark code to trace state transformations, so I use sbt as my
build tool and compile the Spark code in IntelliJ IDEA.
Zhanfeng Huo
From: Ron's Yahoo!
Date: 2014-08-12 03:46
To: Zhanfeng Huo
CC: user
Subject: Re: Compile spark code with
doesn't
take effect; can you help me?
Zhanfeng Huo