Hi All,
This is a bit late, but I found it helpful. Piggy-backing on Wang Hao's
comment, Spark will ignore the "spark.executor.memory" setting if you add
it to SparkConf via:
conf.set("spark.executor.memory", "1g")
What you actually should do depends on how you run Spark. I found some
"offic
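The message above is cut off, but for reference, the usual places to set executor memory outside of SparkConf are spark-env.sh or the submit script. A minimal sketch (the flag assumes a Spark version that ships spark-submit; the jar path and memory sizes are examples, not settings from this thread):

```shell
# In conf/spark-env.sh (applies to every application on this worker):
export SPARK_WORKER_MEMORY=4g

# Or per application, on the command line:
spark-submit --executor-memory 1g \
  --class SimpleApp \
  target/scala-2.10/simple-project_2.10-1.0.jar
```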
Hi Laurent,
You can set spark.executor.memory and the heap size with the following methods:
1. in your conf/spark-env.sh:
export SPARK_WORKER_MEMORY=38g
export SPARK_JAVA_OPTS="-XX:-UseGCOverheadLimit -XX:+UseConcMarkSweepGC -Xmx2g -XX:MaxPermSize=256m"
2. you could also add modification for
Hi,
Can you give us a little more insight on how you used that file to solve
your problem?
We're having the same OOM as you were and haven't been able to solve it yet.
Thanks
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/how-to-set-spark-executor-memory
Hi,
finally, i solved this problem by using the SPARK_HOME/bin/run-example
script to run my application, and it works. i guess the error was due to
some missing classpath entries
Hi
I am also curious about this question.
Isn't the textFile function supposed to read an HDFS file? In this case,
the file it reads in is on the local filesystem. Is there any way for the
textFile function to tell the local filesystem and HDFS apart?
Beside, the OOM exe
On Fri, Apr 25, 2014 at 2:20 AM, wxhsdp wrote:
> 14/04/25 08:38:36 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> 14/04/25 08:38:36 WARN snappy.LoadSnappy: Snappy native library not loaded
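On the local-vs-HDFS question above: textFile dispatches on the URI scheme of the path, so you can make the choice explicit. A minimal sketch (the paths, host name, and port here are placeholders, not taken from the thread):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SchemeExample {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Scheme Example").setMaster("local")
    val sc = new SparkContext(conf)

    // Explicit local-filesystem scheme: three slashes, i.e. "file://" + "/absolute/path"
    val localLines = sc.textFile("file:///tmp/README.md")

    // Explicit HDFS scheme; the namenode host and port are hypothetical
    val hdfsLines = sc.textFile("hdfs://namenode:9000/user/me/README.md")

    // A bare path is resolved against fs.defaultFS from the Hadoop configuration
    val defaultLines = sc.textFile("/tmp/README.md")

    sc.stop()
  }
}
```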
Since this comes up r
i noticed that the error occurs at:
at org.apache.hadoop.io.WritableUtils.readCompressedStringArray(WritableUtils.java:183)
at org.apache.hadoop.conf.Configuration.readFields(Configuration.java:2378)
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:28
does anyone know the reason? i've googled a bit and found some guys had the
same problem, but with no replies...
it seems that it's nothing about the settings. i tried the take action and
found it's ok, but the error occurs when i try count and collect:
val a = sc.textFile("any file")
a.take(n).foreach(println) // ok
a.count() // failed
a.collect() // failed
val b = sc.parallelize(Array(1,2,3,4))
b.take(n).foreach(pri
Ok fine,
try it like this, i tried it and it works..
specify the spark path in the constructor as well...
and also:
export SPARK_JAVA_OPTS="-Xms300m -Xmx512m -XX:MaxPermSize=1g"
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
object SimpleApp {
def main(args: Array[Stri
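The SimpleApp snippet above is cut off; here is a complete minimal sketch of what such a standalone app could look like, combining the setJars and spark.executor.memory suggestions from elsewhere in this thread (the master URL, jar path, and file path are taken from the thread as examples, not confirmed working settings):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._

object SimpleApp {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setMaster("spark://127.0.0.1:7077")
      .setAppName("Simple App")
      .set("spark.executor.memory", "1g")
      // Ship the application jar to the executors so tasks can find your classes
      .setJars(Seq("target/scala-2.10/simple-project_2.10-1.0.jar"))
    val sc = new SparkContext(conf)

    val logFile = "file:///home/wxhsdp/spark/example/standalone/README.md"
    val logData = sc.textFile(logFile).cache()
    println("line count: " + logData.count())

    sc.stop()
  }
}
```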
hi arpit,
in the spark shell, i can read the local file properly,
but when i use sbt run, the error occurs.
the sbt error message is at the beginning of the thread
Arpit Tak-2 wrote
> Hi,
>
> You should be able to read it, file:// or file:/// is not even required
> for reading locally, just the path is enough..
>
Hi,
You should be able to read it, file:// or file:/// is not even required for
reading locally, just the path is enough..
what error message are you getting on spark-shell while reading...
for local:
Also read the same file from hdfs...
put your README file there and read it, it works both ways..
thanks for your reply, adnan. i tried
val logFile = "file:///home/wxhsdp/spark/example/standalone/README.md"
i think there need to be three slashes after file:
it's just the same as val logFile =
"/home/wxhsdp/spark/example/standalone/README.md"
the error remains :(
Sorry, wrong format:
file:///home/wxhsdp/spark/example/standalone/README.md
An extra / is needed after file://.
On Thu, Apr 24, 2014 at 1:46 PM, Adnan Yaqoob wrote:
> You need to use proper url format:
>
> file://home/wxhsdp/spark/example/standalone/README.md
>
>
> On Thu, Apr 24, 2014 at 1:29
You need to use proper url format:
file://home/wxhsdp/spark/example/standalone/README.md
On Thu, Apr 24, 2014 at 1:29 PM, wxhsdp wrote:
> i think maybe it's the problem of read local file
>
> val logFile = "/home/wxhsdp/spark/example/standalone/README.md"
> val logData = sc.textFile(logFile).c
i think maybe it's a problem with reading the local file
val logFile = "/home/wxhsdp/spark/example/standalone/README.md"
val logData = sc.textFile(logFile).cache()
if i replace the above code with
val logData = sc.parallelize(Array(1,2,3,4)).cache()
the job completes successfully
can't i read a
i tried, but no effect
Qin Wei wrote
> try the complete path
>
> qinwei
>
> From: wxhsdp
> Date: 2014-04-24 14:21
> To: user
> Subject: Re: how to set spark.executor.memory and heap size
>
> thank you, i add setJars, but nothing changes
>
> val conf = new SparkConf()
> .setMaster("spark://12
try the complete path

qinwei

From: wxhsdp
Date: 2014-04-24 14:21
To: user
Subject: Re: how to set spark.executor.memory and heap size

thank you, i add setJars, but nothing changes

val conf = new SparkConf()
.setMaster("spark://127.0.0.1:7077")
.setAppName("Simple App")
thank you, i added setJars, but nothing changes

val conf = new SparkConf()
  .setMaster("spark://127.0.0.1:7077")
  .setAppName("Simple App")
  .set("spark.executor.memory", "1g")
  .setJars(Seq("target/scala-2.10/simple-project_2.10-1.0.jar"))
val sc = new SparkContext(conf)
When I was testing Spark, I faced this issue. It is not related to memory
shortage; it is because your configuration is not correct. Try to pass
your current jar to the SparkContext with SparkConf's setJars function
and try again.
On Thu, Apr 24, 2014 at 8:38 AM, wxhsdp wrote:
> by t
by the way, the code runs ok in the spark shell
hi
i'm testing SimpleApp.scala in standalone mode with only one pc, so i have
one master and one local worker on the same pc.
with a rather small input file size (4.5K), i got the
java.lang.OutOfMemoryError: Java heap space error
here are my settings:
spark-env.sh:
export SPARK_MASTER_IP="127.0.0