Re: how to set spark.executor.memory and heap size

2014-07-07 Thread Alex Gaudio
Hi All,


This is a bit late, but I found it helpful. Piggy-backing on Wang Hao's
comment: Spark will ignore the "spark.executor.memory" setting if you add
it to SparkConf via:

conf.set("spark.executor.memory", "1g")


What you should actually do depends on how you run Spark. I found some
"official" documentation for this in a bug report here:

https://issues.apache.org/jira/browse/SPARK-1264
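
One quick way to check which value actually took effect (a sketch, given an
existing SparkContext sc; sc.getConf is available from Spark 1.0 on, and the
Environment tab of the web UI shows the same information):

// Prints the configured value as seen by the driver. Note this reports
// only the configuration, not the heap the executor JVMs were actually
// launched with.
println(sc.getConf.getOption("spark.executor.memory").getOrElse("<unset>"))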



Alex








Re: how to set spark.executor.memory and heap size

2014-06-13 Thread Hao Wang
Hi, Laurent

You can set spark.executor.memory and the heap size using the following methods:

1. In your conf/spark-env.sh:
export SPARK_WORKER_MEMORY=38g
export SPARK_JAVA_OPTS="-XX:-UseGCOverheadLimit -XX:+UseConcMarkSweepGC -Xmx2g -XX:MaxPermSize=256m"

2. You can also set the executor memory and Java opts via spark-submit
parameters, as sketched below.

Check the Spark configuration and tuning docs; you will find full answers
there.
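
A minimal sketch of method 2 (assuming a Spark 1.0-era spark-submit; the
class name and jar path are the placeholders used elsewhere in this thread):

spark-submit --master spark://127.0.0.1:7077 \
  --executor-memory 2g \
  --driver-java-options "-XX:+UseConcMarkSweepGC -XX:MaxPermSize=256m" \
  --class SimpleApp \
  target/scala-2.10/simple-project_2.10-1.0.jar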


Regards,
Wang Hao(王灏)

CloudTeam | School of Software Engineering
Shanghai Jiao Tong University
Address:800 Dongchuan Road, Minhang District, Shanghai, 200240
Email:wh.s...@gmail.com




Re: how to set spark.executor.memory and heap size

2014-06-12 Thread Laurent T
Hi,

Can you give us a little more insight into how you used that file to solve
your problem? We're hitting the same OOM as you were and haven't been able
to solve it yet.

Thanks



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/how-to-set-spark-executor-memory-and-heap-size-tp4719p7469.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.


Re: how to set spark.executor.memory and heap size

2014-04-26 Thread wxhsdp
Hi, 
  finally, I solved this problem by using the SPARK_HOME/bin/run-example
script to run my application, and it works. I guess the error was due to
something missing from the classpath.
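
One way to see exactly what the script puts on the classpath (a sketch; the
launch scripts of the 0.9/1.0 era honor SPARK_PRINT_LAUNCH_COMMAND, though
the exact example class and master arguments vary by version):

SPARK_PRINT_LAUNCH_COMMAND=1 ./bin/run-example org.apache.spark.examples.SparkPi local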



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/how-to-set-spark-executor-memory-and-heap-size-tp4719p4872.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.


Re: how to set spark.executor.memory and heap size

2014-04-24 Thread YouPeng Yang
Hi,
   I am also curious about this question. Isn't the textFile function
supposed to read an HDFS file? In this case the file was read from the
local filesystem. Is there any way for the textFile function to
distinguish the local filesystem from HDFS?

   Besides, the OOM exception is really strange. Keeping an eye on this.
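
For what it's worth, the URI scheme is what selects the filesystem (a
sketch; when no scheme is given, the path is resolved against the default
filesystem from the Hadoop configuration, and the local path below is a
placeholder):

val localFile = sc.textFile("file:///home/user/README.md")        // local fs, explicit
val hdfsFile  = sc.textFile("hdfs://localhost:54310/t/README.md") // HDFS, explicit
val defaultFs = sc.textFile("/t/README.md")                       // no scheme: default fs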




Re: how to set spark.executor.memory and heap size

2014-04-24 Thread Sean Owen
On Fri, Apr 25, 2014 at 2:20 AM, wxhsdp  wrote:
> 14/04/25 08:38:36 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> 14/04/25 08:38:36 WARN snappy.LoadSnappy: Snappy native library not loaded

Since this comes up regularly -- these warnings from Hadoop are
entirely safe to ignore for development and testing.


Re: how to set spark.executor.memory and heap size

2014-04-24 Thread wxhsdp
I noticed that the error occurs at:

at org.apache.hadoop.io.WritableUtils.readCompressedStringArray(WritableUtils.java:183)
at org.apache.hadoop.conf.Configuration.readFields(Configuration.java:2378)
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:285)
at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:77)
at org.apache.spark.SerializableWritable.readObject(SerializableWritable.scala:39)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

Is it related to the warnings below?

14/04/25 08:38:36 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
14/04/25 08:38:36 WARN snappy.LoadSnappy: Snappy native library not loaded



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/how-to-set-spark-executor-memory-and-heap-size-tp4719p4798.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.


Re: how to set spark.executor.memory and heap size

2014-04-24 Thread wxhsdp
Does anyone know the reason? I've googled a bit and found some people who
had the same problem, but with no replies...



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/how-to-set-spark-executor-memory-and-heap-size-tp4719p4796.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.


Re: how to set spark.executor.memory and heap size

2014-04-24 Thread wxhsdp
It seems it's not about the settings. I tried the take action and it's
OK, but the error occurs when I try count and collect:

val a = sc.textFile("any file")
a.take(n).foreach(println) // ok

a.count()   // failed
a.collect() // failed

val b = sc.parallelize(Array(1,2,3,4))

b.take(n).foreach(println) // ok

b.count()   // ok
b.collect() // ok

It's so weird.







--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/how-to-set-spark-executor-memory-and-heap-size-tp4719p4752.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.


Re: how to set spark.executor.memory and heap size

2014-04-24 Thread Arpit Tak
OK, fine.

Try it like this; I tried it and it works.
Specify the Spark path in the constructor as well, and also:
export SPARK_JAVA_OPTS="-Xms300m -Xmx512m -XX:MaxPermSize=1g"

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "/var/log/auth.log" // read any file
    val sc = new SparkContext("spark://localhost:7077", "Simple App",
      "/home/ubuntu/spark-0.9.1-incubating/",
      List("target/scala-2.10/simple-project_2.10-2.0.jar"))
    val tr = sc.textFile(logFile).cache()
    tr.take(100).foreach(println)
  }
}

This will work




Re: how to set spark.executor.memory and heap size

2014-04-24 Thread wxhsdp
Hi Arpit,
in the Spark shell I can read the local file properly,
but when I use sbt run, the error occurs.
The sbt error message is at the beginning of the thread.







--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/how-to-set-spark-executor-memory-and-heap-size-tp4719p4745.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.


Re: how to set spark.executor.memory and heap size

2014-04-24 Thread Arpit Tak
Hi,

You should be able to read it; file:// or file:/// is not even required
for reading locally, just the path is enough. What error message are you
getting in spark-shell while reading locally?

Also try reading the same file from HDFS: put your README file there and
read it; it works both ways:
val a = sc.textFile("hdfs://localhost:54310/t/README.md")

Also, print the stack trace from your spark-shell.




Re: how to set spark.executor.memory and heap size

2014-04-24 Thread wxhsdp
Thanks for your reply, Adnan. I tried
val logFile = "file:///home/wxhsdp/spark/example/standalone/README.md"
(I think there need to be three slashes after "file:").

It behaves just the same as
val logFile = "/home/wxhsdp/spark/example/standalone/README.md"
and the error remains :(



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/how-to-set-spark-executor-memory-and-heap-size-tp4719p4743.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.


Re: how to set spark.executor.memory and heap size

2014-04-24 Thread Adnan Yaqoob
Sorry, wrong format:

file:///home/wxhsdp/spark/example/standalone/README.md

An extra / is needed at the start of the path.




Re: how to set spark.executor.memory and heap size

2014-04-24 Thread Adnan Yaqoob
You need to use the proper URL format:

file://home/wxhsdp/spark/example/standalone/README.md




Re: how to set spark.executor.memory and heap size

2014-04-24 Thread wxhsdp
I think maybe it's a problem with reading a local file:

val logFile = "/home/wxhsdp/spark/example/standalone/README.md"
val logData = sc.textFile(logFile).cache()

If I replace the above code with

val logData = sc.parallelize(Array(1,2,3,4)).cache()

the job completes successfully.

Can't I read a file located on the local filesystem? Does anyone know the reason?



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/how-to-set-spark-executor-memory-and-heap-size-tp4719p4740.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.


Re: Re: how to set spark.executor.memory and heap size

2014-04-24 Thread wxhsdp
I tried, but it had no effect.







--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/how-to-set-spark-executor-memory-and-heap-size-tp4719p4736.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.


Re: Re: how to set spark.executor.memory and heap size

2014-04-23 Thread qinwei

Try the complete path.


qinwei



Re: how to set spark.executor.memory and heap size

2014-04-23 Thread wxhsdp
Thank you. I added setJars, but nothing changes:

val conf = new SparkConf()
  .setMaster("spark://127.0.0.1:7077")
  .setAppName("Simple App")
  .set("spark.executor.memory", "1g")
  .setJars(Seq("target/scala-2.10/simple-project_2.10-1.0.jar"))
val sc = new SparkContext(conf)



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/how-to-set-spark-executor-memory-and-heap-size-tp4719p4732.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.


Re: how to set spark.executor.memory and heap size

2014-04-23 Thread Adnan Yaqoob
When I was testing Spark I faced this issue. It is not related to a
memory shortage; it is because your configuration is not correct. Try
passing your current jar to the SparkContext with SparkConf's setJars
function and try again.



Re: how to set spark.executor.memory and heap size

2014-04-23 Thread wxhsdp
By the way, the code runs OK in the Spark shell.



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/how-to-set-spark-executor-memory-and-heap-size-tp4719p4720.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.


how to set spark.executor.memory and heap size

2014-04-23 Thread wxhsdp
hi
I'm testing SimpleApp.scala in standalone mode with only one PC, so I have
one master and one local worker on the same machine.

With a rather small input file (4.5K), I get a
java.lang.OutOfMemoryError: Java heap space error.

here's my settings:
spark-env.sh:
export SPARK_MASTER_IP="127.0.0.1"
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=2g
export SPARK_JAVA_OPTS+=" -Xms512m -Xmx512m " //(1)

SimpleApp.scala:
val conf = new SparkConf()
  .setMaster("spark://127.0.0.1:7077")
  .setAppName("Simple App")
  .set("spark.executor.memory", "1g")  //(2)
val sc = new SparkContext(conf)

sbt:
SBT_OPTS="-Xms512M -Xmx512M" //(3)
java $SBT_OPTS -jar `dirname $0`/sbt-launch.jar "$@"

I'm confused by the above settings (1), (2), and (3); I've tried several
different combinations, but all failed with
java.lang.OutOfMemoryError :(

What's the difference between the JVM heap size and spark.executor.memory,
and how should I set them?

I've read some docs and still cannot fully understand:

spark.executor.memory: Amount of memory to use per executor process, in the
same format as JVM memory strings (e.g. 512m, 2g).

spark.storage.memoryFraction: Fraction of the Java heap to use for Spark's
memory cache (0.6 by default), i.e.

cache capacity = spark.storage.memoryFraction * spark.executor.memory

Does that mean spark.executor.memory = the JVM heap size?
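
As a rough sanity check of that formula (assumptions: the default
memoryFraction of 0.6, and that the fraction is applied to the heap the
JVM actually reports available, which is typically a bit under -Xmx):

// With -Xmx512m, Runtime.getRuntime.maxMemory comes out around 495 MB.
val availableMb = 495.0               // MB, approximate (assumption)
val cacheCapacity = 0.6 * availableMb // = 297.0 MB, which matches the
                                      // "MemoryStore started with capacity
                                      // 297.0 MB" line in the logs below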

Here are the logs:
[info] Running SimpleApp 
14/04/24 10:59:41 WARN util.Utils: Your hostname, ubuntu resolves to a
loopback address: 127.0.1.1; using 192.168.0.113 instead (on interface eth0)
14/04/24 10:59:41 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to
another address
14/04/24 10:59:42 INFO slf4j.Slf4jLogger: Slf4jLogger started
14/04/24 10:59:42 INFO Remoting: Starting remoting
14/04/24 10:59:42 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://spark@ubuntu.local:46864]
14/04/24 10:59:42 INFO Remoting: Remoting now listens on addresses:
[akka.tcp://spark@ubuntu.local:46864]
14/04/24 10:59:42 INFO spark.SparkEnv: Registering BlockManagerMaster
14/04/24 10:59:42 INFO storage.DiskBlockManager: Created local directory at
/tmp/spark-local-20140424105942-362c
14/04/24 10:59:42 INFO storage.MemoryStore: MemoryStore started with
capacity 297.0 MB.
14/04/24 10:59:42 INFO network.ConnectionManager: Bound socket to port 34146
with id = ConnectionManagerId(ubuntu.local,34146)
14/04/24 10:59:42 INFO storage.BlockManagerMaster: Trying to register
BlockManager
14/04/24 10:59:42 INFO storage.BlockManagerMasterActor$BlockManagerInfo:
Registering block manager ubuntu.local:34146 with 297.0 MB RAM
14/04/24 10:59:42 INFO storage.BlockManagerMaster: Registered BlockManager
14/04/24 10:59:43 INFO spark.HttpServer: Starting HTTP Server
14/04/24 10:59:43 INFO server.Server: jetty-7.6.8.v20121106
14/04/24 10:59:43 INFO server.AbstractConnector: Started
SocketConnector@0.0.0.0:58936
14/04/24 10:59:43 INFO broadcast.HttpBroadcast: Broadcast server started at
http://192.168.0.113:58936
14/04/24 10:59:43 INFO spark.SparkEnv: Registering MapOutputTracker
14/04/24 10:59:43 INFO spark.HttpFileServer: HTTP File server directory is
/tmp/spark-ce78fc2c-097d-4053-991d-b6bf140d6c33
14/04/24 10:59:43 INFO spark.HttpServer: Starting HTTP Server
14/04/24 10:59:43 INFO server.Server: jetty-7.6.8.v20121106
14/04/24 10:59:43 INFO server.AbstractConnector: Started
SocketConnector@0.0.0.0:56414
14/04/24 10:59:43 INFO server.Server: jetty-7.6.8.v20121106
14/04/24 10:59:43 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/storage/rdd,null}
14/04/24 10:59:43 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/storage,null}
14/04/24 10:59:43 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/stages/stage,null}
14/04/24 10:59:43 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/stages/pool,null}
14/04/24 10:59:43 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/stages,null}
14/04/24 10:59:43 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/environment,null}
14/04/24 10:59:43 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/executors,null}
14/04/24 10:59:43 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/metrics/json,null}
14/04/24 10:59:43 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/static,null}
14/04/24 10:59:43 INFO handler.ContextHandler: started
o.e.j.s.h.ContextHandler{/,null}
14/04/24 10:59:43 INFO server.AbstractConnector: Started
SelectChannelConnector@0.0.0.0:4040
14/04/24 10:59:43 INFO ui.SparkUI: Started Spark Web UI at
http://ubuntu.local:4040
14/04/24 10:59:43 INFO client.AppClient$ClientActor: Connecting to master
spark://127.0.0.1:7077...
14/04/24 10:59:44 INFO cluster.SparkDeploySchedulerBackend: Connected to
Spark cluster with app ID app-20140424105944-0001
14/04/24 10:59:44 INFO client.AppClient$ClientActor: Executor added:
app-20140424105944-0001/0 on worker-20140424105022-ubuntu.local-40058
(ubuntu.local:40058) with 1 cores
14/04/24 10:59:44 INFO cluster.SparkDeploySchedulerBac