Re: High virtual memory consumption on spark-submit client.

2016-05-13 Thread jone
No, I have set master to yarn-cluster.
While SparkPi is running, the output of free -t is as follows:
[running]mqq@10.205.3.29:/data/home/hive/conf$ free -t
             total       used       free     shared    buffers     cached
Mem:      32740732   32105684     635048          0     683332   28863456
-/+ buffers/cache:    2558896   30181836
Swap:      2088952      60320    2028632
Total:    34829684   32166004    2663680
After SparkPi succeeds, the output is as follows:
[running]mqq@10.205.3.29:/data/home/hive/conf$ free -t
             total       used       free     shared    buffers     cached
Mem:      32740732   31614452    1126280          0     683624   28863096
-/+ buffers/cache:    2067732   30673000
Swap:      2088952      60320    2028632
Total:    34829684   31674772    3154912
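Comparing the two snapshots, the "-/+ buffers/cache" used figure drops from 2558896 KB to 2067732 KB once the job finishes, i.e. the client released roughly 480 MB of real memory. A quick check in the shell (figures taken from the two free -t runs above):

# difference in used memory excluding buffers/cache, in KB
echo $((2558896 - 2067732))    # prints 491164, about 480 MB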
Mich Talebzadeh wrote on 13 May 2016 at 14:47:

Is this a standalone setup on a single host where the executor runs inside the driver? Also run

free -t

to see the virtual memory usage, which is basically swap space:

             total       used       free     shared    buffers     cached
Mem:      24546308   24268760     277548          0    1088236   15168668
-/+ buffers/cache:    8011856   16534452
Swap:      2031608        304    2031304
Total:    26577916   24269064    2308852

Dr Mich Talebzadeh

LinkedIn  https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

http://talebzadehmich.wordpress.com

On 13 May 2016 at 07:36, Jone Zhang wrote:

Mich, do you want this?
==
[running]mqq@10.205.3.29:/data/home/hive/conf$ ps aux | grep SparkPi
mqq      20070  3.6  0.8 10445048 267028 pts/16 Sl+ 13:09   0:11
/data/home/jdk/bin/java
-Dlog4j.configuration=file:///data/home/spark/conf/log4j.properties
-cp /data/home/spark/lib/*:/data/home/hadoop/share/hadoop/common/*:/data/home/hadoop/share/hadoop/common/lib/*:/data/home/hadoop/share/hadoop/yarn/*:/data/home/hadoop/share/hadoop/yarn/lib/*:/data/home/hadoop/share/hadoop/hdfs/*:/data/home/hadoop/share/hadoop/hdfs/lib/*:/data/home/hadoop/share/hadoop/tools/*:/data/home/hadoop/share/hadoop/mapreduce/*:/data/home/spark/conf/:/data/home/spark/lib/spark-assembly-1.4.1-hadoop2.5.1_150903.jar:/data/home/spark/lib/datanucleus-api-jdo-3.2.6.jar:/data/home/spark/lib/datanucleus-core-3.2.10.jar:/data/home/spark/lib/datanucleus-rdbms-3.2.9.jar:/data/home/hadoop/conf/:/data/home/hadoop/conf/:/data/home/spark/lib/*:/data/home/hadoop/share/hadoop/common/*:/data/home/hadoop/share/hadoop/common/lib/*:/data/home/hadoop/share/hadoop/yarn/*:/data/home/hadoop/share/hadoop/yarn/lib/*:/data/home/hadoop/share/hadoop/hdfs/*:/data/home/hadoop/share/hadoop/hdfs/lib/*:/data/home/hadoop/share/hadoop/tools/*:/data/home/hadoop/share/hadoop/mapreduce/*
-XX:MaxPermSize=256m org.apache.spark.deploy.SparkSubmit --master
yarn-cluster --class org.apache.spark.examples.SparkPi --queue spark
--num-executors 4
/data/home/spark/lib/spark-examples-1.4.1-hadoop2.5.1.jar 1
mqq      22410  0.0  0.0 110600  1004 pts/8    S+   13:14   0:00 grep SparkPi
[running]mqq@10.205.3.29:/data/home/hive/conf$ top -p 20070

top - 13:14:48 up 504 days, 19:17, 19 users,  load average: 1.41, 1.10, 0.99
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
Cpu(s): 18.1%us,  2.7%sy,  0.0%ni, 74.4%id,  4.5%wa,  0.0%hi,  0.2%si,  0.0%st
Mem:  32740732k total, 31606288k used,  1134444k free,   475908k buffers
Swap:  2088952k total,    61076k used,  2027876k free, 27594452k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
20070 mqq       20   0 10.0g 260m  32m S  0.0  0.8   0:11.38 java
==
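For reference, the client invocation behind that java process would have looked roughly like the following. The options are read off the ps output above; the bin/spark-submit path is an assumption based on the usual layout under /data/home/spark:

/data/home/spark/bin/spark-submit \
  --master yarn-cluster \
  --class org.apache.spark.examples.SparkPi \
  --queue spark \
  --num-executors 4 \
  /data/home/spark/lib/spark-examples-1.4.1-hadoop2.5.1.jar 1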

Harsh, the machine has 1 physical CPU core and 4 virtual CPU cores.

Thanks.

2016-05-13 13:08 GMT+08:00, Harsh J :
> How many CPU cores are on that machine? Read http://qr.ae/8Uv3Xq
>
> You can also confirm the above by running the pmap utility on your process
> and most of the virtual memory would be under 'anon'.
>
> On Fri, 13 May 2016 09:11 jone,  wrote:
>
>> The virtual memory is 9G when I run org.apache.spark.examples.SparkPi
>> in yarn-cluster mode with the default configurations.
>>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+
>> COMMAND
>>
>> 4519 mqq       20   0 9041m 248m  26m S  0.3  0.8   0:19.85
>> java
>>  I am curious why it is so high?
>>
>> Thanks.
>>
>





Re: High virtual memory consumption on spark-submit client.

2016-05-12 Thread Harsh J
How many CPU cores are on that machine? Read http://qr.ae/8Uv3Xq

You can also confirm the above by running the pmap utility on your process
and most of the virtual memory would be under 'anon'.
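A sketch of that check, substituting the spark-submit client's PID (e.g. 20070 from the top output elsewhere in this thread; the exact pmap column layout varies by platform):

pmap -x 20070 | grep -i anon    # the large mappings should show up as [ anon ]
pmap -x 20070 | tail -n 1       # grand total, which should be close to VIRT in top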

On Fri, 13 May 2016 09:11 jone,  wrote:

> The virtual memory is 9G when I run org.apache.spark.examples.SparkPi
> in yarn-cluster mode with the default configurations.
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+
> COMMAND
>
> 4519 mqq       20   0 9041m 248m  26m S  0.3  0.8   0:19.85
> java
>  I am curious why it is so high?
>
> Thanks.
>


Re: High virtual memory consumption on spark-submit client.

2016-05-12 Thread Mich Talebzadeh
Can you please do the following:

jps | grep SparkSubmit

and send the output of

ps aux | grep <PID>
top -p <PID>

and the output of

free
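A sketch chaining those together, assuming there is a single SparkSubmit process on the box:

PID=$(jps | grep SparkSubmit | awk '{print $1}')
ps aux | grep "$PID"
top -b -n 1 -p "$PID"   # -b -n 1 takes one batch-mode snapshot, easy to paste into a reply
free -t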

HTH



Dr Mich Talebzadeh



LinkedIn  https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 13 May 2016 at 04:40, jone  wrote:

> The virtual memory is 9G when I run org.apache.spark.examples.SparkPi
> in yarn-cluster mode with the default configurations.
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+
> COMMAND
>
> 4519 mqq       20   0 9041m 248m  26m S  0.3  0.8   0:19.85
> java
>  I am curious why it is so high?
>
> Thanks.
>


High virtual memory consumption on spark-submit client.

2016-05-12 Thread jone
The virtual memory is 9G when I run org.apache.spark.examples.SparkPi in yarn-cluster mode with the default configurations.
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 4519 mqq       20   0 9041m 248m  26m S  0.3  0.8   0:19.85 java
 I am curious why it is so high?
Thanks.
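(To see those two numbers side by side for the client process, one option is ps; a sketch, with 4519 being the PID from the top line above:)

ps -o pid,vsz,rss,cmd -p 4519   # vsz/rss are in KB: ~9 GB virtual vs ~250 MB resident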