[ https://issues.apache.org/jira/browse/SPARK-20809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16020744#comment-16020744 ]

James Porritt commented on SPARK-20809:
---------------------------------------

Many thanks, this put me on track for the solution. I needed to pass 
--driver-memory=16g on the spark-submit command line rather than set it in the 
code, presumably because the driver JVM is already running by the time the 
in-code configuration is applied.
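
For anyone hitting the same thing, the invocation that worked looks roughly 
like this (the <system path> and <home directory> placeholders are the same 
ones used in the original report):

{noformat}
<system path>/spark-2.1.1-bin-hadoop2.7/bin/spark-submit --driver-memory=16g <home directory>/writeTest.py
{noformat}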

I'd done some tests on the sentence generator and worked out how to get it to 
return a roughly 25K string; multiplied across the 50,000 tuples, that comes 
to about 1.2G.
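
As a back-of-the-envelope check on that estimate (raw string data only, before 
any serialization overhead):

{code}
# Rough size estimate for the generated rows:
# 500 base tuples, replicated 100 times, each carrying a ~25 KB string.
rows_total = 500 * 100            # 50,000 tuples
string_bytes = 25 * 1024          # ~25 KB of lorem ipsum text per tuple
total_bytes = rows_total * string_bytes
print(total_bytes / 1024.0 ** 3)  # ~1.2 GB of raw string data
{code}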

> PySpark: Java heap space issue despite apparently being within memory limits
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-20809
>                 URL: https://issues.apache.org/jira/browse/SPARK-20809
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.1.1
>         Environment: Linux x86_64
>            Reporter: James Porritt
>
> I have the following script:
> {code}
> import itertools
> import loremipsum
> from pyspark import SparkContext, SparkConf
> from pyspark.sql import SparkSession
> conf = SparkConf().set("spark.cores.max", "16") \
>     .set("spark.driver.memory", "16g") \
>     .set("spark.executor.memory", "16g") \
>     .set("spark.executor.memory_overhead", "16g") \
>     .set("spark.driver.maxResultsSize", "0")
> sc = SparkContext(appName="testRDD", conf=conf)
> ss = SparkSession(sc)
> j = itertools.cycle(range(8))
> rows = [(i, j.next(), ' '.join(map(lambda x: x[2], loremipsum.generate_sentences(600)))) for i in range(500)] * 100
> rrd = sc.parallelize(rows, 128)
> {code}
> When I run it with:
> {noformat}
> <system path>/spark-2.1.1-bin-hadoop2.7/bin/spark-submit <home directory>/writeTest.py
> {noformat}
> it fails with a 'Java heap space' error:
> {noformat}
> py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.readRDDFromFile.
> : java.lang.OutOfMemoryError: Java heap space
>         at org.apache.spark.api.python.PythonRDD$.readRDDFromFile(PythonRDD.scala:468)
>         at org.apache.spark.api.python.PythonRDD.readRDDFromFile(PythonRDD.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:497)
>         at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
>         at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
>         at py4j.Gateway.invoke(Gateway.java:280)
>         at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
>         at py4j.commands.CallCommand.execute(CallCommand.java:79)
>         at py4j.GatewayConnection.run(GatewayConnection.java:214)
>         at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The data I create here approximates my actual data. The third element of each 
> tuple should be around 25k, and there are 50k tuples overall, so I estimate 
> around 1.2G of data in total.
> Why then does it fail? Every part of the system should have more than enough memory.


