[
https://issues.apache.org/jira/browse/SPARK-13192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen resolved SPARK-13192.
-------------------------------
Resolution: Invalid
Questions should go to the user@ mailing list.
> Some memory leak problem?
> -------------------------
>
> Key: SPARK-13192
> URL: https://issues.apache.org/jira/browse/SPARK-13192
> Project: Spark
> Issue Type: Bug
> Components: Streaming
> Affects Versions: 1.6.0
> Reporter: uncleGen
>
> In my Spark Streaming job, the executor container was killed by YARN. The
> NodeManager log showed that the container's memory grew steadily until it
> exceeded the container memory limit. Notably, only the container hosting
> the receiver hit this problem. Is there a memory leak here, or is this a
> known memory leak issue?
> My code snippet:
> {code}
> dStream.foreachRDD(new Function<JavaRDD<T>, Void>() {
>     @Override
>     public Void call(JavaRDD<T> rdd) throws Exception {
>         T card = rdd.first();
>         String time = DateUtil.formatToDay(DateUtil.strLong(card.getTime()));
>         System.out.println("time:" + time + ", count:" + rdd.count());
>         return null;
>     }
> });
> {code}
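[Editor's aside, not part of the original report: the quoted snippet has two typos (`rdd.count` without parentheses, a missing semicolon), and it calls `first()` on every batch RDD, which throws on an empty batch. A minimal stand-alone sketch of the same per-batch pattern, using a hypothetical `FakeRDD` stand-in rather than Spark's `JavaRDD` so it runs without a cluster; all names below are illustrative assumptions, not Spark APIs.]

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.NoSuchElementException;

// Illustrative stand-in for JavaRDD<T>; NOT a Spark API.
class FakeRDD<T> {
    private final List<T> data;
    FakeRDD(List<T> data) { this.data = data; }
    T first() {
        // Mirrors Spark's behavior: first() on an empty RDD throws.
        if (data.isEmpty()) throw new NoSuchElementException("empty RDD");
        return data.get(0);
    }
    long count() { return data.size(); }
    boolean isEmpty() { return data.isEmpty(); }
}

public class ForeachBatchSketch {
    // Same shape as the reporter's foreachRDD body, with the typos fixed
    // and an emptiness guard so an empty batch does not throw.
    static String describeBatch(FakeRDD<Long> rdd) {
        if (rdd.isEmpty()) {
            return "empty batch";
        }
        long firstTime = rdd.first();
        return "time:" + firstTime + ", count:" + rdd.count();
    }

    public static void main(String[] args) {
        System.out.println(describeBatch(new FakeRDD<>(Arrays.asList(100L, 200L))));
        System.out.println(describeBatch(new FakeRDD<>(Collections.<Long>emptyList())));
    }
}
```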
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]