[jira] [Commented] (SPARK-15531) spark-class tries to use too much memory when running Launcher

2017-02-13 Thread Apache Spark (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-15531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15863628#comment-15863628 ]

Apache Spark commented on SPARK-15531:
--

User 'Pashugan' has created a pull request for this issue:
https://github.com/apache/spark/pull/16913

> spark-class tries to use too much memory when running Launcher
> --
>
> Key: SPARK-15531
> URL: https://issues.apache.org/jira/browse/SPARK-15531
> Project: Spark
>  Issue Type: Bug
>  Components: Deploy
>Affects Versions: 1.6.1, 2.0.0
> Environment: Linux running in Univa or Sun Grid Engine
>Reporter: mathieu longtin
>Assignee: Sean Owen
>Priority: Minor
>  Labels: launcher
> Fix For: 2.0.0
>
>
> When running Java on a server with a lot of memory but a rather small virtual 
> memory ulimit, Java will try to allocate a large memory pool and fail:
> {code}
> # System has 128GB of RAM but ulimit set to 7.5G
> $ ulimit -v
> 7812500
> $ java -client
> Error occurred during initialization of VM
> Could not reserve enough space for object heap
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.
> {code}
> This is a known issue with Java, but it is unlikely to be fixed.
> As a result, various Spark processes (spark-submit, master or workers) fail 
> to start when {{spark-class}} tries to run 
> {{org.apache.spark.launcher.Main}}.
> To fix this, add {{-Xmx128m}} to this line:
> {code}
> "$RUNNER" -Xmx128m -cp "$LAUNCH_CLASSPATH" org.apache.spark.launcher.Main "$@"
> {code}
> (https://github.com/apache/spark/blob/master/bin/spark-class#L71)
> We've been using 128m and that works in our setup. Considering that all the 
> launcher does is analyze the arguments and environment variables and emit a 
> command line, 128m should be plenty (see the sketch after this description). 
> All other calls to Java seem to include some value for -Xmx, so this is not 
> an issue elsewhere.
> I don't mind submitting a PR, but I'm sure somebody has opinions on the 128m 
> (bigger, smaller, configurable, ...), so I'd rather it be discussed first.
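
A quick way to sanity-check the proposed fix is to reproduce both the failure 
and the workaround in a subshell under the same virtual-memory limit (a hedged 
sketch, not from the original report; adjust the limit per host):

{code}
# Run in a subshell so the ulimit does not leak into the parent shell.
# Without -Xmx, the JVM tries to reserve ~32G of heap on a 128G host and dies:
$ (ulimit -v 7812500; java -version)
Error occurred during initialization of VM
Could not reserve enough space for object heap

# With an explicit small heap, the same JVM starts and prints its banner:
$ (ulimit -v 7812500; java -Xmx128m -version)
{code}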



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-15531) spark-class tries to use too much memory when running Launcher

2016-05-27 Thread Apache Spark (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-15531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15304054#comment-15304054 ]

Apache Spark commented on SPARK-15531:
--

User 'srowen' has created a pull request for this issue:
https://github.com/apache/spark/pull/13360



[jira] [Commented] (SPARK-15531) spark-class tries to use too much memory when running Launcher

2016-05-25 Thread mathieu longtin (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-15531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300986#comment-15300986 ]

mathieu longtin commented on SPARK-15531:
-

Correct: on a 128G server, just running {{java}} with no arguments will try to 
reserve 32G, regardless of the ulimit. That's "expected behavior" according to 
Oracle.
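
For reference, the heap size the JVM ergonomics picked can be inspected 
directly (a hedged sketch; run it in a shell without the tight ulimit, since 
the JVM must be able to start in order to report anything):

{code}
# PrintFlagsFinal dumps the final values of all JVM flags; on a 128G host
# MaxHeapSize comes out around 32G (a quarter of physical RAM).
$ java -XX:+PrintFlagsFinal -version 2>/dev/null | grep -i maxheapsize
{code}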




[jira] [Commented] (SPARK-15531) spark-class tries to use too much memory when running Launcher

2016-05-25 Thread Marcelo Vanzin (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-15531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300950#comment-15300950 ]

Marcelo Vanzin commented on SPARK-15531:


No, Mathieu described the launcher correctly. Adding the -Xmx should be fine; 
I just find it super weird that Java fails without it, since it's not as if 
the ulimit is low (7.5G according to the description).




[jira] [Commented] (SPARK-15531) spark-class tries to use too much memory when running Launcher

2016-05-25 Thread Sean Owen (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-15531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300860#comment-15300860 ]

Sean Owen commented on SPARK-15531:
---

The problem is really that ulimit; it is going to make any Java process 
without -Xmx fail, right?
Still, I wonder if it's just as well to put some liberal max heap size on the 
launcher... or do I misunderstand, and the launcher process is not merely 
kicking off other processes?




[jira] [Commented] (SPARK-15531) spark-class tries to use too much memory when running Launcher

2016-05-25 Thread mathieu longtin (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-15531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300666#comment-15300666 ]

mathieu longtin commented on SPARK-15531:
-

The VM that spark-class launches afterwards has an explicit -Xmx argument; it 
comes from --executor-memory, --driver-memory, or somewhere else.

This is only a problem when Java is left to decide what -Xmx should be: by 
default it picks a quarter of the physical memory, and it reserves it right 
away, regardless of actual need.
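
Until spark-class passes -Xmx itself, a stop-gap consistent with this is the 
standard JAVA_TOOL_OPTIONS environment variable, which HotSpot picks up for 
every JVM it launches; an explicit command-line -Xmx, like the ones Spark sets 
for drivers and executors, still wins. A hedged sketch, not part of the 
proposed patch:

{code}
# Give every JVM a small default heap; command-line -Xmx options are
# processed after JAVA_TOOL_OPTIONS, so Spark's explicit settings override it.
$ export JAVA_TOOL_OPTIONS="-Xmx128m"
$ java -version   # now starts even under the tight ulimit
{code}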




[jira] [Commented] (SPARK-15531) spark-class tries to use too much memory when running Launcher

2016-05-25 Thread Marcelo Vanzin (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-15531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15300623#comment-15300623 ]

Marcelo Vanzin commented on SPARK-15531:


I'm a little confused: how can the VM launched afterwards, which is generally 
much larger, run?
