No, as I say, it seems to just generate a warning. Compressed OOPs can't be used
with a heap of 32GB or more, so the JVM simply doesn't use them. That's why I am
asking what the problem is. Spark doesn't set this flag as far as I can tell;
maybe your environment does. This is in any event not a Spark issue per se.
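
If you want to confirm what the JVM actually decided at runtime (rather than what
was passed on the command line), something like this works; a minimal Java sketch,
assuming a HotSpot JVM where HotSpotDiagnosticMXBean is available:

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class CheckCompressedOops {
        public static void main(String[] args) {
            // Ask HotSpot for the effective value of UseCompressedOops.
            // With -Xmx of 32g or more the JVM turns it off (emitting a warning
            // if -XX:+UseCompressedOops was passed explicitly), regardless of
            // what the command line requested.
            HotSpotDiagnosticMXBean hotspot =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            System.out.println("UseCompressedOops = "
                    + hotspot.getVMOption("UseCompressedOops").getValue());
            System.out.println("Max heap (MB) = "
                    + Runtime.getRuntime().maxMemory() / (1024 * 1024));
        }
    }

Running it with -Xmx64g -XX:+UseCompressedOops should report false (along with the
startup warning), while with -Xmx6g it reports true.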

On Mon, Mar 23, 2020 at 9:40 AM angers.zhu <angers....@gmail.com> wrote:

> If -Xmx is bigger than 32g, the JVM will not use UseCompressedOops by
> default.
> Consider a case:
> if we set spark.driver.memory to 64g, set -XX:+UseCompressedOops in
> spark.executor.extraJavaOptions, and set SPARK_DAEMON_MEMORY = 6g,
> then with the current code the JVM gets a command line with -Xmx6g and
> -XX:+UseCompressedOops, so the JVM will run with -XX:+UseCompressedOops
> and use compressed OOPs.
>
> But since we set spark.driver.memory=64g, our JVM's max heap size will be
> 64g, yet we will use compressed OOPs. Wouldn't that be a problem?
>