liuxian created SPARK-27256:
-------------------------------

             Summary: If the configuration is used to set the number of bytes, we'd better use `bytesConf`.
                 Key: SPARK-27256
                 URL: https://issues.apache.org/jira/browse/SPARK-27256
             Project: Spark
          Issue Type: Improvement
          Components: Spark Core, SQL
    Affects Versions: 3.0.0
            Reporter: liuxian
Currently, if we want to configure `spark.sql.files.maxPartitionBytes` to 256 megabytes, we must set `spark.sql.files.maxPartitionBytes=268435456`, which is very unfriendly to users. And if we set it like this: `spark.sql.files.maxPartitionBytes=256M`, we encounter this exception:

_Exception in thread "main" java.lang.IllegalArgumentException: spark.sql.files.maxPartitionBytes should be long, but was 256M_
_at org.apache.spark.internal.config.ConfigHelpers$.toNumber(ConfigBuilder.scala:34)_



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
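Spark's internal `ConfigBuilder` already exposes a `bytesConf(ByteUnit)` builder that accepts size strings with units. As a rough, self-contained sketch of the kind of parsing such a config entry would provide (this is an illustrative example, not Spark's actual implementation; the object and method names here are hypothetical):

```scala
// Hypothetical sketch: parse a size string such as "256m" or "268435456"
// into a byte count, the way a bytesConf-style config entry would.
object ByteConf {
  // Binary unit multipliers (1k = 1024 bytes), matching common config conventions.
  private val units: Map[String, Long] = Map(
    "b" -> 1L,
    "k" -> 1024L,
    "m" -> 1024L * 1024L,
    "g" -> 1024L * 1024L * 1024L)

  def byteStringAsBytes(s: String): Long = {
    val trimmed = s.trim.toLowerCase
    // Split into the leading digits and the trailing unit suffix, if any.
    val (digits, suffix) = trimmed.span(_.isDigit)
    require(digits.nonEmpty, s"Invalid byte string: $s")
    val multiplier =
      if (suffix.isEmpty) 1L
      else units.getOrElse(suffix,
        throw new IllegalArgumentException(s"Unknown size unit: $suffix"))
    digits.toLong * multiplier
  }
}
```

With this kind of parsing in place, both `spark.sql.files.maxPartitionBytes=268435456` and `spark.sql.files.maxPartitionBytes=256M` would resolve to the same value, avoiding the `IllegalArgumentException` above.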