10110346 opened a new pull request #24187: [SPARK-27256][CORE][SQL] If a
configuration is used to set a number of bytes, we'd better use `bytesConf`.
URL: https://github.com/apache/spark/pull/24187
 
 
   ## What changes were proposed in this pull request?
   Currently, if we want to set `spark.sql.files.maxPartitionBytes` to
256 megabytes, we must write `spark.sql.files.maxPartitionBytes=268435456`,
which is unfriendly to users.
   
   If we instead set `spark.sql.files.maxPartitionBytes=256M`, we
 encounter this exception:
   ```
   Exception in thread "main" java.lang.IllegalArgumentException: 
spark.sql.files.maxPartitionBytes should be long, but was 256M
           at 
org.apache.spark.internal.config.ConfigHelpers$.toNumber(ConfigBuilder.scala:34)
   ```
   This PR uses `bytesConf` in place of `longConf` or `intConf` wherever a
configuration sets a number of bytes, so that size suffixes are accepted.
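
   A minimal, self-contained sketch of the size-string parsing that `bytesConf`
enables (this is illustrative only, not Spark's actual implementation, which
delegates to `JavaUtils.byteStringAs`; the unit table here is an assumption
covering the common binary suffixes):
   ```scala
   // Hypothetical demo object: parses "256m"-style byte strings into a long,
   // mirroring the behavior users get once a config is declared with bytesConf.
   object ByteStringDemo {
     private val units = Map(
       "b" -> 1L,
       "k" -> 1024L,
       "m" -> 1024L * 1024,
       "g" -> 1024L * 1024 * 1024
     )

     def byteStringAsBytes(str: String): Long = {
       val s = str.trim.toLowerCase
       // Split into the leading digits and the trailing unit suffix.
       val (num, suffix) = s.span(_.isDigit)
       val multiplier =
         if (suffix.isEmpty) 1L // plain long, e.g. "268435456"
         else units.getOrElse(suffix,
           throw new IllegalArgumentException(s"Unknown size unit: $suffix"))
       num.toLong * multiplier
     }

     def main(args: Array[String]): Unit = {
       // "256m" and "268435456" denote the same number of bytes.
       println(byteStringAsBytes("256m"))      // 268435456
       println(byteStringAsBytes("268435456")) // 268435456
     }
   }
   ```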
   ## How was this patch tested?
   1. Existing unit tests
   2. Manual testing

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
