ninebigbig opened a new pull request, #39884:
URL: https://github.com/apache/spark/pull/39884

   The default value of CONFIG_MAP_MAXSIZE should not be greater than 1048576
   
   ### What changes were proposed in this pull request?
   This PR changes the default value of CONFIG_MAP_MAXSIZE from 1572864 (1.5 MiB) to 1048576 (1.0 MiB).
   
   
   ### Why are the changes needed?
   When a job is submitted by Spark on K8s with a ConfigMap, spark-submit calls the K8s POST API "api/v1/namespaces/default/configmaps". The size of the ConfigMap is validated by this API, and it must not be greater than 1048576 bytes.
   The previous comment cites the explanation in https://etcd.io/docs/v3.4/dev-guide/limit/:
   "etcd is designed to handle small key value pairs typical for metadata. Larger requests will work, but may increase the latency of other requests. By default, the maximum size of any request is 1.5 MiB. This limit is configurable through --max-request-bytes flag for etcd server."
   This explanation is from the perspective of etcd, not K8s.
   So the default value of the ConfigMap size limit in Spark should not be greater than 1048576.
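
   The mismatch above can be sketched as follows. This is an illustration, not Spark or K8s code; the constant names and the `fits_in_configmap` helper are hypothetical, standing in for the API server's size validation of ConfigMap data:

   ```python
   # Illustrative only: why a 1.5 MiB default clashes with the K8s ConfigMap cap.
   K8S_CONFIGMAP_MAX_BYTES = 1048576  # 1.0 MiB, enforced by the K8s API server
   OLD_SPARK_DEFAULT = 1572864        # 1.5 MiB, taken from the etcd request limit
   NEW_SPARK_DEFAULT = 1048576        # 1.0 MiB, the value proposed in this PR

   def fits_in_configmap(total_bytes: int) -> bool:
       """Mimics the API server's size check on ConfigMap data (hypothetical helper)."""
       return total_bytes <= K8S_CONFIGMAP_MAX_BYTES

   # A payload sized to the old Spark default would be rejected by the API server,
   # while one sized to the new default passes:
   print(fits_in_configmap(OLD_SPARK_DEFAULT))  # False
   print(fits_in_configmap(NEW_SPARK_DEFAULT))  # True
   ```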
   
   
   
   ### Does this PR introduce _any_ user-facing change?
   Yes.
   In practice, the size of a ConfigMap rarely exceeds 1572864, or even 1048576, bytes, so most users will not notice the change.
   
   
   ### How was this patch tested?
   Tested locally.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

