GitHub user uncleGen commented on the issue:

    https://github.com/apache/spark/pull/15915
  
    Generally, there are three ways to fix this issue:
    
    1. Check whether `chunkSize` exceeds `Int.MaxValue`, to narrow the scope of this issue.
    2. Provide a new config to set `chunkSize`.
    3. Reuse an existing config, such as `pageSize`, as the `chunkSize`.
    
    Basically, I don't like option 3, because users would have no way to know the low-level details, or the effect on the chunk size, when they modify `spark.buffer.pageSize`. As @viirya said:
    
    > `spark.broadcast.blockSize` has special meaning. I don't think we should 
replace it with pageSizeBytes
     
    Besides, `SparkEnv.get.memoryManager.pageSizeBytes` returns a `Long`, so there is still an underlying integer-overflow issue when that value is narrowed to an `Int` chunk size.
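    
    To illustrate the overflow concern, here is a minimal standalone sketch (the values are hypothetical, not Spark's actual defaults) of what happens when a `Long` page size larger than `Int.MaxValue` is narrowed to an `Int`:
    
    ```scala
    // Hypothetical value: a page size larger than Int.MaxValue.
    val pageSizeBytes: Long = 4L * 1024 * 1024 * 1024 // 4 GiB
    // Narrowing to Int keeps only the low 32 bits, silently producing 0 here.
    val chunkSize: Int = pageSizeBytes.toInt
    println(s"pageSizeBytes = $pageSizeBytes, chunkSize = $chunkSize")
    // prints: pageSizeBytes = 4294967296, chunkSize = 0
    ```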
    
    Introducing a new config is also a reasonable idea. For now, though, I'd like to add a check for values exceeding `Int.MaxValue`, to narrow the scope of this issue.
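    
    A minimal sketch of what such a check might look like (the helper name `checkedChunkSize` is illustrative, not an existing Spark method):
    
    ```scala
    // Illustrative helper: validate a requested chunk size before it is used as an Int.
    def checkedChunkSize(requested: Long): Int = {
      require(requested > 0 && requested <= Int.MaxValue,
        s"chunkSize must be in (0, ${Int.MaxValue}], got $requested")
      requested.toInt
    }
    ```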
    
    Any suggestions would be appreciated.
    
    @JoshRosen @srowen @viirya 

