Github user jerryshao commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14617#discussion_r109128494
  
    --- Diff: 
core/src/main/scala/org/apache/spark/storage/StorageStatusListener.scala ---
    @@ -74,8 +74,11 @@ class StorageStatusListener(conf: SparkConf) extends 
SparkListener {
         synchronized {
           val blockManagerId = blockManagerAdded.blockManagerId
           val executorId = blockManagerId.executorId
    -      val maxMem = blockManagerAdded.maxMem
    -      val storageStatus = new StorageStatus(blockManagerId, maxMem)
    +      // These two fields keep compatibility with old event logs, which
    +      // record only max on-heap memory. So maxOnHeapMem falls back to
    +      // maxMem, and maxOffHeapMem is set to 0.
    +      val maxOnHeapMem = 
blockManagerAdded.maxOnHeapMem.getOrElse(blockManagerAdded.maxMem)
    --- End diff --
    
    This was changed by my last PR about on-heap memory: with that patch, max memory is the sum of on-heap and off-heap memory. But if a user has an older event log, that max memory refers only to on-heap memory.
    
    Let me update the code.
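
    The fallback discussed above can be sketched as follows. This is a minimal illustration of the `getOrElse` compatibility pattern, using hypothetical simplified types (`BlockManagerAddedEvent`, `resolveMemory`) rather than the actual Spark classes:

    ```scala
    // Hypothetical simplified event type: old event logs carry only maxMem,
    // newer ones also carry the split on-heap / off-heap fields.
    case class BlockManagerAddedEvent(
        maxMem: Long,                 // in old logs, this is on-heap memory only
        maxOnHeapMem: Option[Long],   // present only in newer event logs
        maxOffHeapMem: Option[Long])  // present only in newer event logs

    def resolveMemory(event: BlockManagerAddedEvent): (Long, Long) = {
      // Old logs record only on-heap memory, so fall back to maxMem
      // and treat off-heap memory as 0.
      val onHeap  = event.maxOnHeapMem.getOrElse(event.maxMem)
      val offHeap = event.maxOffHeapMem.getOrElse(0L)
      (onHeap, offHeap)
    }
    ```

    With an old-style event, `resolveMemory(BlockManagerAddedEvent(1024L, None, None))` yields `(1024L, 0L)`; with a new-style event carrying both fields, the recorded values are used directly.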

