qiaojizhen commented on PR #53383:
URL: https://github.com/apache/spark/pull/53383#issuecomment-3845550001

   > > > > @VindhyaG I have found another issue. If `spark.memory.offHeap.enabled=true`, the Driver never actually requests the off-heap memory in any cluster deploy mode, but the web UI still displays the value set by `spark.memory.offHeap.size`. So I think the Driver row in the web UI should display zero for "Off Heap Memory". Could you please fix this in this PR as well? Thanks!
   > > > 
   > > > 
   > > > @qiaojizhen I did not get what you mean by the Driver won't apply off-heap in cluster mode. Do you mean it is not supported in cluster mode?
   > > 
   > > 
   > > @VindhyaG Yes! The off-heap memory won't work for the Driver in any cluster mode. For example, with the following config:
   > > ```
   > > spark.driver.memory=1g 
   > > spark.driver.memoryOverhead=1g 
   > > spark.memory.offHeap.enabled=true
   > > spark.memory.offHeap.size=2g
   > > ```
   > > 
   > > 
   > > when I submit the job to YARN, the Driver container only gets 2G of memory (spark.driver.memory + spark.driver.memoryOverhead): [screenshot: YARN UI showing the driver/AM container allocated 2G]
   > > but the web UI still displays the off-heap memory: [screenshot: Spark web UI showing a non-zero "Off Heap Memory" value for the driver]
   > > and the source code in the [Spark YARN Client](https://github.com/apache/spark/blob/master/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala) shows that only spark.driver.memory and spark.driver.memoryOverhead are used when requesting the AM memory: [screenshot: Client.scala AM memory calculation]
   > 
   > @qiaojizhen So basically, the Driver will never have off-heap memory regardless of the setting. Is that correct? Would something like NA/null ("-" in the UI) be clearer, or should we just make it 0?
   
   @VindhyaG Yes, in cluster mode, the Driver's off-heap memory does not take effect regardless of whether the setting is enabled. I think "make it 0" is consistent with the existing logic.
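   
   For context, the container sizing being described can be sketched as follows. This is a simplified illustration only, not the actual `Client.scala` code; the object and method names here are hypothetical:
   
   ```scala
   // Hypothetical sketch of how the YARN client sizes the driver/AM container.
   // The real logic lives in
   // resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala.
   object AmMemorySketch {
     def amContainerMemoryMiB(
         driverMemoryMiB: Int,
         driverMemoryOverheadMiB: Int,
         offHeapSizeMiB: Int): Int = {
       // Only driver memory + overhead are requested from YARN;
       // spark.memory.offHeap.size is deliberately not added to the request,
       // which is why the driver never actually gets off-heap memory.
       driverMemoryMiB + driverMemoryOverheadMiB
     }
   
     def main(args: Array[String]): Unit = {
       // Config from this thread: 1g driver memory + 1g overhead, 2g off-heap.
       val total = amContainerMemoryMiB(1024, 1024, 2048)
       println(total) // 2048 MiB, i.e. the 2G container observed in YARN
     }
   }
   ```
   
   The off-heap size parameter is accepted but intentionally unused, mirroring how the container request ignores `spark.memory.offHeap.size` while the web UI still reports it.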


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

