Github user marmbrus commented on the pull request:

    https://github.com/apache/spark/pull/5475#issuecomment-91931540
  
    Thanks for working on this and for adding a test!  I have two issues with 
the way this is being implemented:
     - We are still leaking accumulators on a per-table-scan basis, so you are 
still going to have a problem if you never explicitly uncache the table.
     - It seems like we should be able to do this without adding a global hash 
map. In particular, when you are uncaching you have a handle to the 
InMemoryRelation. Perhaps that class should define an `uncache` method that 
takes care of both `unpersist`ing the data and unregistering the 
accumulators.
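
    A rough sketch of that suggestion (all names here are hypothetical 
stand-ins, not the real Spark classes; the point is only that the relation 
owns both the cached data and its accumulators, so one call can clean up 
both):

```scala
import scala.collection.mutable

// Hypothetical stand-in for a global accumulator registry; real Spark keeps
// accumulators alive through a context-level map.
object AccumulatorRegistry {
  private val registered = mutable.Set[Long]()
  def register(id: Long): Unit = registered += id
  def unregister(id: Long): Unit = registered -= id
  def size: Int = registered.size
}

// Simplified model of InMemoryRelation: it registers its accumulators on
// construction and knows how to tear everything down again.
class InMemoryRelationSketch(accumIds: Seq[Long]) {
  accumIds.foreach(AccumulatorRegistry.register)
  var persisted: Boolean = true

  // The suggested API: unpersist the cached buffers *and* unregister the
  // accumulators in one call, so nothing leaks when the table is uncached.
  def uncache(): Unit = {
    persisted = false // stands in for cachedColumnBuffers.unpersist()
    accumIds.foreach(AccumulatorRegistry.unregister)
  }
}
```

    With this shape, the uncache path needs no extra global hash map: the 
handle it already has is enough to release everything the relation created.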
    
    The accumulators inside of the table scan are clearly harder to deal 
with, but perhaps we can just make those lazy vals that are only initialized 
during testing, when some conf is set?
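
    One way to sketch that lazy, conf-gated idea (the conf flag and names 
are made up for illustration; a real version would hang off SQLConf):

```scala
// Hypothetical test-only flag; in real Spark this would be a conf setting.
object TestConf {
  var collectScanMetrics: Boolean = false
}

// Simplified scan node: the metric is a lazy val, so it is only created when
// first touched, and even then it is only materialized if the test conf is
// on -- production scans never register anything.
class TableScanSketch {
  lazy val readPartitions: Option[Long] =
    if (TestConf.collectScanMetrics) Some(0L) else None
}
```

    Production code paths would then pay neither the registration nor the 
cleanup cost for these per-scan accumulators.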


