GitHub user koeninger commented on the pull request:

    https://github.com/apache/spark/pull/11921#issuecomment-200877678
  
    I don't know that this PR will do much harm, since it's just one more 
conditional check, and a KafkaRDD has a storage level of NONE by default.
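    (As a quick sanity check in spark-shell, assuming a running SparkContext 
`sc`: any freshly created RDD, KafkaRDD included, reports StorageLevel.NONE 
until it's explicitly persisted.)

    ```scala
    import org.apache.spark.storage.StorageLevel

    // A freshly created RDD starts out unpersisted.
    val rdd = sc.parallelize(1 to 10)
    assert(rdd.getStorageLevel == StorageLevel.NONE)

    // Only an explicit persist() changes the storage level.
    rdd.persist(StorageLevel.MEMORY_ONLY)
    assert(rdd.getStorageLevel == StorageLevel.MEMORY_ONLY)
    ```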
    
    But I also don't know that it's actually a good solution.
    
    Because the PR doesn't have any tests, and the JIRA doesn't include any 
code that reproduces the problem, I can't tell what you were trying to 
accomplish.
    
    I don't know that caching a KafkaRDD directly ever makes sense, as 
opposed to caching the result of a transformation applied to a KafkaRDD.  For 
instance, the iterator of the KafkaRDD for the new 0.10 Kafka consumer 
probably won't be serializable at all, and the discussion around that point 
seems to agree that's a good thing.
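
    For what it's worth, here's a rough sketch of the pattern I mean, against 
the 0.8 direct stream API; the broker address, topic name, and batch interval 
are placeholders, not anything taken from the PR:

    ```scala
    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.kafka.KafkaUtils
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object CacheTransformedNotKafkaRDD {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("cache-transformed-not-kafkardd")
        val ssc = new StreamingContext(conf, Seconds(5))

        // Placeholder broker and topic names.
        val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
        val topics = Set("events")

        val stream = KafkaUtils.createDirectStream[
          String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics)

        stream.foreachRDD { rdd =>
          // Cache the result of a transformation, not the KafkaRDD itself:
          // the mapped RDD carries no Kafka consumer state, so persisting
          // it doesn't raise any serializability questions.
          val values = rdd.map { case (_, v) => v }
          values.persist(StorageLevel.MEMORY_ONLY)
          // Two actions over the same data, which is what makes caching pay off.
          println(s"batch size: ${values.count()}, distinct: ${values.distinct().count()}")
          values.unpersist()
        }

        ssc.start()
        ssc.awaitTermination()
      }
    }
    ```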

