GitHub user JoshRosen commented on the pull request:

    https://github.com/apache/spark/pull/5147#issuecomment-85392750
  
    > In general we have avoided exposing Partitions directly to users.
    
    Users can already access Partition objects by implementing custom RDDs, so I 
suppose a third-party library could expose something like 
`mapPartitionsWithPartition` and then use it to construct the arguments to a 
ProcessBuilder, etc.
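    For illustration, here is a rough sketch (not a working implementation) of 
what such a third-party helper might look like; the class name, 
`pipeWithPartition`, and the `--shard` flag are made up for this example:

```scala
import scala.reflect.ClassTag

import org.apache.spark.{Partition, TaskContext}
import org.apache.spark.rdd.RDD

// Hypothetical library class: like mapPartitions, but the user's function
// also receives the Partition object for the partition being computed.
class MapPartitionsWithPartitionRDD[T: ClassTag, U: ClassTag](
    prev: RDD[T],
    f: (Partition, Iterator[T]) => Iterator[U])
  extends RDD[U](prev) {

  // One-to-one with the parent's partitions.
  override def getPartitions: Array[Partition] = prev.partitions

  override def compute(split: Partition, context: TaskContext): Iterator[U] =
    f(split, prev.iterator(split, context))
}

object PipeWithPartition {
  // Sketch of a piped variant built on top: the external command's arguments
  // can now depend on the Partition (here, just its index).
  def pipeWithPartition(rdd: RDD[String], command: String): RDD[String] =
    new MapPartitionsWithPartitionRDD[String, String](rdd, (part, iter) => {
      val proc = new ProcessBuilder(command, "--shard", part.index.toString).start()
      // A real implementation would also write `iter` to the process's stdin
      // on a separate thread and check the exit code, as PipedRDD does.
      scala.io.Source.fromInputStream(proc.getInputStream).getLines()
    })
}
```

    The point is just that everything this needs (`RDD#partitions`, 
`RDD#iterator`, `Partition#index`) is already reachable from a custom RDD, so 
it doesn't have to live in core.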
    
    > Since this can be implemented as a helper function outside Spark, maybe 
this doesn't need to go into the core API itself?
    
    In retrospect, most of the PipedRDD enhancements probably should have gone 
into libraries rather than core, since the current API is already kind of 
unwieldy.
     

