GitHub user mateiz commented on the pull request:

    https://github.com/apache/spark/pull/455#issuecomment-44794147
  
    Regarding docs, there's now a longer section on data import in the 
programming guide 
(http://spark.apache.org/docs/latest/programming-guide.html#external-datasets), 
so you can document this feature in the Python part of that section. 
Eventually we should probably split data loading out into a separate doc, 
with tutorials for HBase, Cassandra, etc.
    
    Regarding PythonConverter, it's fine to mark it experimental for now, 
though ideally we would review it and commit to keeping it forever. It's a 
simple enough interface (given an Object, return another Object) that we can 
probably support it indefinitely, even if we later come up with a nicer or 
more optimized one.
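
    For reference, a minimal sketch of the shape of that interface, assuming 
a `Converter` trait whose `convert` method maps Any to Any (the actual trait 
name and signature in this PR may differ, and `TextToStringConverter` is a 
purely hypothetical example):

    ```scala
    import org.apache.hadoop.io.Text

    // Sketch of the interface described above (not necessarily the exact
    // trait shipped in this PR): given an object, return another object.
    trait Converter extends Serializable {
      def convert(obj: Any): Any
    }

    // Hypothetical converter: unwrap a Hadoop Text value into a plain
    // String so the Python side receives a directly picklable object.
    class TextToStringConverter extends Converter {
      def convert(obj: Any): Any = obj match {
        case t: Text => t.toString
        case other   => other
      }
    }
    ```

    From PySpark, a converter like this would presumably be referenced by 
its fully qualified class name when reading the Hadoop input, which is part 
of why such a small interface should be easy to keep stable.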

