[ 
https://issues.apache.org/jira/browse/SPARK-27124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789562#comment-16789562
 ] 

Hyukjin Kwon edited comment on SPARK-27124 at 3/11/19 1:32 PM:
---------------------------------------------------------------

I think we can mention that generally somewhere, say, that the Py4J approach can 
be used to access Spark's internal code. To be honest, this approach requires 
understanding Py4J, so we shouldn't focus on how to use it here; properly, the 
Py4J documentation should describe how to use it.


was (Author: hyukjin.kwon):
I think we can mention that generally somewhere, say, Py4J approach can be used 
to access to internal codes in Spark. To be honest, this approach requires to 
understand Py4J. So, we shouldn't focus on how to use it tho.

> Expose org.apache.spark.sql.avro.SchemaConverters as developer API
> ------------------------------------------------------------------
>
>                 Key: SPARK-27124
>                 URL: https://issues.apache.org/jira/browse/SPARK-27124
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark, SQL
>    Affects Versions: 3.0.0
>            Reporter: Gabor Somogyi
>            Priority: Minor
>
> org.apache.spark.sql.avro.SchemaConverters provides extremely useful APIs for 
> converting schemas between Spark SQL and Avro. These are reachable from the 
> Scala side but not from PySpark. I suggest adding this as a developer API to 
> ease development for PySpark users.
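The Py4J approach mentioned in the comment above could be sketched roughly as follows. This is a hypothetical illustration, not a supported API: `SchemaConverters` is Spark-internal, the `_jvm` and `_jdf` handles are private Py4J attributes that may change between releases, and the `spark-avro` package coordinates below are an assumption that must match the Spark/Scala version actually in use.

```python
# Sketch: calling org.apache.spark.sql.avro.SchemaConverters from PySpark
# through the Py4J gateway. Requires a running Spark JVM with the
# spark-avro artifact on the driver classpath.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    # Assumed coordinates; adjust to the Spark/Scala version in use.
    .config("spark.jars.packages", "org.apache.spark:spark-avro_2.12:3.0.0")
    .getOrCreate()
)

df = spark.range(1)  # any DataFrame whose schema we want as Avro

# _jvm is Py4J's view into the driver JVM; traversing the package path
# reaches the Scala object's static forwarder methods.
converters = spark.sparkContext._jvm.org.apache.spark.sql.avro.SchemaConverters

# Scala default arguments are not visible through Py4J, so all four
# parameters of toAvroType(catalystType, nullable, recordName, nameSpace)
# must be passed explicitly.
avro_type = converters.toAvroType(df._jdf.schema(), False, "topLevelRecord", "")
print(avro_type.toString())  # JSON form of the Avro record schema
```

As the comment notes, this works but leans on internals and on knowing Py4J, which is why exposing `SchemaConverters` as a developer API would be the cleaner path.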



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
