[ 
https://issues.apache.org/jira/browse/SPARK-55361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Casimir Giesler updated SPARK-55361:
------------------------------------
    Description: 
python/docs/source/tutorial/sql/python_data_source.rst gives an example of using a PyArrow {{RecordBatch}}.

In the example, the schema is specified twice: once for Spark
{code:python}
def schema(self):
    return "key int, value string"
{code}
and then again for PyArrow
{code:python}
def read(self, partition):
    ...
    schema = pa.schema([("key", pa.int32()), ("value", pa.string())])
{code}
I am proposing to change the documentation to specify only a Spark schema and to use {{pyspark.sql.pandas.types.to_arrow_schema()}} to convert the Spark schema to Arrow.
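
A minimal sketch of what the revised example could look like (class names here are illustrative, not the exact tutorial code; it assumes the reader receives the parsed {{StructType}} via {{DataSource.reader()}}, as in the current tutorial):
{code:python}
import pyarrow as pa

from pyspark.sql.datasource import DataSource, DataSourceReader
from pyspark.sql.pandas.types import to_arrow_schema


class KeyValueDataSource(DataSource):
    def schema(self):
        # The only place the fields are declared.
        return "key int, value string"

    def reader(self, schema):
        # Spark parses the DDL string above into a StructType
        # and passes it here.
        return KeyValueReader(schema)


class KeyValueReader(DataSourceReader):
    def __init__(self, schema):
        self.schema = schema

    def read(self, partition):
        # Derive the Arrow schema from the Spark schema instead of
        # repeating pa.schema([("key", pa.int32()), ("value", pa.string())]).
        arrow_schema = to_arrow_schema(self.schema)
        yield pa.RecordBatch.from_arrays(
            [pa.array([1, 2], type=pa.int32()), pa.array(["a", "b"])],
            schema=arrow_schema,
        )
{code}
With this, editing the DDL string in {{schema()}} is enough to keep the Spark and Arrow sides in sync.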

> add to_arrow_schema to python_data_source.rst to avoid double-specifying the schema
> -------------------------------------------------------------------------------------
>
>                 Key: SPARK-55361
>                 URL: https://issues.apache.org/jira/browse/SPARK-55361
>             Project: Spark
>          Issue Type: Documentation
>          Components: Documentation, Examples, PySpark
>    Affects Versions: 4.0.0, 4.0.1, 4.1.1
>            Reporter: Casimir Giesler
>            Priority: Minor
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
