Casimir Giesler created SPARK-55361:
---------------------------------------
Summary: add to_arrow_schema to python_data_source.rst to avoid
double-specifying the schema
Key: SPARK-55361
URL: https://issues.apache.org/jira/browse/SPARK-55361
Project: Spark
Issue Type: Documentation
Components: Documentation, Examples, PySpark
Affects Versions: 4.1.1, 4.0.1, 4.0.0
Reporter: Casimir Giesler
python/docs/source/tutorial/sql/python_data_source.rst gives an example of
using a PyArrow RecordBatch.
In the example, the schema is specified twice: once for Spark
{code:python}
def schema(self):
    return "key int, value string"
{code}
and then again for PyArrow
{code:python}
def read(self, partition):
    ...
    schema = pa.schema([("key", pa.int32()), ("value", pa.string())])
{code}
I am proposing to change the documentation to specify only the Spark schema
and to use {{pyspark.sql.pandas.types.to_arrow_schema()}} to convert the
Spark schema to an Arrow schema.
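A minimal sketch of the proposed approach (assuming PySpark with PyArrow installed; {{StructType.fromDDL}} is used here only to parse the DDL string from the example):

{code:python}
from pyspark.sql.types import StructType
from pyspark.sql.pandas.types import to_arrow_schema

# The Spark schema, declared once, as in the data source's schema() method.
spark_schema = StructType.fromDDL("key int, value string")

# Derive the PyArrow schema from the Spark schema instead of
# spelling it out a second time in read().
arrow_schema = to_arrow_schema(spark_schema)
print(arrow_schema)
{code}

This keeps the Spark schema as the single source of truth, so a later change to {{schema()}} cannot drift out of sync with the Arrow schema used in {{read()}}.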
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]