casgie commented on code in PR #54180:
URL: https://github.com/apache/spark/pull/54180#discussion_r2787609322
##########
python/docs/source/tutorial/sql/python_data_source.rst:
##########
@@ -534,14 +535,14 @@ The following example demonstrates how to implement a basic Data Source using Ar
 class ArrowBatchDataSourceReader(DataSourceReader):
     def __init__(self, schema, options):
         self.schema: str = schema
+        self.arrow_schema = to_arrow_schema(self.schema)
Review Comment:
Yes, curiously it worked (tested on Databricks DBR 17.3 with Spark 4.0.0).
We can of course change it to:
```python
from pyspark.sql.types import StructType, StructField, IntegerType, StringType

def schema(self):
    return StructType([
        StructField("key", IntegerType(), True),
        StructField("value", StringType(), True),
    ])
```
Regarding your second point: yes, that's fair, but maybe we can find another way to avoid making users specify the schema twice?
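For example, something like this, just a sketch: it assumes `schema` arrives as a `StructType` rather than a DDL string, and reuses the `to_arrow_schema` helper the tutorial already imports, so the Spark schema stays the single source of truth and the Arrow schema is derived from it.
```python
from pyspark.sql.datasource import DataSourceReader
from pyspark.sql.pandas.types import to_arrow_schema
from pyspark.sql.types import StructType

class ArrowBatchDataSourceReader(DataSourceReader):
    def __init__(self, schema: StructType, options):
        # Keep the Spark schema as the only schema users declare;
        # derive the Arrow schema from it instead of writing it twice.
        self.schema = schema
        self.arrow_schema = to_arrow_schema(self.schema)
```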
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]