westonpace commented on code in PR #33969:
URL: https://github.com/apache/arrow/pull/33969#discussion_r1100781597
##########
python/pyarrow/tests/test_dataset.py:
##########
@@ -5084,10 +5084,8 @@ def test_dataset_partition_with_slash(tmpdir):
read_table = ds.dataset(
source=path,
format='ipc',
- partitioning='hive',
- schema=pa.schema([pa.field("exp_id", pa.int32()),
- pa.field("exp_meta", pa.utf8())])
- ).to_table().combine_chunks()
+ schema=dt_table.schema,
Review Comment:
I am also confused. Here is my understanding of this test:
* A user wants to partition on column exp_meta
* The values in exp_meta contain slashes
* We should be able to partition on that column and still write and then read the dataset
The files that will get created will be of the shape:
```
<tmpdir>/slash-writer-x/exp_meta=experiment%2FA%2Ff.csv/chunk-0.parquet
```
There should be no need to specify a schema manually to read this dataset: dataset discovery should infer it, including the partition column. Specifying a schema should also be harmless, but doing so makes the test less clear.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]