pitrou commented on a change in pull request #8064:
URL: https://github.com/apache/arrow/pull/8064#discussion_r485022899
##########
File path: python/pyarrow/tests/test_parquet.py
##########
@@ -2936,7 +2939,34 @@ def test_write_to_dataset_with_partitions_and_index_name(
 @pytest.mark.pandas
 @parametrize_legacy_dataset
 def test_write_to_dataset_no_partitions(tempdir, use_legacy_dataset):
-    _test_write_to_dataset_no_partitions(str(tempdir))
+    _test_write_to_dataset_no_partitions(str(tempdir), use_legacy_dataset)
+
+
+@pytest.mark.pandas
+@parametrize_legacy_dataset
+def test_write_to_dataset_pathlib(tempdir, use_legacy_dataset):
+    _test_write_to_dataset_with_partitions(
+        tempdir / "test1", use_legacy_dataset)
+    _test_write_to_dataset_no_partitions(
+        tempdir / "test2", use_legacy_dataset)
+
+
+@pytest.mark.pandas
+@pytest.mark.s3
+@parametrize_legacy_dataset
+def test_write_to_dataset_pathlib_nonlocal(
+    tempdir, s3_example_s3fs, use_legacy_dataset
+):
+    # pathlib paths are only accepted for local files
+    fs, _ = s3_example_s3fs

Review comment:
   Hmm. Ideally we wouldn't require S3 for such tests (it's optional, and it also makes the tests much more expensive to run). Can we use a mock filesystem or something?
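To illustrate the reviewer's suggestion, here is a minimal, hedged sketch of the dependency-injection pattern a mock filesystem enables. The `InMemoryFileSystem` class and `write_to_dataset` helper below are hypothetical stand-ins (not pyarrow's actual API); they only mirror the behavior the test exercises, namely that pathlib paths are accepted for local files but rejected when an explicit (non-local) filesystem is passed:

```python
import io
import pathlib


class InMemoryFileSystem:
    """Dict-backed stand-in for a remote filesystem, so tests need no S3."""

    def __init__(self):
        self._files = {}

    def open(self, path, mode="wb"):
        # Store an in-memory buffer under the stringified path.
        buf = io.BytesIO()
        self._files[str(path)] = buf
        return buf

    def exists(self, path):
        return str(path) in self._files


def write_to_dataset(path, filesystem=None):
    """Hypothetical helper mirroring the check under test: pathlib paths
    are only accepted when writing to the local filesystem."""
    if filesystem is not None and isinstance(path, pathlib.PurePath):
        raise TypeError("pathlib paths are only accepted for local files")
    if filesystem is not None:
        filesystem.open(path).write(b"data")
    return str(path)


if __name__ == "__main__":
    fs = InMemoryFileSystem()
    # A pathlib path combined with a non-local filesystem is rejected.
    try:
        write_to_dataset(pathlib.Path("bucket/key"), filesystem=fs)
    except TypeError:
        print("rejected pathlib path for non-local filesystem")
    # A plain string path works against the mock filesystem.
    write_to_dataset("bucket/key", filesystem=fs)
    print(fs.exists("bucket/key"))
```

Because the filesystem is injected, the same assertion that `test_write_to_dataset_pathlib_nonlocal` makes against `s3_example_s3fs` could be made against such a mock, keeping the S3 marker for tests that genuinely need a remote store.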