jorisvandenbossche commented on a change in pull request #8064:
URL: https://github.com/apache/arrow/pull/8064#discussion_r485028789



##########
File path: python/pyarrow/tests/test_parquet.py
##########
@@ -2936,7 +2939,34 @@ def test_write_to_dataset_with_partitions_and_index_name(
 @pytest.mark.pandas
 @parametrize_legacy_dataset
 def test_write_to_dataset_no_partitions(tempdir, use_legacy_dataset):
-    _test_write_to_dataset_no_partitions(str(tempdir))
+    _test_write_to_dataset_no_partitions(str(tempdir), use_legacy_dataset)
+
+
[email protected]
+@parametrize_legacy_dataset
+def test_write_to_dataset_pathlib(tempdir, use_legacy_dataset):
+    _test_write_to_dataset_with_partitions(
+        tempdir / "test1", use_legacy_dataset)
+    _test_write_to_dataset_no_partitions(
+        tempdir / "test2", use_legacy_dataset)
+
+
+@pytest.mark.pandas
+@pytest.mark.s3
+@parametrize_legacy_dataset
+def test_write_to_dataset_pathlib_nonlocal(
+    tempdir, s3_example_s3fs, use_legacy_dataset
+):
+    # pathlib paths are only accepted for local files
+    fs, _ = s3_example_s3fs

Review comment:
       The problem is that this uses the "legacy" filesystems (I am going 
to add support for the new filesystems once the dataset-based parquet writing 
is merged, and even then it might take a different code path), and I don't think 
we have a mock filesystem in pyarrow.filesystem? (only the local one, since we 
also can't use HDFS)
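
For context, the reason pathlib paths only work for local files is that they have to be stringified before being handed to a filesystem. A minimal, self-contained sketch of that kind of normalization (a hypothetical helper loosely mirroring what pyarrow does internally; the name `stringify_path` is illustrative, not pyarrow's public API):

```python
import os
import pathlib

def stringify_path(path):
    """Accept a str or os.PathLike and return a plain string path.

    Hypothetical helper for illustration: pathlib.Path objects can only
    describe local filesystem locations, so remote locations (e.g. s3)
    must instead be passed as strings/URIs together with a filesystem.
    """
    if isinstance(path, str):
        return path
    if isinstance(path, os.PathLike):
        return os.fspath(path)
    raise TypeError(f"not a path-like object: {path!r}")

# Local paths round-trip to strings; there is no way to express an
# s3 bucket as a pathlib.Path, hence the "local only" restriction.
print(stringify_path(pathlib.Path("test1") / "part=0"))
```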




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

