This is an automated email from the ASF dual-hosted git repository.

jorisvandenbossche pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/arrow.git


The following commit(s) were added to refs/heads/master by this push:
     new 7a955f07b3 ARROW-16526: [Python] test_partitioned_dataset fails when building with PARQUET but without DATASET
7a955f07b3 is described below

commit 7a955f07b3472a36d9174eb71883f8f9c33083ae
Author: Weston Pace <[email protected]>
AuthorDate: Thu May 12 13:25:26 2022 +0200

    ARROW-16526: [Python] test_partitioned_dataset fails when building with PARQUET but without DATASET
    
    One of the legacy parquet dataset tests was not properly passing use_legacy_dataset; this caused the test to attempt to use the new datasets module even when it wasn't enabled.
    
    Closes #13116 from westonpace/bugfix/MINOR--missing-dataset-mark
    
    Authored-by: Weston Pace <[email protected]>
    Signed-off-by: Joris Van den Bossche <[email protected]>
---
 python/pyarrow/tests/parquet/test_dataset.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/python/pyarrow/tests/parquet/test_dataset.py b/python/pyarrow/tests/parquet/test_dataset.py
index 7b6845bbc2..2c660a3f6e 100644
--- a/python/pyarrow/tests/parquet/test_dataset.py
+++ b/python/pyarrow/tests/parquet/test_dataset.py
@@ -1542,7 +1542,8 @@ def test_partitioned_dataset(tempdir, use_legacy_dataset):
     })
     table = pa.Table.from_pandas(df)
     pq.write_to_dataset(table, root_path=str(path),
-                        partition_cols=['one', 'two'])
+                        partition_cols=['one', 'two'],
+                        use_legacy_dataset=use_legacy_dataset)
     table = pq.ParquetDataset(
         path, use_legacy_dataset=use_legacy_dataset).read()
     pq.write_table(table, path / "output.parquet")
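
The bug pattern behind this fix can be sketched in plain Python: a parametrized test accepts a flag but never forwards it to the function under test, so the default code path runs regardless of the parameter. The names below (`write_to_dataset`, `run_test_*`) are hypothetical stand-ins for illustration, not the real pyarrow API.

```python
# Sketch of the bug pattern fixed in this commit: a test helper that
# accepts a parametrized flag but forgets to forward it silently falls
# back to the default code path. All names here are stand-ins.

def write_to_dataset(table, root_path, partition_cols=None,
                     use_legacy_dataset=False):
    """Stand-in for pq.write_to_dataset: report which code path ran."""
    return "legacy" if use_legacy_dataset else "new-datasets-module"

def run_test_broken(use_legacy_dataset):
    # Bug: the flag is accepted but never forwarded, so the new
    # datasets module is exercised even when the legacy path was asked for.
    return write_to_dataset("table", "/tmp/ds",
                            partition_cols=["one", "two"])

def run_test_fixed(use_legacy_dataset):
    # Fix: thread the parametrized flag through, mirroring the diff above.
    return write_to_dataset("table", "/tmp/ds",
                            partition_cols=["one", "two"],
                            use_legacy_dataset=use_legacy_dataset)

print(run_test_broken(True))   # new-datasets-module (flag ignored)
print(run_test_fixed(True))    # legacy (flag honored)
```

In a build configured with PARQUET but without DATASET, the broken variant would import the unavailable datasets module and fail, which is why forwarding the flag (plus the dataset pytest mark) fixes the test.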
