bkietz commented on a change in pull request #9130:
URL: https://github.com/apache/arrow/pull/9130#discussion_r555785945



##########
File path: python/pyarrow/tests/test_dataset.py
##########
@@ -2315,6 +2315,29 @@ def test_write_dataset_partitioned(tempdir):
         partitioning=partitioning_schema)
 
 
+@pytest.mark.parquet
+@pytest.mark.pandas
+def test_write_dataset_partitioned_dict(tempdir):
+    directory = tempdir / "partitioned"
+    _ = _create_parquet_dataset_partitioned(directory)
+
+    # directory partitioning, dictionary partition columns
+    dataset = ds.dataset(
+        directory,
+        partitioning=ds.HivePartitioning.discover(infer_dictionary=True))
+    target = tempdir / 'partitioned-dir-target'
+    expected_paths = [
+        target / "a", target / "a" / "part-0.feather",
+        target / "b", target / "b" / "part-1.feather"
+    ]
+    partitioning_schema = ds.partitioning(pa.schema([
+        dataset.schema.field('part')]),
+        dictionaries=[pa.array(['a', 'b'])])

Review comment:
       That's surprising; you should see errors: `No dictionary provided for dictionary field part: dictionary<values=string, indices=int32, ordered=0>` if you specify an incorrect dictionary, and `Dictionary supplied for field part: dictionary<values=string, indices=int32, ordered=0> does not contain 'a'` if you specify a dictionary which doesn't include all of the column's values.
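
       For context, a minimal sketch (not taken from the PR) of how those two errors could be reproduced with `ds.write_dataset`; the table contents, target directories, and the exact exception type and timing are assumptions, so the checks only match on message substrings:

       ```python
       # Hypothetical reproduction sketch, not part of the PR under review.
       import tempfile

       import pytest
       import pyarrow as pa
       import pyarrow.dataset as ds

       # A table whose partition column is dictionary-encoded (values 'a' and 'b').
       table = pa.table({
           "f1": [0, 1, 2],
           "part": pa.array(["a", "b", "a"]).dictionary_encode(),
       })
       part_field = pa.field("part", pa.dictionary(pa.int32(), pa.string()))

       # Case 1: the partitioning schema declares a dictionary field but supplies
       # no dictionary; writing should fail with "No dictionary provided ...".
       with pytest.raises(Exception, match="No dictionary provided"):
           no_dict = ds.partitioning(pa.schema([part_field]))
           ds.write_dataset(table, tempfile.mkdtemp(), format="feather",
                            partitioning=no_dict)

       # Case 2: a dictionary is supplied but is missing the value 'b'; writing
       # should fail with "Dictionary supplied ... does not contain ...".
       with pytest.raises(Exception, match="does not contain"):
           partial_dict = ds.partitioning(pa.schema([part_field]),
                                          dictionaries=[pa.array(["a"])])
           ds.write_dataset(table, tempfile.mkdtemp(), format="feather",
                            partitioning=partial_dict)
       ```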
   



