[https://issues.apache.org/jira/browse/ARROW-9288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17149450#comment-17149450]

Joris Van den Bossche commented on ARROW-9288:
----------------------------------------------

Yes, I already debugged it, but was trying to find a solution as well 
before posting it here. My notes so far:

From time to time, it also gives "ArrowInvalid: No dictionary provided for 
dictionary field part: dictionary<values=string, indices=int32, ordered=0>" 
instead of segfaulting.

From debugging, it seems the problem is here: 
[https://github.com/apache/arrow/blob/f25a014ab157d5538354309dda721cc8bb938125/cpp/src/arrow/dataset/partition.cc#L159]
The {{field_index}} is different for hive vs directory partitioning (or maybe 
depending on whether an explicit schema is provided or not). For 
DirectoryPartitioning, the schema passed to {{ConvertKey}} is the schema of 
_only_ the partition fields, and thus {{field_index}} is 0-based into that. 
But for HivePartitioning, the schema passed is the schema of the full dataset, 
so the {{field_index}} typically doesn't start at 0, since by default the 
partition fields are appended at the end. This causes the {{field_index}} to 
be out of bounds for the dictionaries (which have a length equal to the number 
of partition fields).
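To make the index mismatch concrete, here is a minimal Python sketch of the mechanism described above. The names and list shapes are illustrative only, not the actual Arrow C++ internals:

{code:python}
# Hypothetical model of the bug: an index computed against the full
# dataset schema overruns a dictionaries list that is sized by the
# number of partition fields.

full_schema = ["a", "b", "part"]     # full dataset schema (partition field appended last)
partition_schema = ["part"]          # schema of only the partition fields
dictionaries = [["A", "B", "C"]]     # one dictionary per partition field

# DirectoryPartitioning: index into the partition-only schema -> 0-based, in bounds
idx_directory = partition_schema.index("part")   # 0
assert dictionaries[idx_directory] == ["A", "B", "C"]

# HivePartitioning: index into the full schema -> 2, out of bounds
idx_hive = full_schema.index("part")             # 2
try:
    dictionaries[idx_hive]
except IndexError:
    # In the C++ code this out-of-bounds access segfaults, or the
    # missing dictionary surfaces as the ArrowInvalid error quoted above.
    print("out-of-bounds dictionary lookup")
{code}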

> [C++][Dataset] Discovery of partition field as dictionary type segfaulting 
> with HivePartitioning
> ------------------------------------------------------------------------------------------------
>
>                 Key: ARROW-9288
>                 URL: https://issues.apache.org/jira/browse/ARROW-9288
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: C++
>            Reporter: Joris Van den Bossche
>            Priority: Major
>              Labels: dataset
>             Fix For: 1.0.0
>
>
> Testing new feature from ARROW-8647, python test that reproduces it:
> {code:python}
> @pytest.mark.parquet
> @pytest.mark.parametrize('partitioning', ["directory", "hive"])
> def test_open_dataset_partitioned_dictionary_type(tempdir, partitioning):
>     import pyarrow.parquet as pq
>     table = pa.table({'a': range(9), 'b': [0.] * 4 + [1.] * 5})
>     path = tempdir / "dataset"
>     path.mkdir()
>     for part in ["A", "B", "C"]:
>         fmt = "{}" if partitioning == "directory" else "part={}"
>         part = path / fmt.format(part)
>         part.mkdir()
>         pq.write_table(table, part / "test.parquet")
>     if partitioning == "directory":
>         part = ds.DirectoryPartitioning.discover(
>             ["part"], max_partition_dictionary_size=-1)
>     else:
>         part = ds.HivePartitioning.discover(max_partition_dictionary_size=-1)
>     
>     dataset = ds.dataset(str(path), partitioning=part)
>     expected_schema = table.schema.append(
>         pa.field("part", pa.dictionary(pa.int32(), pa.string()))
>     )
>     assert dataset.schema.equals(expected_schema)
> {code}
> This test fails (segfaults) for HivePartitioning, but works for 
> DirectoryPartitioning.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
