[ https://issues.apache.org/jira/browse/ARROW-6417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921168#comment-16921168 ]
Wes McKinney commented on ARROW-6417:
-------------------------------------

OK, I think to make things faster we need to be more careful about pre-allocating with {{BinaryBuilder}} and calling {{BaseBinaryBuilder<T>::UnsafeAppend}} instead of {{Append}}. It's a bit tricky because we have {{ChunkedBinaryBuilder}} in the mix, so we may have to manage the creation of chunks in the Parquet value decoder. I think this is worth the effort given how much of a hot path this is for reading Parquet files. I'll spend a little time on it tomorrow.

> [C++][Parquet] Non-dictionary BinaryArray reads from Parquet format have
> slowed down since 0.11.x
> -------------------------------------------------------------------------------------------------
>
>                 Key: ARROW-6417
>                 URL: https://issues.apache.org/jira/browse/ARROW-6417
>             Project: Apache Arrow
>          Issue Type: Improvement
>          Components: C++, Python
>            Reporter: Wes McKinney
>            Priority: Major
>         Attachments: 20190903_parquet_benchmark.py, 20190903_parquet_read_perf.png
>
> In doing some benchmarking, I have found that binary reads seem to be slower
> from Arrow 0.11.1 to master branch. It would be a good idea to do some basic
> profiling to see where we might improve our memory allocation strategy (or
> whatever the bottleneck turns out to be)

--
This message was sent by Atlassian Jira
(v8.3.2#803003)
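For reference, a minimal sketch of the pre-allocate + {{UnsafeAppend}} pattern described in the comment above, assuming the Arrow C++ builder API of this era ({{Reserve}}, {{ReserveData}}, {{UnsafeAppend}}). The helper name and the vector-of-strings input are hypothetical; this is not the actual Parquet decoder change, just an illustration of the idea:

{code:cpp}
#include <cstdint>
#include <string>
#include <vector>

#include <arrow/builder.h>
#include <arrow/status.h>

// Hypothetical helper: append a batch of decoded byte strings into a
// BinaryBuilder using a single up-front reservation instead of per-value
// Append() calls with their capacity checks and Status returns.
arrow::Status AppendDecodedValues(const std::vector<std::string>& values,
                                  arrow::BinaryBuilder* builder) {
  int64_t total_bytes = 0;
  for (const auto& v : values) {
    total_bytes += static_cast<int64_t>(v.size());
  }

  // Reserve offset/validity slots and value bytes once for the whole batch.
  ARROW_RETURN_NOT_OK(builder->Reserve(static_cast<int64_t>(values.size())));
  ARROW_RETURN_NOT_OK(builder->ReserveData(total_bytes));

  // UnsafeAppend skips the per-value capacity check and Status return that
  // Append() performs, which is the saving in this hot path.
  for (const auto& v : values) {
    builder->UnsafeAppend(reinterpret_cast<const uint8_t*>(v.data()),
                          static_cast<int32_t>(v.size()));
  }
  return arrow::Status::OK();
}
{code}

As the comment notes, the decision of when to roll over to a new chunk (since {{BinaryArray}} uses 32-bit offsets) would still have to be handled somewhere, e.g. in the Parquet value decoder rather than inside {{ChunkedBinaryBuilder}}.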