randolf-scholz opened a new issue, #37055: URL: https://github.com/apache/arrow/issues/37055
### Describe the bug, including details regarding any error messages, version, and platform.

I have a large dataset (>100M rows) with a `dictionary[int32,string]` column (`ChunkedArray`) and noticed that `compute.value_counts` is extremely slow for this column compared to other columns. In this case, `table[col].value_counts()` is 10x-100x slower than `table[col].combine_chunks().value_counts()`.

### Component(s)

C++, Python
