[ 
https://issues.apache.org/jira/browse/PARQUET-1463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16704070#comment-16704070
 ] 

Wes McKinney commented on PARQUET-1463:
---------------------------------------

Using the following setup

{code}
import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

K = 1000
N = 50000000
df = pd.DataFrame({'ints': np.tile(np.arange(K), N // K)})
table = pa.Table.from_pandas(df)
{code}

Here all values end up represented in the dictionary as literal runs. I
didn't find any appreciable difference in performance:

{code}
# BEFORE

# In [12]: %time pq.write_table(table, pa.BufferOutputStream(), use_dictionary=True)
# CPU times: user 1.5 s, sys: 132 ms, total: 1.64 s
# Wall time: 1.63 s

# In [13]: %time pq.write_table(table, pa.BufferOutputStream(), use_dictionary=False)
# CPU times: user 1.28 s, sys: 148 ms, total: 1.43 s
# Wall time: 1.43 s

# AFTER

# In [4]: %time pq.write_table(table, pa.BufferOutputStream(), use_dictionary=True)
# CPU times: user 1.56 s, sys: 120 ms, total: 1.68 s
# Wall time: 1.69 s

# In [5]: %time pq.write_table(table, pa.BufferOutputStream(), use_dictionary=True)
# CPU times: user 1.5 s, sys: 116 ms, total: 1.62 s
# Wall time: 1.61 s

# In [6]: %time pq.write_table(table, pa.BufferOutputStream(), use_dictionary=True)
# CPU times: user 1.5 s, sys: 108 ms, total: 1.61 s
# Wall time: 1.61 s

# In [8]: %time pq.write_table(table, pa.BufferOutputStream(), use_dictionary=False)
# CPU times: user 1.31 s, sys: 120 ms, total: 1.43 s
# Wall time: 1.44 s

# In [9]: %time pq.write_table(table, pa.BufferOutputStream(), use_dictionary=False)
# CPU times: user 1.29 s, sys: 116 ms, total: 1.41 s
# Wall time: 1.41 s

# In [10]: %time pq.write_table(table, pa.BufferOutputStream(), use_dictionary=False)
# CPU times: user 1.32 s, sys: 112 ms, total: 1.44 s
# Wall time: 1.44 s
{code}

I was mainly interested in whether the inner workings of the hash table 
caused any overhead in this case. I would guess string dictionary writes are 
a bit faster now with the better hashing path. 

> [C++] Utilize revamped common hashing machinery for dictionary encoding
> -----------------------------------------------------------------------
>
>                 Key: PARQUET-1463
>                 URL: https://issues.apache.org/jira/browse/PARQUET-1463
>             Project: Parquet
>          Issue Type: Improvement
>          Components: parquet-cpp
>            Reporter: Wes McKinney
>            Assignee: Antoine Pitrou
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: cpp-1.6.0
>
>          Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> [~pitrou] has recently made some significant improvements to hashing / 
> dictionary encoding machinery in Apache Arrow
> https://github.com/apache/arrow/commit/eaf8d32e5f292dca0aa5b5508041d5d39406224d
> parquet-cpp is using a custom hash table
> https://github.com/apache/arrow/blob/master/cpp/src/parquet/encoding-internal.h#L456
> It would be nice to utilize common hash table machinery if possible. We 
> should of course make sure that such a change does not cause performance 
> regressions (performance improved due to Antoine's patch, so perf may also 
> get better on the Parquet write path)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
