https://issues.apache.org/jira/browse/ARROW-2082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436439#comment-16436439
Joshua Storck commented on ARROW-2082:
--------------------------------------
I did some debugging and isolated the issue. The column writer that is being
created is int64
(https://github.com/apache/parquet-cpp/blob/master-after-apache-parquet-cpp-1.4.0-rc1/src/parquet/column_writer.cc#L559),
but the codepath taken for writing assumes int96
(https://github.com/apache/parquet-cpp/blob/master-after-apache-parquet-cpp-1.4.0-rc1/src/parquet/arrow/writer.cc#L599).
Unfortunately, there is a static_cast being made here:
https://github.com/apache/parquet-cpp/blob/master-after-apache-parquet-cpp-1.4.0-rc1/src/parquet/arrow/writer.cc#L378
That last section of code does a static_cast to the wrong type. That means it's
writing 96 bits per value when there's only space for 64 bits per value, which
is probably corrupting memory. I have a feeling the location of the segfault
would be dependent on the input data.
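To make the size mismatch concrete, here is a minimal sketch; the Int96 struct and the byte-count helpers below are illustrative stand-ins, not the actual parquet-cpp definitions:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical stand-in for parquet's 12-byte Int96 physical type.
struct Int96 { uint32_t value[3]; };

// Bytes required to store n values of each physical type. An int64 column
// allocates 8 bytes per value; writing through the int96 path consumes 12,
// so each value written trespasses 4 bytes further into neighboring
// memory -- consistent with an input-dependent crash location.
inline std::size_t int64_bytes(std::size_t n) { return n * sizeof(int64_t); }
inline std::size_t int96_bytes(std::size_t n) { return n * sizeof(Int96); }
```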
I need to take a closer look and figure out what the best course of action is
here. I'm not fond of the static_cast. If we were using dynamic_cast here, at
least we could put an assertion in a debug build and/or check to make sure the
C types match between the writer_ and the value returned from the static_cast.
I suspect there is some mismatch between how the column metadata is initialized
and how it is used in ArrayColumnWriter::WriteTimestamps.
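For illustration, here is a minimal sketch of the static_cast vs dynamic_cast difference; the writer classes below are hypothetical stand-ins, not the actual parquet-cpp hierarchy:

```cpp
// Hypothetical stand-ins for parquet-cpp's typed column writers.
struct ColumnWriter { virtual ~ColumnWriter() = default; };
struct Int64Writer : ColumnWriter { /* writes 8-byte values */ };
struct Int96Writer : ColumnWriter { /* writes 12-byte values */ };

// static_cast compiles unconditionally: treating an Int64Writer as an
// Int96Writer is undefined behavior, and every 12-byte value pushed
// through it overruns the 8-byte slots the writer actually manages.
// dynamic_cast, by contrast, reports the mismatch at runtime, so a debug
// assertion could catch the bad cast before any memory is scribbled on.
inline bool is_int96_writer(ColumnWriter* writer) {
  return dynamic_cast<Int96Writer*>(writer) != nullptr;
}
```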
> [Python] SegFault in pyarrow.parquet.write_table with specific options
> ----------------------------------------------------------------------
>
> Key: ARROW-2082
> URL: https://issues.apache.org/jira/browse/ARROW-2082
> Project: Apache Arrow
> Issue Type: Bug
> Components: Python
> Affects Versions: 0.8.0
> Environment: tested on MacOS High Sierra with python 3.6 and Ubuntu
> Xenial (Python 3.5)
> Reporter: Clément Bouscasse
> Priority: Major
> Fix For: 0.10.0
>
>
> I originally filed an issue in the pandas project but we've tracked it down
> to arrow itself, when called via pandas in specific circumstances:
> [https://github.com/pandas-dev/pandas/issues/19493]
> basically using
> {code:java}
> df.to_parquet('filename.parquet', flavor='spark'){code}
> gives a seg fault if `df` contains a datetime column.
> Under the covers, pandas translates this to the following call:
> {code:java}
> pq.write_table(table, 'output.parquet', flavor='spark', compression='snappy',
> coerce_timestamps='ms')
> {code}
> which gives me an instant crash.
> There is a repro on the github ticket.
>
>
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)