[ https://issues.apache.org/jira/browse/ARROW-1957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16310234#comment-16310234 ]

Jordan Samuels commented on ARROW-1957:
---------------------------------------

[~wesmckinn] Thanks for your response and explanation. I'm still not clear on 
*why* nanosecond-resolution timestamps are deprecated - is that for performance 
reasons? What are the medium-/long-term ramifications of using the deprecated 
option you mention?
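
For reference, a minimal sketch of the deprecated path presumably being 
discussed here: the {{use_deprecated_int96_timestamps}} flag on 
{{pyarrow.parquet.write_table}}, which stores timestamps as 96-bit INT96 
values and so keeps full nanosecond precision (flag name taken from the 
pyarrow API; availability in 0.8.0 assumed):

{code}
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

# Nanosecond-resolution index: a plain write_table() call on this
# table raises ArrowInvalid (see the issue description below).
df = pd.DataFrame({'x': range(3)},
                  index=pd.date_range('2017-01-01', freq='1n', periods=3))

# INT96 is a deprecated Parquet physical type, but it holds full
# nanosecond precision, so nothing is lost on the write.
pq.write_table(pa.Table.from_pandas(df), '/tmp/t_int96.parquet',
               use_deprecated_int96_timestamps=True)
{code}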

> Handle nanosecond timestamps in parquet serialization
> -----------------------------------------------------
>
>                 Key: ARROW-1957
>                 URL: https://issues.apache.org/jira/browse/ARROW-1957
>             Project: Apache Arrow
>          Issue Type: Improvement
>    Affects Versions: 0.8.0
>         Environment: Python 3.6.4. Mac OS X and CentOS Linux release 
> 7.3.1611. Pandas 0.21.1.
>            Reporter: Jordan Samuels
>            Priority: Minor
>
> The following code
> {code}
> import pyarrow as pa
> import pyarrow.parquet as pq
> import pandas as pd
>
> # Three timestamps spaced one nanosecond apart.
> n = 3
> df = pd.DataFrame({'x': range(n)},
>                   index=pd.DatetimeIndex(start='2017-01-01',
>                                          freq='1n', periods=n))
> pq.write_table(pa.Table.from_pandas(df), '/tmp/t.parquet')
> {code}
> results in:
> {{ArrowInvalid: Casting from timestamp[ns] to timestamp[us] would lose data: 
> 1483228800000000001}}
> The desired effect is that we can save nanosecond resolution without losing 
> precision (e.g. through coercion to microseconds or milliseconds). Note that 
> if {{freq='1u'}} is used, the code runs properly.
