[ 
https://issues.apache.org/jira/browse/ARROW-16010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17511352#comment-17511352
 ] 

Dragoș Moldovan-Grünfeld commented on ARROW-16010:
--------------------------------------------------

TL/DR: I think the datetime _resolution_ required by your example is higher than 
Arrow supports. Arrow timestamps go down to _nanoseconds_ (10^-9 seconds) at 
most, nothing finer. With the default POSIXct -> timestamp conversion, your data 
is actually truncated to whole _microseconds_ (the default unit for an Arrow 
timestamp created from R).
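
A minimal sketch of the units involved (assuming the {{arrow}} package is 
attached; {{Array$create()}} and {{timestamp()}} are the package's regular 
helpers):
{code:r}
library(arrow)

# the value from the reprex below
t <- as.POSIXct(1631494810.376999855041503906, origin = "1970-01-01")

# the default POSIXct -> Arrow conversion produces a microsecond timestamp
Array$create(t)$type
# timestamp[us, ...]

# nanoseconds (10^-9 s) are the finest unit an Arrow timestamp can carry
timestamp("ns")
# Timestamp
# timestamp[ns]
{code}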

The first thing that happens when you call {{write_parquet()}} is that the R 
data frame is converted to an Arrow table:
{code:r}
a <- Table$create(df)
a
Table
1 rows x 3 columns
$x <string>
$n <double>
$t <timestamp[us]>

See $metadata for additional Schema metadata
{code}
The {{t}} column was translated to a timestamp with microseconds ({{us}}) as its unit.
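
If you need more than microseconds, you can (as a sketch, not necessarily the 
recommended fix) pass an explicit schema so that {{t}} becomes a nanosecond 
timestamp; note that, depending on the {{version}} argument, {{write_parquet()}} 
may still coerce nanoseconds back down when writing:
{code:r}
# request nanosecond resolution explicitly
s <- schema(x = string(), n = float64(), t = timestamp("ns"))
a_ns <- Table$create(df, schema = s)
a_ns$t$type
# Timestamp
# timestamp[ns]
{code}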

Converting back to R, we can see that this conversion is where the difference comes from:
{code:r}
b <- a$to_data_frame()
sprintf("%.54f", b$t)
[1] "1631494810.376998901367187500000000000000000000000000000000000000"
b$t == pqt
[1] TRUE
{code}
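
You can reproduce the round-tripped value by hand: the column stores a whole 
number of microseconds (the fractional ~0.855 µs of the original value is 
dropped), and converting that integer back to a double lands exactly on the 
value printed above:
{code:r}
# the whole-microsecond count stored for t
us <- 1631494810376999

sprintf("%.54f", us / 1e6)
# [1] "1631494810.376998901367187500000000000000000000000000000000000000"
{code}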

Arrow stores R doubles as 64-bit floats, the exact same IEEE-754 type R uses, so 
they round-trip without any change. Hence there is no issue with column {{n}} in 
your data frame.
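
If your change-detection check only needs the round trip to compare equal, one 
possible workaround (a sketch, not an official recommendation) is to compare the 
timestamps with a one-microsecond tolerance, since that is the precision 
actually stored:
{code:r}
# dft / pqt as computed in the reprex below; they differ by ~0.95 µs
all(abs(dft - pqt) < 1e-6)
# [1] TRUE
{code}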

> [R] write_parquet alters <dttm> value
> -------------------------------------
>
>                 Key: ARROW-16010
>                 URL: https://issues.apache.org/jira/browse/ARROW-16010
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: R
>    Affects Versions: 6.0.0
>         Environment: Ubuntu focal
> R 4.1.1
> RStudio 1.4.1772
>            Reporter: Riaz Arbi
>            Priority: Minor
>
> When we write a data frame column of type `<dttm>` to parquet using the arrow 
> package, subsequently reading the parquet file back into a data frame returns 
> a slightly different value.
> This behaviour does not replicate with columns of type `<double>`.
>  
> Reprex:
>  
> {code:r}
> # packages used by the reprex
> library(arrow)
> library(dplyr)
> 
> #Create sample dataframe
> n <-  1631494810.376999855041503906250000000000000000000000000000000000
> df <- data.frame(x = "a",
>                  n = n,
>                  t = as.POSIXct(n, origin = "1970-01-01"))
> #Write to disk
> df %>% write_parquet("/tmp/tmp.parquet")
> #Extract time-based cols
> dft <- df %>% 
>   filter(x == "a") %>% 
>   pull(t) %>% 
>   as.numeric 
> pqt <- read_parquet("/tmp/tmp.parquet") %>% 
>   filter(x == "a") %>% 
>   pull(t) %>% 
>   as.numeric 
> dft == pqt
> sprintf("%.54f",dft)
> sprintf("%.54f",pqt)
> #Extract numeric cols
> dfn <- df %>% 
>   filter(x == "a") %>% 
>   pull(n) %>% 
>   as.numeric 
> pqn <- read_parquet("/tmp/tmp.parquet") %>% 
>   filter(x == "a") %>% 
>   pull(n) %>% 
>   as.numeric 
> dfn == pqn
> sprintf("%.54f",dfn)
> sprintf("%.54f",pqn)
> {code}
>  
> The critical issue is that `dft == pqt` returns `FALSE` while `dfn == pqn` 
> returns `TRUE`.
>  
> Why is this a problem? We use `arrow` to store data frames on disk. When we 
> want to update these parquet files, we first check whether any data has 
> actually changed, with tripwires in place to ensure that if a significant 
> proportion of the data has changed, the pipeline fails and is flagged for 
> manual review.
>  
> With the current behaviour described above, every data frame that contains a 
> `<dttm>` column fails this check.


