[
https://issues.apache.org/jira/browse/ARROW-11067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256218#comment-17256218
]
John Sheffield edited comment on ARROW-11067 at 12/30/20, 1:35 AM:
-------------------------------------------------------------------
(Sorry for the fragmented report here, but I found a way to isolate the issue.)
The string read failures are deterministic and predictable, and the content of
the strings doesn't seem to matter – only their length. Success and failure
alternate at every integer multiple of 32 * 1024 (32768) characters. Writing
the string length as N * (32 * 1024):
* For N in [0, 1), i.e. lengths 0 to 32767 characters, all reads succeed.
* For N in [1, 2), i.e. lengths 32768 to 65535 characters, all reads fail.
* The same pattern repeats until we hit LargeString limits: if
floor(nchar / (32 * 1024)) is even, the read succeeds; if it is odd, the read
fails. For example, a 49151-character string (floor = 1) fails, while a
98303-character string (floor = 2) succeeds.
Code:
{code:r}
library(tidyverse)
library(arrow)

# Random alphabetic string of length n; content shouldn't matter, only length.
generate_string <- function(n) {
  paste0(sample(c(LETTERS, letters), size = n, replace = TRUE), collapse = "")
}

# Lengths one character short of each multiple of 16 * 1024, so they
# alternate between even and odd multiples of 32 * 1024.
sample_breaks <- (1:60L * 16L * 1024L)
sample_lengths <- sample_breaks - 1

set.seed(1234)
test_strings <- purrr::map_chr(sample_lengths, generate_string)
readr::write_csv(data.frame(str = test_strings, strlen = sample_lengths),
                 "arrow_sample_data.csv")

# Plot read success/failure against string length in units of 32 * 1024.
arrow::read_csv_arrow("arrow_sample_data.csv") %>%
  dplyr::mutate(failed_case = ifelse(is.na(str), "failed", "succeeded")) %>%
  dplyr::select(-str) %>%
  ggplot(data = ., aes(x = strlen / (32 * 1024), y = failed_case)) +
  geom_point(aes(color = ifelse(floor(strlen / (32 * 1024)) %% 2 == 0,
                                "even", "odd")),
             size = 3) +
  scale_x_continuous(breaks = seq(0, 30)) +
  labs(x = "string length / (32 * 1024): integer multiple of 32 KiB",
       y = "string read success/failure",
       color = "even/odd multiple of 32 KiB")
{code}
!arrow_explanation.png!
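A minimal check of just the first boundary (a sketch under the same arrow 2.0 setup as above; make_str is only an illustrative helper, and the expected output is based on the pattern described above):
{code:r}
library(arrow)

# Two strings straddling the first 32 * 1024 boundary.
make_str <- function(n) paste(rep("a", n), collapse = "")
df <- data.frame(str = c(make_str(32767), make_str(32768)),
                 strlen = c(32767L, 32768L))

tmp <- tempfile(fileext = ".csv")
write.csv(df, tmp, row.names = FALSE)

# With arrow 2.0 I would expect the 32767-character row to read correctly
# and the 32768-character row to come back as NA.
res <- arrow::read_csv_arrow(tmp)
is.na(res$str)  # FALSE TRUE
{code}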
> [R] read_csv_arrow silently fails to read some strings and returns nulls
> ------------------------------------------------------------------------
>
> Key: ARROW-11067
> URL: https://issues.apache.org/jira/browse/ARROW-11067
> Project: Apache Arrow
> Issue Type: Bug
> Components: R
> Reporter: John Sheffield
> Priority: Major
> Fix For: 3.0.0
>
> Attachments: arrow_explanation.png, arrow_failure_cases.csv, arrowbug1.png,
> demo_data.csv
>
>
> A sample file is attached, showing 10 rows each of strings with consistent
> failures (false_na = TRUE) and consistent successes (false_na = FALSE). The
> strings are in the column `json_string` – if relevant, they are geojsons with
> min nchar of 33,229 and max nchar of 202,515.
> When I read this sample file with other R CSV readers (readr and data.table
> shown below), the file is imported correctly and there are no NAs in the
> json_string column.
> When I read it with arrow::read_csv_arrow, 50% of the values in the
> json_string column end up as NA. Setting as_data_frame to TRUE or FALSE does
> not change the behavior, so this might not be limited to the R interface, but
> I can't help debug much further upstream.
>
>
> {code:java}
> aaa1 <- arrow::read_csv_arrow("demo_data.csv", as_data_frame = TRUE)
> aaa2 <- arrow::read_csv_arrow("demo_data.csv", as_data_frame = FALSE)
> bbb <- data.table::fread("demo_data.csv")
> ccc <- readr::read_csv("demo_data.csv")
> mean(is.na(aaa1$json_string)) # 0.5
> mean(is.na(aaa2$column(1))) # Scalar 0.5
> mean(is.na(bbb$json_string)) # 0
> mean(is.na(ccc$json_string)) # 0
> {code}
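> A quick way to relate the failing rows to the 32 * 1024 pattern described in
> the comment above (a sketch; it assumes the attached demo_data.csv and the
> readr/arrow versions listed below):
> {code:r}
> library(arrow)
> library(readr)
>
> # Read the same file with both readers; readr recovers every string.
> ccc <- readr::read_csv("demo_data.csv")
> aaa <- arrow::read_csv_arrow("demo_data.csv")
>
> # Cross-tabulate: are the rows arrow returns as NA exactly those whose
> # json_string length falls in an odd multiple of 32 * 1024 characters?
> table(arrow_na  = is.na(aaa$json_string),
>       odd_block = floor(nchar(ccc$json_string) / (32 * 1024)) %% 2 == 1)
> {code}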
>
>
> * arrow 2.0 (latest CRAN)
> * readr 1.4.0
> * data.table 1.13.2
> * R version 4.0.1 (2020-06-06)
> * MacOS Catalina 10.15.7 / x86_64-apple-darwin17.0
>
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)