[ https://issues.apache.org/jira/browse/ARROW-16692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17570214#comment-17570214 ]

Weston Pace commented on ARROW-16692:
-------------------------------------

So the trigger seems to be a large sequence of "empty" files.  Either the files 
are truly empty or (I think) a pushdown filter of some kind has eliminated all 
of the rows in each file.  This seems to align with [~jonkeane]'s reproducer, 
especially the "One thing that might be important is: pickup_location_id is 
all NAs | nulls in the first 8 years of the data or so." part.
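
For intuition, here is a minimal pyarrow sketch of that input shape (the file 
count, schema, and paths are invented; it does not necessarily trigger the 
crash, it just shows how a pushdown filter over an all-null column turns every 
file into empty batches):

{code}
import os
import tempfile

import pyarrow as pa
import pyarrow.dataset as pds
import pyarrow.parquet as pq

tmp = tempfile.mkdtemp()

# Many small files whose pickup_location_id column is entirely null, so the
# pushdown filter below removes every row from every file.
for i in range(500):
    table = pa.table({
        "pickup_location_id": pa.array([None] * 10, type=pa.int64()),
        "fare": pa.array([float(j) for j in range(10)]),
    })
    pq.write_table(table, os.path.join(tmp, f"part-{i}.parquet"))

dataset = pds.dataset(tmp, format="parquet")

# Rough equivalent of filter(!is.na(pickup_location_id)) in the R reproducer:
# every file contributes only empty batches to the scan.
result = dataset.to_table(filter=pds.field("pickup_location_id").is_valid())
print(result.num_rows)  # 0
{code}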

The merge generator logic roughly boils down to...

{code}
def get_next_batch():
  if current_file is None:
    current_file = get_next_file()
  return get_next_batch_from_file(current_file)

def get_next_batch_from_file(file):
  batch = file.read_batch()
  if not batch:
    current_file = None
    # Recurse to pull the next file: one extra stack frame per empty file.
    return get_next_batch()
  return batch
{code}
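
Purely to illustrate why that recursion is a problem, here is a self-contained 
Python simulation of the old logic (the names mirror the pseudocode above, not 
the actual C++ merge generator):

{code}
files = [iter([]) for _ in range(10_000)] + [iter(["real batch"])]
current_file = None

def get_next_file():
    return files.pop(0)

def get_next_batch():
    global current_file
    if current_file is None:
        current_file = get_next_file()
    return get_next_batch_from_file(current_file)

def get_next_batch_from_file(file):
    global current_file
    batch = next(file, None)
    if batch is None:
        current_file = None
        # One level of recursion per exhausted file: enough empty files
        # exhausts the stack (a RecursionError here, a segfault in C++).
        return get_next_batch()
    return batch

try:
    get_next_batch()
except RecursionError:
    print("stack exhausted after thousands of empty files")
{code}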

The new code looks something like...

{code}
def get_next_batch():
  while True:
    if current_file is None:
      current_file = get_next_file()
    batch = get_next_batch_from_file(current_file)
    if batch:
      return batch

def get_next_batch_from_file(file):
  batch = file.read_batch()
  if not batch:
    current_file = None
    # No recursion: report exhaustion and let the loop above move on.
    return None
  return batch
{code}
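
Running the same simulation against the loop version keeps the stack depth 
constant and returns the real batch (again just a sketch mirroring the 
pseudocode, not the production code):

{code}
files = [iter([]) for _ in range(10_000)] + [iter(["real batch"])]
current_file = None

def get_next_file():
    return files.pop(0)

def get_next_batch():
    global current_file
    while True:
        if current_file is None:
            current_file = get_next_file()
        batch = get_next_batch_from_file(current_file)
        if batch is not None:
            return batch

def get_next_batch_from_file(file):
    global current_file
    batch = next(file, None)
    if batch is None:
        current_file = None
        return None  # let the caller's loop advance to the next file
    return batch

print(get_next_batch())  # "real batch"
{code}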

However, because this is all async, the actual code change looks significantly 
messier.  Sometimes we call {{get_next_batch_from_file}} directly instead of 
going through {{get_next_batch}}, so we need a new flag to distinguish between 
the two cases:

{code}
def get_next_batch_from_file(file, recursive):
  batch = file.read_batch()
  if not batch:
    current_file = None
    if recursive:
      # Called from get_next_batch's loop: just report exhaustion.
      return None
    # Called directly: restart through get_next_batch ourselves.
    return get_next_batch()
  return batch
{code}
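
To make the two call paths concrete, here is a small runnable sketch of the 
flag variant (the direct call at the bottom is invented for illustration and 
is not how the C++ scanner actually invokes it):

{code}
files = [iter([]) for _ in range(5)] + [iter(["real batch"])]
current_file = None

def get_next_file():
    return files.pop(0)

def get_next_batch():
    global current_file
    while True:
        if current_file is None:
            current_file = get_next_file()
        # Internal path: recursive=True, so an exhausted file reports None and
        # this loop (not the call stack) advances to the next file.
        batch = get_next_batch_from_file(current_file, recursive=True)
        if batch is not None:
            return batch

def get_next_batch_from_file(file, recursive):
    global current_file
    batch = next(file, None)
    if batch is None:
        current_file = None
        if recursive:
            return None
        # Direct path: restart through get_next_batch() so the caller still
        # gets a batch (or end-of-stream) rather than a bare None.
        return get_next_batch()
    return batch

# A caller that already holds a file can use the helper directly:
print(get_next_batch_from_file(get_next_file(), recursive=False))  # "real batch"
{code}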


> [C++] StackOverflow in merge generator causes segmentation fault in scan
> ------------------------------------------------------------------------
>
>                 Key: ARROW-16692
>                 URL: https://issues.apache.org/jira/browse/ARROW-16692
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: C++
>            Reporter: Jonathan Keane
>            Assignee: Weston Pace
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 9.0.0
>
>         Attachments: backtrace.txt
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> I'm still working on a minimal reproducer for this, though I can reliably 
> reproduce it with the query below (that means downloading a bunch of data 
> first...). I've cleaned out much of the unnecessary code (so this query is a 
> bit silly, and not what I'm actually trying to do), but haven't been able to 
> make a constructed dataset that reproduces this.
>
> Working on some examples with the new | more cleaned-up taxi dataset at 
> {{s3://ursa-labs-taxi-data-v2}}, I've run into a segfault:
> {code}
> library(arrow)
> library(dplyr)
> ds <- open_dataset("path/to/new_taxi/")
> ds %>%
>   filter(!is.na(pickup_location_id)) %>%
>   summarise(n = n()) %>% collect()
> {code}
> Most of the time it ends in a segfault (though I have gotten it to work on 
> occasion). I've tried with smaller files | constructed datasets and haven't 
> been able to replicate it yet. One thing that might be important is: 
> {{pickup_location_id}} is all NAs | nulls in the first 8 years of the data or 
> so.
>
> I've attached a backtrace in case that's enough to see what's going on here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
