[ 
https://issues.apache.org/jira/browse/ARROW-6154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Grove reassigned ARROW-6154:
---------------------------------

    Assignee: Andy Grove

> [Rust] Too many open files (os error 24)
> ----------------------------------------
>
>                 Key: ARROW-6154
>                 URL: https://issues.apache.org/jira/browse/ARROW-6154
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Rust
>            Reporter: Yesh
>            Assignee: Andy Grove
>            Priority: Major
>
> Used the [Rust] parquet-read binary to read a deeply nested parquet file and saw 
> the stack trace below. Unfortunately, I won't be able to upload the file.
> {code:java}
> stack backtrace:
>    0: std::panicking::default_hook::{{closure}}
>    1: std::panicking::default_hook
>    2: std::panicking::rust_panic_with_hook
>    3: std::panicking::continue_panic_fmt
>    4: rust_begin_unwind
>    5: core::panicking::panic_fmt
>    6: core::result::unwrap_failed
>    7: parquet::util::io::FileSource<R>::new
>    8: <parquet::file::reader::SerializedRowGroupReader<R> as 
> parquet::file::reader::RowGroupReader>::get_column_page_reader
>    9: <parquet::file::reader::SerializedRowGroupReader<R> as 
> parquet::file::reader::RowGroupReader>::get_column_reader
>   10: parquet::record::reader::TreeBuilder::reader_tree
>   11: parquet::record::reader::TreeBuilder::reader_tree
>   12: parquet::record::reader::TreeBuilder::reader_tree
>   13: parquet::record::reader::TreeBuilder::reader_tree
>   14: parquet::record::reader::TreeBuilder::reader_tree
>   15: parquet::record::reader::TreeBuilder::build
>   16: <parquet::record::reader::RowIter as 
> core::iter::traits::iterator::Iterator>::next
>   17: parquet_read::main
>   18: std::rt::lang_start::{{closure}}
>   19: std::panicking::try::do_call
>   20: __rust_maybe_catch_panic
>   21: std::rt::lang_start_internal
>   22: main
> {code}
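Os error 24 is EMFILE ("too many open files"): the trace suggests each column's page reader opens its own handle on the file (`FileSource<R>::new`), so a deeply nested schema with many columns can exhaust the default per-process descriptor limit. As a hedged workaround sketch (assuming a POSIX shell; the file name and the value 4096 are illustrative, not from the report), the limit can be checked and raised before re-running parquet-read:

```shell
# os error 24 (EMFILE) means the process hit its open-file-descriptor limit.
# Show the current soft limit for this shell:
ulimit -n

# Raise it for this session before re-running parquet-read; the value needed
# grows with the number of column readers open at once (4096 is a guess).
ulimit -n 4096

# Then re-run against the problematic file (hypothetical name):
# parquet-read deeply_nested.parquet
```

This only works around the symptom; the underlying fix would be for the reader to reuse a shared handle instead of opening one per column.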



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
