[ https://issues.apache.org/jira/browse/ARROW-1983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16731467#comment-16731467 ]

Matthew Rocklin commented on ARROW-1983:
----------------------------------------

>  I'm planning to move more of the multifile dataset handling into C++ because 
> we also need it in Ruby and R, so would make sense to maintain one 
> implementation for the 3 languages

Makes sense to me.  No pressure, but is there a time in particular when you're 
planning to do this?  This will help me with planning on the Dask side.  I'm 
also happy to help with things on the Python Arrow side near term if they come 
up.  

For context see https://github.com/dask/dask/pull/4336#issuecomment-450686100

> [Python] Add ability to write parquet `_metadata` file
> ------------------------------------------------------
>
>                 Key: ARROW-1983
>                 URL: https://issues.apache.org/jira/browse/ARROW-1983
>             Project: Apache Arrow
>          Issue Type: Improvement
>          Components: Python
>            Reporter: Jim Crist
>            Assignee: Robert Gruener
>            Priority: Major
>              Labels: beginner, parquet
>             Fix For: 0.13.0
>
>
> Currently {{pyarrow.parquet}} can only write the {{_common_metadata}} file 
> (mostly just schema information). It would be useful to add the ability to 
> write a {{_metadata}} file as well. This should include information about 
> each row group in the dataset, including summary statistics. Having this 
> summary file would allow filtering of row groups without needing to access 
> each file beforehand.
> This would require that the user be able to get the written RowGroups out of 
> a {{pyarrow.parquet.write_table}} call and then pass these objects as a list 
> to a new function, which would hand them off as C++ objects to {{parquet-cpp}} 
> to generate the corresponding {{_metadata}} file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)