[ 
https://issues.apache.org/jira/browse/ARROW-8964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17118981#comment-17118981
 ] 

Joris Van den Bossche commented on ARROW-8964:
----------------------------------------------

What you are describing should normally already be implemented (so something 
else must be going wrong for some reason).  
When multiple files with different schemas are read in a dataset, the dataset 
discovery uses a very basic "schema evolution / normalization", which right 
now only involves adding missing columns as "null" values (so exactly the use 
case you are describing, I think). In the future we also want to allow some 
type evolution (e.g. the same column stored as int32 in some files and int64 
in others, which right now raises an error).

Can you show an example of reading just one of the old files and one of the 
new files in the directory? (You can pass the exact file name instead of the 
directory to {{dataset(..)}}.)

> Pyarrow: improve reading of partitioned parquet datasets whose schema changed
> -----------------------------------------------------------------------------
>
>                 Key: ARROW-8964
>                 URL: https://issues.apache.org/jira/browse/ARROW-8964
>             Project: Apache Arrow
>          Issue Type: Improvement
>          Components: Python
>    Affects Versions: 0.17.1
>         Environment: Ubuntu 18.04, latest miniconda with python 3.7, pyarrow 
> 0.17.1
>            Reporter: Ira Saktor
>            Priority: Major
>
> Hi there, I'm encountering the following issue when reading from HDFS:
>  
> *My situation:*
> I have a partitioned parquet dataset in HDFS whose recent partitions contain 
> parquet files with more columns than the older ones. When I try to read data 
> using pyarrow.dataset.dataset and filter on recent data, I still get only the 
> columns that are also contained in the old parquet files. I'd like to somehow 
> merge the schemas, or use the schema of the parquet files from which the data 
> ends up being loaded.
> *when using:*
> `pyarrow.dataset.dataset(path_to_hdfs_directory, partitioning = 'hive', 
> filters = my_filter_expression).to_table().to_pandas()`
> Is there a way to handle schema changes so that the read data contains all 
> columns?
> Everything works fine when I copy the needed parquet files into a separate 
> folder, but that is a very inconvenient way of working. 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)