Chris Ellison created ARROW-2242:
------------------------------------

             Summary: [Python] ParquetFile.read does not accommodate large binary data
                 Key: ARROW-2242
                 URL: https://issues.apache.org/jira/browse/ARROW-2242
             Project: Apache Arrow
          Issue Type: Bug
          Components: Python
    Affects Versions: 0.8.0
            Reporter: Chris Ellison
             Fix For: 0.9.0


When reading a Parquet file with more than 2 GiB of binary data in a single column, ParquetFile.read raises an ArrowIOError because it does not split the result into chunked arrays. Reading each row group individually and then concatenating the resulting tables works, however.

 
{code:python}
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq


# 2**30 one-character strings (~1 GiB of string data); writing the table
# twice below pushes the column past the 2 GiB BinaryArray limit.
x = pa.array(list('1' * 2**30))

demo = 'demo.parquet'


def scenario():
    t = pa.Table.from_arrays([x], ['x'])
    writer = pq.ParquetWriter(demo, t.schema)
    for i in range(2):
        writer.write_table(t)
    writer.close()

    pf = pq.ParquetFile(demo)

    # pyarrow.lib.ArrowIOError: Arrow error: Invalid: BinaryArray cannot contain more than 2147483646 bytes, have 2147483647
    t2 = pf.read()

    # Works, but note, there are 32 row groups, not 2 as suggested by:
    # https://arrow.apache.org/docs/python/parquet.html#finer-grained-reading-and-writing

    #tables = [pf.read_row_group(i) for i in range(pf.num_row_groups)]
    #t3 = pa.concat_tables(tables)

scenario()
{code}
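
For reference, a minimal sketch of the row-group workaround described above, assuming the same demo.parquet file produced by the reproduction (the helper name read_by_row_group is just for illustration):

{code:python}
import pyarrow as pa
import pyarrow.parquet as pq


def read_by_row_group(path):
    # Read one row group at a time so no single read has to materialize a
    # BinaryArray larger than the 2 GiB limit, then concatenate the pieces.
    pf = pq.ParquetFile(path)
    tables = [pf.read_row_group(i) for i in range(pf.num_row_groups)]
    return pa.concat_tables(tables)


t3 = read_by_row_group('demo.parquet')
{code}

The concatenated table holds each column as a chunked array (one chunk per row group), which is roughly what ParquetFile.read would need to produce itself to stay under the per-array limit.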


