[ https://issues.apache.org/jira/browse/ARROW-2227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16395334#comment-16395334 ]
Antoine Pitrou edited comment on ARROW-2227 at 3/12/18 2:47 PM:
----------------------------------------------------------------

{quote}Just wanted to mention, in case it was missed, but this example isn't a single large 2 GiB string. Each row in the data frame is a single byte. So it is a large array of small bytes.{quote}

Oh, I see. I had misread the example (and my crash is on a different use case, then). It's quite a weird way of storing binary strings, though? Your column is a column of Python objects, which under the hood appear to be numpy.int64 objects... So you're paying a huge overhead because of all those objects.

(To put it in perspective, I have 16 GB of RAM, but creating your dataframe swaps out...)

> [Python] Table.from_pandas does not create chunked_arrays.
> -----------------------------------------------------------
>
>                 Key: ARROW-2227
>                 URL: https://issues.apache.org/jira/browse/ARROW-2227
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.8.0
>            Reporter: Chris Ellison
>            Assignee: Wes McKinney
>            Priority: Major
>             Fix For: 0.10.0
>
> When creating a large enough array, pyarrow raises an exception:
> {code:python}
> import numpy as np
> import pandas as pd
> import pyarrow as pa
>
> x = list('1' * 2**31)
> y = pd.DataFrame({'x': x})
> t = pa.Table.from_pandas(y)
> # ArrowInvalid: BinaryArray cannot contain more than 2147483646 bytes, have 2147483647
> {code}
> The array should be chunked for the user. As is, data frames with >2 GiB of binary data will struggle to get into Arrow.
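
Until Table.from_pandas chunks large binary columns itself, a workaround along these lines should get the data in: convert the frame in slices and concatenate the pieces, so each column of the result ends up as a chunked array. This is a minimal sketch, assuming pa.concat_tables and the preserve_index keyword behave as in current releases; the helper name and the rows_per_chunk value are illustrative, not part of the eventual fix.

{code:python}
import pandas as pd
import pyarrow as pa

def table_from_big_frame(df, rows_per_chunk=50_000_000):
    # Convert the frame in slices small enough that no slice's binary
    # column reaches the ~2 GiB BinaryArray limit, then concatenate;
    # each column of the result is a ChunkedArray with one chunk per slice.
    pieces = [
        pa.Table.from_pandas(df.iloc[start:start + rows_per_chunk],
                             preserve_index=False)
        for start in range(0, len(df), rows_per_chunk)
    ]
    return pa.concat_tables(pieces)

# Usage with the reporter's data (needs several GiB of RAM; shrink 2**31 for a quick test):
# df = pd.DataFrame({'x': list('1' * 2**31)})
# t = table_from_big_frame(df)
# t.column('x').num_chunks  # greater than 1 once the data spans multiple slices
{code}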