Wes McKinney commented on ARROW-2227:

This seems to be an off-by-one error. In builder.cc we are comparing with 
{{INT32_MAX - 1}}, while in python/numpy_to_arrow.cc we are comparing with 
{{INT32_MAX}}. I made this a blocker for 0.9.0 as I think we can fix it by 
changing the bound in numpy_to_arrow.cc. It's getting late here tonight, so I 
will try to fix it tomorrow morning before we cut an RC. The test case in the 
report is pretty swappy; we should be able to construct a cheaper one with a 
few very large strings followed by some length-1 strings to hit the edge case 
at INT32_MAX, e.g. the sketch below.
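
Something like the following should exercise the boundary without the memory 
pressure of the original repro (a rough sketch; that a fixed build chunks 
exactly at this boundary is an assumption):

{code:python}
import pandas as pd
import pyarrow as pa

INT32_MAX = 2**31 - 1

# Fill the value buffer to exactly INT32_MAX - 1 bytes using a handful of
# 1 MiB strings plus one padding string, then append length-1 strings so
# the running byte count steps across INT32_MAX - 1 -> INT32_MAX ->
# INT32_MAX + 1, straddling both bounds checks.
big = 'x' * 2**20                               # 1 MiB per string
n_big = INT32_MAX // len(big)                   # 2047 strings, ~2 GiB total
pad = 'x' * (INT32_MAX - n_big * len(big) - 1)  # land on INT32_MAX - 1
values = [big] * n_big + [pad, 'y', 'y']        # list shares one big string

df = pd.DataFrame({'x': values})
t = pa.Table.from_pandas(df)  # should produce a chunked column, not raise
{code}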

> [Python] Table.from_pandas does not create chunked_arrays.
> ----------------------------------------------------------
>                 Key: ARROW-2227
>                 URL: https://issues.apache.org/jira/browse/ARROW-2227
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.8.0
>            Reporter: Chris Ellison
>            Assignee: Wes McKinney
>            Priority: Blocker
>             Fix For: 0.9.0
> When creating a large enough array, pyarrow raises an exception:
> {code:java}
> import numpy as np
> import pandas as pd
> import pyarrow as pa
> x = list('1' * 2**31)
> y = pd.DataFrame({'x': x})
> t = pa.Table.from_pandas(y)
> # ArrowInvalid: BinaryArray cannot contain more than 2147483646 bytes, have 2147483647
> {code}
> The array should be chunked for the user. As is, data frames with >2 GiB of 
> binary data will struggle to get into Arrow; a manual chunking workaround is 
> sketched below.
