[ https://issues.apache.org/jira/browse/ARROW-5966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Francois Saint-Jacques resolved ARROW-5966.
-------------------------------------------
    Resolution: Fixed

Issue resolved by pull request 5122
[https://github.com/apache/arrow/pull/5122]
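
With the fix (shipping in 0.15.0), pa.array is expected to fall back to a pyarrow.ChunkedArray instead of raising when the encoded strings overflow the 2GB buffer capacity of a single array. A minimal sketch of the post-fix behavior (the input size here is illustrative; only inputs whose encoded size actually exceeds 2GB trigger the chunked fallback):

{code:python}
import numpy as np
import pyarrow as pa

arr = np.array(["x" * 32] * 1000)  # small UTF-32 numpy array
result = pa.array(arr)

# Small inputs still yield a plain Array; inputs whose encoded size
# exceeds 2GB should now yield a pyarrow.ChunkedArray instead of
# raising ArrowCapacityError (post-fix behavior).
print(type(result))
{code}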

> [Python] Capacity error when converting large UTF32 numpy array to arrow array
> ------------------------------------------------------------------------------
>
>                 Key: ARROW-5966
>                 URL: https://issues.apache.org/jira/browse/ARROW-5966
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.13.0, 0.14.0
>            Reporter: Igor Yastrebov
>            Assignee: Wes McKinney
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.15.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Converting a large numpy string array to Arrow fails with
> ArrowCapacityError: Encoded string length exceeds maximum size (2GB)
> instead of producing a chunked array.
>  
> A reproducible example:
> {code:python}
> import uuid
> import numpy as np
> import pyarrow as pa
>
> # 100 million 32-character hex strings; numpy stores them as UTF-32,
> # so the encoded size far exceeds the 2GB limit of a single Arrow array.
> li = [uuid.uuid4().hex for _ in range(100000000)]
> arr = np.array(li)
> parr = pa.array(arr)  # raises ArrowCapacityError
> {code}
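> On the affected versions (0.13.0, 0.14.0), a workaround is to convert the array in slices and assemble a ChunkedArray by hand. A minimal sketch, not part of the original report; the input size and chunk size are arbitrary, and the chunk size should be tuned so each slice stays well under the 2GB limit:
> {code:python}
> import uuid
> import numpy as np
> import pyarrow as pa
>
> li = [uuid.uuid4().hex for _ in range(1000000)]
> arr = np.array(li)
>
> # Convert fixed-size slices and combine them into one ChunkedArray,
> # sidestepping the single-array capacity check.
> chunk_size = 100000  # hypothetical value; tune to the data
> chunks = [pa.array(arr[i:i + chunk_size])
>           for i in range(0, len(arr), chunk_size)]
> parr = pa.chunked_array(chunks)
> {code}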
> Is this a regression, or was [https://github.com/apache/arrow/issues/1855] never properly fixed?



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
