[ https://issues.apache.org/jira/browse/ARROW-5966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887730#comment-16887730 ]
Igor Yastrebov commented on ARROW-5966:
---------------------------------------
Yes, using pa.array() on a Python list works, and creating the numpy array
with np.array(li, dtype='bytes_') instead of the default unicode dtype also works.
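
A minimal sketch of both workarounds at a smaller scale (sizes here are
illustrative; reproducing the overflow itself needs the full 100M rows):
{code:python}
import uuid

import numpy as np
import pyarrow as pa

li = [uuid.uuid4().hex for _ in range(1000)]  # illustrative size

# Workaround 1: convert the Python list directly. For inputs whose encoded
# size exceeds 2GB, pa.array() can return a pyarrow.ChunkedArray instead of
# raising ArrowCapacityError.
parr = pa.array(li)

# Workaround 2: build the numpy array with a bytes dtype ('S32' here) rather
# than the default UTF-32 '<U32'; pyarrow converts it to a binary array.
barr = np.array(li, dtype='bytes_')
parr2 = pa.array(barr)
{code}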
> [Python] Capacity error when converting large UTF32 numpy array to arrow array
> ------------------------------------------------------------------------------
>
> Key: ARROW-5966
> URL: https://issues.apache.org/jira/browse/ARROW-5966
> Project: Apache Arrow
> Issue Type: Bug
> Components: Python
> Affects Versions: 0.13.0, 0.14.0
> Reporter: Igor Yastrebov
> Priority: Major
>
> Trying to create a large string array fails with
> ArrowCapacityError: Encoded string length exceeds maximum size (2GB)
> instead of creating a chunked array.
>
> A reproducible example:
> {code:python}
> import uuid
> import numpy as np
> import pyarrow as pa
>
> li = []
> for i in range(100000000):
>     li.append(uuid.uuid4().hex)
> arr = np.array(li)   # dtype '<U32': UTF-32, 4 bytes per character
> parr = pa.array(arr) # raises ArrowCapacityError
> {code}
> Is this a regression, or was
> [https://github.com/apache/arrow/issues/1855] never properly fixed?
>
>
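
For context, a minimal sketch of the chunked result the report asks for:
Arrow's utf8 type uses 32-bit offsets, capping a single array's character
data at 2 GiB, so a conversion this large has to be split across chunks. The
chunk size below is illustrative, not what Arrow itself would choose:
{code:python}
import uuid

import numpy as np
import pyarrow as pa

arr = np.array([uuid.uuid4().hex for _ in range(1000)])  # illustrative size

# Split the numpy array into slices, convert each slice separately, and wrap
# the pieces in a ChunkedArray; each chunk stays under the 2GB offset limit.
chunk_size = 250  # illustrative; real code would size chunks by encoded bytes
chunks = [pa.array(arr[i:i + chunk_size])
          for i in range(0, len(arr), chunk_size)]
parr = pa.chunked_array(chunks)
{code}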