wjones127 commented on issue #12652:
URL: https://github.com/apache/arrow/issues/12652#issuecomment-1071004210


   @kato1208 The preferred way to fix the overflow is to change your schema to 
use types that are appropriate for the data. Look at your first example: the 
`"id"` value starts as `3046682132`, but after conversion to Arrow (with 
`safe=False`) it comes out as `-1248285164`. I doubt that's what you actually want.
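   
   To see where that number comes from, here's a minimal sketch of the same 
wraparound using a plain cast (the value is taken from your example; the 
snippet itself is just for illustration):
   
   ```python
   import pyarrow as pa
   
   # The value is inferred as int64; forcing it into int32 with safe=False
   # keeps only the low 32 bits, so it wraps around instead of raising.
   arr = pa.array([3046682132]).cast(pa.int32(), safe=False)
   print(arr[0])  # -> -1248285164  (i.e. 3046682132 - 2**32)
   ```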
   
   That id value is simply too large to fit into an int32. If you change the 
schema to use int64, you can run this without any overflow errors, without 
needing `safe=False`, and you will get the correct data.
   
   ```python
   import pyarrow as pa
   
   schema = pa.schema([
       pa.field('name', pa.string()),
       pa.field('id', pa.int64()),
       pa.field("points", pa.list_(pa.int32())),
       pa.field('groups', pa.struct([
           pa.field("group_name", pa.string()),
           pa.field("group_id", pa.int64()),
       ])),
   ])
   ```
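   
   For example, continuing with that schema (the sample row below is made up 
just to mirror the shape of your data, not taken from your reproducer):
   
   ```python
   # Hypothetical row shaped like the original example; with int64 id fields
   # the large value round-trips exactly and no safe=False is needed.
   table = pa.Table.from_pydict({
       "name": ["foo"],
       "id": [3046682132],
       "points": [[1, 2, 3]],
       "groups": [{"group_name": "bar", "group_id": 3046682132}],
   }, schema=schema)
   
   print(table.column("id"))  # -> 3046682132, unchanged
   ```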
   
   

