[ 
https://issues.apache.org/jira/browse/ARROW-1311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16112378#comment-16112378
 ] 

Uwe L. Korn commented on ARROW-1311:
------------------------------------

Two things I see in this BT: 

* The size change is quite small: {{size=2097216, oldsize=2097152}}, i.e. an increase of only 64 bytes.
* We're in the area of 2GiB of allocated memory and want to expand the region by a single page, not perform a full re-allocation.

[~K94] It would be nice if you could run into the problematic situation 
again and capture a more detailed backtrace with {{thread apply all bt full}} 
instead of {{bt}} in {{gdb}}. This would help me understand the problem better. 
Sadly, I have so far failed to reproduce it locally.
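For reference, attaching to the hung process and capturing the full per-thread backtrace might look like this (the PID placeholder stands for the hung Python process; adjust as needed):

```
$ gdb -p <pid-of-hung-python-process>
(gdb) set pagination off
(gdb) thread apply all bt full
(gdb) detach
(gdb) quit
```

Disabling pagination avoids having to press Enter repeatedly while the long output scrolls by, which makes it easier to copy the whole trace into an attachment.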

> python hangs after write a few parquet tables
> ---------------------------------------------
>
>                 Key: ARROW-1311
>                 URL: https://issues.apache.org/jira/browse/ARROW-1311
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.5.0
>         Environment: Python 3.5.2, pyarrow 0.5.0
>            Reporter: Keith Curtis
>            Assignee: Wes McKinney
>             Fix For: 0.6.0
>
>         Attachments: backtrace.txt
>
>
> I had a program to read some csv files (a few million rows each, 9 columns), 
> and converted with:
> ```python
> import os
> import pandas as pd
> import pyarrow.parquet as pq
> import pyarrow
> def to_parquet(output_file, csv_file):
>     df = pd.read_csv(csv_file)
>     table = pyarrow.Table.from_pandas(df)
>     pq.write_table(table, output_file)
> ```
> The first csv file would always complete, but python would hang on the second 
> or third file, and sometimes on a much later file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
