jorisvandenbossche commented on issue #37989:
URL: https://github.com/apache/arrow/issues/37989#issuecomment-1745488061

   @RizzoV thanks for the report and nice reproducer!
   
   I can reproduce this running your example with memray:
   
   
![newplot(2)](https://github.com/apache/arrow/assets/1020496/b0d1ec9e-550f-4ddc-8b7c-86b782cf9347)
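
   For reference, a minimal sketch of how such a profile can be produced with memray's tracker API (assuming memray is installed; `repro`/`main` are placeholders for the reproducer script, and the `memray run` / `memray flamegraph` CLI workflow gives the same result):

   ```python
   # Sketch only: run the reproducer under memray's tracker (pip install memray).
   # `repro` / main() are placeholders for the reproducer script; the output
   # file name is arbitrary.
   import memray

   from repro import main  # hypothetical module containing the reproducer

   with memray.Tracker("repro.bin"):
       main()

   # Afterwards, render the report with e.g. `memray flamegraph repro.bin`
   ```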
   
   From the memray stats, it looks like the memory being held at the end mostly comes from the lists of strings, so somehow the conversion to Arrow seems to keep those list objects alive (I haven't yet looked at how that is possible, though).
   The pandas metadata conversion (the JSON dump) also seems to accumulate memory, although that's a bit strange (and I don't see that in the smaller reproducer below).
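
   (For context, the "pandas metadata" is the JSON description of the original DataFrame that `Table.from_pandas` stores in the Arrow schema metadata; a small illustration of where that JSON ends up, with default options:)

   ```python
   # Sketch: Table.from_pandas serializes a description of the DataFrame as
   # JSON and stores it under the b"pandas" key of the schema metadata.
   import pandas as pd
   import pyarrow as pa

   table = pa.Table.from_pandas(pd.DataFrame({"x": [1, 2, 3]}))
   print(table.schema.metadata[b"pandas"][:100])  # start of the JSON dump
   ```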
   
   It seems this specifically happens when a list is nested inside another column (e.g. a struct of list), so I can also reproduce the observation with this simplified example:
   
   ```python
   import string
   from random import choice
   
   import pandas as pd
   import pyarrow as pa
   
   
   sample_schema = pa.struct(
       [
           ( "a", pa.struct([("aa", pa.list_(pa.string()))])),
       ]
   )
   
   
   def generate_random_string(str_length: int) -> str:
       return "".join(
           [choice(string.ascii_lowercase + string.digits) for n in range(str_length)]
       )
   
   
   def generate_random_data():
       return {
           "a": [{"aa": [generate_random_string(128) for i in range(50)]}],
       }
   
   
   def main():
       for i in range(10000):
           df = pd.DataFrame.from_dict(generate_random_data())
           # pa.jemalloc_set_decay_ms(0)
           table = pa.Table.from_pandas(df, schema=pa.schema(sample_schema))
   
   
   if __name__ == "__main__":
       main()
   ```
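
   A rough way to confirm that the Python list objects are being kept alive, without memray, is to count gc-tracked lists before and after running the loop (a sketch only; the exact numbers will of course vary):

   ```python
   # Sketch: count live Python list objects before/after running main() above.
   # If the conversion keeps the nested lists alive, the count grows roughly
   # with the number of loop iterations.
   import gc


   def count_live_lists() -> int:
       return sum(isinstance(obj, list) for obj in gc.get_objects())


   before = count_live_lists()
   main()
   gc.collect()
   print("live list objects:", before, "->", count_live_lists())
   ```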
   

