rowillia opened a new issue, #36879:
URL: https://github.com/apache/arrow/issues/36879

   ### Describe the bug, including details regarding any error messages, version, and platform.
   
   pyarrow version: 12.0.1
   
   Minimal Repro:
   ```python
   import pyarrow as pa
   
   ROW_COUNT = 15_000
   query = pa.Table.from_pydict({
       "key": [bytes.fromhex('ABC123'), bytes.fromhex('DEF431')]
   })
   t = pa.Table.from_pydict(
       {
           "blob_2": [x.to_bytes(4, 'little') * 25_000 for x in range(ROW_COUNT)],
           "blob_1": [x.to_bytes(4, 'little') * 25_000 for x in range(ROW_COUNT)],
           "key": [x.to_bytes(4, 'little') for x in range(ROW_COUNT)]
       },
       schema=pa.schema(
           [
               ("blob_2", pa.large_binary()),
               ("blob_1", pa.large_binary()),
               ("key", pa.binary()),
           ]
       )
   )
   print(f"Table size: {t.nbytes} bytes (bigger than INT_MAX: {t.nbytes > (1 << 31)})")
   for x in range(10):
       print(f"Attempt {x}")
       _ = query.join(t, 'key', join_type="inner")
   
   ```
   
   Executing this crashes the process, aborting with:
   ```
   terminate called after throwing an instance of 'std::length_error'
     what():  vector::_M_default_append
   ```
   
   Note that flipping the join (e.g. `_ = t.join(query, 'key', join_type="inner")`) no longer crashes.
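
   Based on that observation, a possible interim workaround is to put the large table on the left-hand side of the join and restore the expected column order afterwards. This is a minimal sketch with tiny placeholder tables (the real repro above needs ~3 GB); it assumes the column-order difference between the two join directions is the only semantic change for an inner join on a unique key:

   ```python
   import pyarrow as pa

   # Small stand-ins for the reporter's `query` (small, left) and `t` (large, right).
   query = pa.Table.from_pydict({"key": [b"\x01", b"\x02"]})
   t = pa.Table.from_pydict({
       "blob": [b"a" * 8, b"b" * 8, b"c" * 8],
       "key": [b"\x01", b"\x02", b"\x03"],
   })

   # Equivalent rows to query.join(t, 'key', join_type="inner"), but with the
   # large table on the left, which avoids the crash per the report above.
   joined = t.join(query, "key", join_type="inner")

   # Reorder columns to match what query.join(t, ...) would have produced.
   joined = joined.select(["key", "blob"])
   print(joined.num_rows)
   ```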
   
   ### Component(s)
   
   C++

