wjones127 commented on issue #12653:
URL: https://github.com/apache/arrow/issues/12653#issuecomment-1070973156
Hi @eitsupi,
Depending on your memory constraints, you may need to control the batch
size (the number of rows loaded into memory at once) on the scanner:
```python
import pyarrow.dataset as ds

input_dataset = ds.dataset("input")
# Lower batch_size to reduce peak memory usage; the default is 1_000_000 rows.
scanner = input_dataset.scanner(batch_size=100_000)
# Stream batches into the output dataset instead of materializing it all at once.
ds.write_dataset(scanner.to_reader(), "output", format="parquet")
```
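If you want to check that the batch size is respected, or do per-batch processing yourself, you can also iterate over the scanner directly. This is just a minimal sketch using `Scanner.to_batches()`, assuming the same "input" directory as above:

```python
import pyarrow.dataset as ds

dataset = ds.dataset("input")
scanner = dataset.scanner(batch_size=100_000)

# Each item is a RecordBatch of at most ~100_000 rows, so only one batch
# needs to be held in memory at a time.
for batch in scanner.to_batches():
    print(batch.num_rows)
```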
Does that help in your use case?