amol- commented on a change in pull request #10717:
URL: https://github.com/apache/arrow/pull/10717#discussion_r669634388



##########
File path: python/pyarrow/tests/test_ipc.py
##########
@@ -329,6 +329,42 @@ def test_stream_simple_roundtrip(stream_fixture, use_legacy_ipc_format):
         reader.read_next_batch()
 
 
+def test_compression_roundtrip():
+    # The ability to set a seed this way is not present on older versions of
+    # numpy (currently in our python 3.6 CI build).  Some inputs might just
+    # happen to compress the same between the two levels, so using seeded
+    # random numbers is necessary
+    if not hasattr(np.random, 'default_rng'):
+        pytest.skip('Requires newer version of numpy')
+    sink = io.BytesIO()
+    rng = np.random.default_rng(seed=42)
+    values = rng.integers(0, 10, 100000)
+    table = pa.Table.from_arrays([values], names=["values"])
+
+    options = pa.ipc.IpcWriteOptions(compression='zstd', compression_level=1)
+    writer = pa.ipc.RecordBatchFileWriter(sink, table.schema, options=options)

Review comment:
       Not really a major thing, but given that the `RecordBatchFileWriter` can 
act as a context manager and sometimes people look at the tests to learn how to 
use things, it might be a good idea to write this in the form of
   
   ```python
   with pa.ipc.RecordBatchFileWriter(sink, table.schema, options=options) as writer:
       writer.write_table(table)
   ```
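   
   For completeness, here is a rough end-to-end sketch of the same pattern, writing with
   the context manager and then reading the buffer back to check the data survives. The
   read side (`pa.BufferReader` plus `pa.ipc.open_file`) is only an illustration and is
   not part of the diff above:
   
   ```python
   import io
   
   import numpy as np
   import pyarrow as pa
   
   # Same setup as the test above: seeded random data so the result is reproducible.
   sink = io.BytesIO()
   rng = np.random.default_rng(seed=42)
   values = rng.integers(0, 10, 100000)
   table = pa.Table.from_arrays([values], names=["values"])
   
   options = pa.ipc.IpcWriteOptions(compression='zstd', compression_level=1)
   
   # The writer is closed (and the file footer finalized) when the block exits.
   with pa.ipc.RecordBatchFileWriter(sink, table.schema, options=options) as writer:
       writer.write_table(table)
   
   # Read the compressed file back and verify the roundtrip.
   result = pa.ipc.open_file(pa.BufferReader(sink.getvalue())).read_all()
   assert result.equals(table)
   ```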




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

