niyue commented on code in PR #13041:
URL: https://github.com/apache/arrow/pull/13041#discussion_r870974565
##########
python/pyarrow/tests/test_flight.py:
##########
@@ -2027,6 +2027,27 @@ def test_large_descriptor():
client.do_exchange(large_descriptor)
+def test_write_batch_custom_metadata():
+ data = pa.Table.from_arrays([
+ pa.array(range(0, 10 * 1024))
+ ], names=["a"])
+ batches = data.to_batches()
+
+ with ExchangeFlightServer() as server, \
+ FlightClient(("localhost", server.port)) as client:
+ descriptor = flight.FlightDescriptor.for_command(b"put")
+ writer, reader = client.do_exchange(descriptor)
+ with writer:
+ writer.begin(data.schema)
+ for i, batch in enumerate(batches):
+ writer.write_batch(batch, {"batch_id": str(i)})
+ writer.done_writing()
+ chunk = reader.read_chunk()
+ assert chunk.data is None
+ expected_buf = str(len(batches)).encode("utf-8")
+ assert chunk.app_metadata == expected_buf
Review Comment:
You are correct here: I didn't verify that the metadata was received. I should have pointed this out earlier.
Initially, I didn't want to change the Arrow Flight API because it isn't directly relevant to this PR. However, after I added an overloaded `WriteRecordBatch` method to `CRecordBatchWriter`, Cython complained that the number of arguments was incorrect in one of Arrow Flight's usages, so I made a slight change to the `write_batch` API in the Arrow Flight implementation. I wanted to write a test case to cover that change, but since I didn't want to touch the other C++ readers in this PR, I couldn't add more assertions to this test case.
I have now rolled back the API change to Arrow Flight's `write_batch` and removed this test case completely. I am not very familiar with this part of the Arrow Flight API yet, and may submit another PR later if needed.