lidavidm commented on code in PR #47410:
URL: https://github.com/apache/arrow/pull/47410#discussion_r2297807174


##########
cpp/src/arrow/flight/transport_server.cc:
##########
@@ -160,25 +161,63 @@ class TransportMessageReader final : public FlightMessageReader {
   std::shared_ptr<Buffer> app_metadata_;
 };
 
-// TODO(ARROW-10787): this should use the same writer/ipc trick as client
+/// \brief An IpcPayloadWriter for ServerDataStream.
+///
+/// To support app_metadata and reuse the existing IPC infrastructure,
+/// this takes a pointer to a buffer to be combined with the IPC
+/// payload when writing a Flight payload.
+class TransportMessagePayloadWriter : public ipc::internal::IpcPayloadWriter {
+ public:
+  TransportMessagePayloadWriter(ServerDataStream* stream,
+                                std::shared_ptr<Buffer>* app_metadata)
+      : stream_(stream), app_metadata_(app_metadata) {}
+
+  Status Start() override { return Status::OK(); }
+  Status WritePayload(const ipc::IpcPayload& ipc_payload) override {
+    FlightPayload payload;
+    payload.ipc_message = ipc_payload;
+
+    if (ipc_payload.type == ipc::MessageType::RECORD_BATCH && *app_metadata_) {
+      payload.app_metadata = std::move(*app_metadata_);
+    }
+    ARROW_ASSIGN_OR_RAISE(auto success, stream_->WriteData(payload));
+    if (!success) {
+      return arrow::Status(arrow::StatusCode::IOError,
+                           "Could not write record batch to stream");
+    }
+    return arrow::Status::OK();
+  }
+  Status Close() override {
+    // Closing is handled one layer up in TransportMessageWriter::Close

Review Comment:
   Well, ideally we would just expose gRPC, and then the server could decide when it wants to end its output (it can continue reading even after doing so).
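   
   For illustration, here is a minimal sketch of that idea written directly against the generated gRPC handler. It assumes the stubs generated from Flight.proto (service `FlightService`, message `FlightData`); the `EchoExchangeService` name and the include path are made up for the example and are not Arrow APIs. The point is that the handler simply stops calling `Write()` when it is done producing output, but keeps calling `Read()` until the client's stream is exhausted; the trailing status is what actually ends the call.
   
   ```cpp
   #include <grpcpp/grpcpp.h>
   
   #include "Flight.grpc.pb.h"  // generated from Flight.proto; path is illustrative
   
   namespace pb = arrow::flight::protocol;
   
   // Hypothetical DoExchange handler, not part of Arrow.
   class EchoExchangeService final : public pb::FlightService::Service {
     grpc::Status DoExchange(
         grpc::ServerContext* context,
         grpc::ServerReaderWriter<pb::FlightData, pb::FlightData>* stream) override {
       pb::FlightData chunk;
       bool writes_done = false;
       while (stream->Read(&chunk)) {
         if (!writes_done) {
           // Respond while the server still has output; once it decides it is
           // finished sending (here: after the first chunk), it just stops
           // calling Write()...
           stream->Write(chunk);
           writes_done = true;
         }
         // ...but keeps calling Read() to drain the rest of the client's input.
       }
       return grpc::Status::OK;  // the trailing status ends the call for both sides
     }
   };
   ```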


