TheNeuralBit commented on a change in pull request #14586:
URL: https://github.com/apache/beam/pull/14586#discussion_r645147684



##########
File path: sdks/java/io/google-cloud-platform/src/test/java/org/apache/beam/sdk/io/gcp/bigquery/BigQueryIOStorageReadTest.java
##########
@@ -1351,4 +1353,20 @@ public void testReadFromBigQueryIOWithTrimmedSchema() throws Exception {
 
     p.run();
   }
+
+  private static org.apache.arrow.vector.types.pojo.Field field(

Review comment:
Oh shoot, sorry about that; I completely missed that there is a separate BigQuery `ArrowRecordBatch`. I think your proposed approach is mostly correct, but rather than parsing the serialized record batch, you'll want to make a builder and set the appropriate values (e.g. with [setSerializedRecordBatch](https://googleapis.dev/java/google-cloud-bigquerystorage/1.8.3/com/google/cloud/bigquery/storage/v1/ArrowRecordBatch.Builder.html#setSerializedRecordBatch-com.google.protobuf.ByteString-)):
   
   ```java
ArrowRecordBatch bigqueryBatch =
    ArrowRecordBatch.newBuilder()
        .setRowCount(..)  // number of rows in the serialized batch
        .setSerializedRecordBatch(serializedBytes)
        .build();
   ```
   
> Is it correct for the Arrow code to use `org.apache.arrow.vector.ipc.message.ArrowRecordBatch` instead of `com.google.cloud.bigquery.storage.v1.ArrowRecordBatch`?
   
Yes, this is correct. The BigQuery `ArrowRecordBatch` is only relevant to BigQueryIO, while `ArrowConversion` should stay general purpose (there may be other IOs that produce Arrow data in the future).



