[ 
https://issues.apache.org/jira/browse/BEAM-8933?focusedWorklogId=609947&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-609947
 ]

ASF GitHub Bot logged work on BEAM-8933:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 10/Jun/21 21:24
            Start Date: 10/Jun/21 21:24
    Worklog Time Spent: 10m 
      Work Description: MiguelAnzoWizeline commented on a change in pull 
request #14586:
URL: https://github.com/apache/beam/pull/14586#discussion_r649542082



##########
File path: 
sdks/java/io/google-cloud-platform/src/test/java/org/apache/beam/sdk/io/gcp/bigquery/BigQueryIOStorageReadTest.java
##########
@@ -1351,4 +1353,20 @@ public void testReadFromBigQueryIOWithTrimmedSchema() 
throws Exception {
 
     p.run();
   }
+
+  private static org.apache.arrow.vector.types.pojo.Field field(

Review comment:
       Hi @TheNeuralBit, I'm having some problems writing the tests for the Arrow read. During deserialization of the `ArrowRecordBatch` I'm getting the error `Expected RecordBatch but header was 0`:
   
   `at 
org.apache.arrow.vector.ipc.message.MessageSerializer.deserializeRecordBatch(MessageSerializer.java:360)`
   `at 
org.apache.beam.sdk.extensions.arrow.ArrowConversion.rowFromSerializedRecordBatch(ArrowConversion.java:260)`
   
   I think the problem is related to how I serialize the `ArrowRecordBatch` in the test, or to how it is deserialized in `ArrowConversion`. The error specifically makes me believe that the framing is getting lost when converting between the `RecordBatch` type in the Arrow library and the one in the BigQuery library, but I'm really not that knowledgeable about Arrow, so I'm getting a little lost finding a solution.
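   For what it's worth, a minimal sketch of serializing a bare `ArrowRecordBatch` for a test like this. This is an assumption about the cause, not a confirmed diagnosis: `header was 0` is `MessageHeader.NONE`, which usually means the byte stream starts with something other than a RecordBatch message (e.g. a stream schema header or an end-of-stream marker), so the sketch writes only the single RecordBatch IPC message:

```java
import java.io.ByteArrayOutputStream;
import java.nio.channels.Channels;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.BigIntVector;
import org.apache.arrow.vector.VectorSchemaRoot;
import org.apache.arrow.vector.VectorUnloader;
import org.apache.arrow.vector.ipc.WriteChannel;
import org.apache.arrow.vector.ipc.message.ArrowRecordBatch;
import org.apache.arrow.vector.ipc.message.MessageSerializer;

public class SerializeRecordBatchSketch {
  public static void main(String[] args) throws Exception {
    try (RootAllocator allocator = new RootAllocator(Long.MAX_VALUE);
        BigIntVector vector = new BigIntVector("value", allocator)) {
      vector.allocateNew(2);
      vector.set(0, 1L);
      vector.set(1, 2L);
      vector.setValueCount(2);
      try (VectorSchemaRoot root = VectorSchemaRoot.of(vector);
          ArrowRecordBatch batch = new VectorUnloader(root).getRecordBatch()) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // Write only the RecordBatch IPC message, with no stream schema
        // header and no end-of-stream marker, so a reader calling
        // MessageSerializer.deserializeRecordBatch sees a RecordBatch
        // header first rather than MessageHeader.NONE.
        MessageSerializer.serialize(new WriteChannel(Channels.newChannel(out)), batch);
        byte[] serialized = out.toByteArray();
        System.out.println("serialized " + serialized.length + " bytes");
      }
    }
  }
}
```

   If the test instead serializes the batch through a stream writer (which prepends a schema message), the deserializer would need to skip that message first, which would also explain the mismatch.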




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 609947)
    Time Spent: 49h 20m  (was: 49h 10m)

> BigQuery IO should support reading Arrow format over Storage API
> ----------------------------------------------------------------
>
>                 Key: BEAM-8933
>                 URL: https://issues.apache.org/jira/browse/BEAM-8933
>             Project: Beam
>          Issue Type: Improvement
>          Components: io-java-gcp
>            Reporter: Kirill Kozlov
>            Assignee: Miguel Anzo
>            Priority: P3
>          Time Spent: 49h 20m
>  Remaining Estimate: 0h
>
> As of right now, BigQueryIO uses the Avro format for reading and writing.
> We should add a config option to BigQueryIO specifying which format to use:
> Arrow or Avro (with Avro as the default).
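A hedged sketch of what the proposed config might look like on the read side. The names here (`withFormat`, `DataFormat.ARROW`) mirror the BigQuery Storage API's `DataFormat` enum but are assumptions about the eventual Beam surface, not confirmed API:

```java
// Hypothetical usage fragment: opting into Arrow for Storage API reads.
// Avro would remain the default when withFormat is not called.
PCollection<TableRow> rows =
    pipeline.apply(
        BigQueryIO.readTableRows()
            .from("project:dataset.table")
            .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
            .withFormat(DataFormat.ARROW));
```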



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
