[ 
https://issues.apache.org/jira/browse/BEAM-8933?focusedWorklogId=616518&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616518
 ]

ASF GitHub Bot logged work on BEAM-8933:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 29/Jun/21 14:24
            Start Date: 29/Jun/21 14:24
    Worklog Time Spent: 10m 
      Work Description: MiguelAnzoWizeline commented on a change in pull request #14586:
URL: https://github.com/apache/beam/pull/14586#discussion_r660670298



##########
File path: sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/BigQueryStorageSourceBase.java
##########
@@ -163,4 +187,23 @@
  public BoundedReader<T> createReader(PipelineOptions options) throws IOException {
    throw new UnsupportedOperationException("BigQuery storage source must be split before reading");
   }
+
+  /*private static org.apache.arrow.vector.types.pojo.Schema convertArrowSchema(
+          ArrowSchema arrowSchema) throws IOException {
+    CodedOutputStream codedOutputStream = CodedOutputStream.newInstance(new ByteArrayOutputStream());
+    return org.apache.arrow.vector.types.pojo.Schema.deserialize(bb);
+  }*/

Review comment:
       Done

##########
File path: sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/BigQueryStorageSourceBase.java
##########
@@ -163,4 +187,23 @@
  public BoundedReader<T> createReader(PipelineOptions options) throws IOException {
    throw new UnsupportedOperationException("BigQuery storage source must be split before reading");
   }
+
+  /*private static org.apache.arrow.vector.types.pojo.Schema convertArrowSchema(
+          ArrowSchema arrowSchema) throws IOException {
+    CodedOutputStream codedOutputStream = CodedOutputStream.newInstance(new ByteArrayOutputStream());
+    return org.apache.arrow.vector.types.pojo.Schema.deserialize(bb);
+  }*/
+
+  private static ArrowSchema convertArrowSchema(
+      org.apache.arrow.vector.types.pojo.Schema arrowSchema) {
+    ByteArrayOutputStream byteOutputStream = new ByteArrayOutputStream();
+    try {
+      MessageSerializer.serialize(
+          new WriteChannel(Channels.newChannel(byteOutputStream)), arrowSchema);
+    } catch (IOException ex) {
+      throw new RuntimeException("Failed to serialize arrow schema.", ex);
+    }
+    ByteString byteString = ByteString.copyFrom(byteOutputStream.toByteArray());
+    return ArrowSchema.newBuilder().setSerializedSchema(byteString).build();
+  }

Review comment:
       Done
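
For context on the helper above, here is a minimal standalone sketch (not the PR's code) of the same serialization step together with the inverse read path this class uses: serialize an Arrow schema with MessageSerializer and WriteChannel, wrap the bytes in the ArrowSchema proto, then read the schema back with Beam's ArrowConversion.arrowSchemaFromInput. The ArrowSchema proto package and the field names ("name", "value") are assumptions made for the example.

import static java.util.Arrays.asList;

import com.google.cloud.bigquery.storage.v1.ArrowSchema;
import com.google.protobuf.ByteString;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.channels.Channels;
import org.apache.arrow.vector.ipc.WriteChannel;
import org.apache.arrow.vector.ipc.message.MessageSerializer;
import org.apache.arrow.vector.types.pojo.ArrowType;
import org.apache.arrow.vector.types.pojo.Field;
import org.apache.arrow.vector.types.pojo.Schema;
import org.apache.beam.sdk.extensions.arrow.ArrowConversion;

public class ArrowSchemaRoundTrip {
  public static void main(String[] args) throws IOException {
    // A toy Arrow schema; the field names are hypothetical.
    Schema original =
        new Schema(
            asList(
                Field.nullable("name", new ArrowType.Utf8()),
                Field.nullable("value", new ArrowType.Int(64, true))));

    // Serialize the schema into Arrow IPC bytes, as convertArrowSchema above does.
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    MessageSerializer.serialize(new WriteChannel(Channels.newChannel(out)), original);
    ArrowSchema proto =
        ArrowSchema.newBuilder()
            .setSerializedSchema(ByteString.copyFrom(out.toByteArray()))
            .build();

    // Read the schema back the same way the read-session path does.
    Schema restored =
        ArrowConversion.arrowSchemaFromInput(proto.getSerializedSchema().newInput());
    System.out.println(original.equals(restored)); // expected: true
  }
}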

##########
File path: sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/BigQueryStorageSourceBase.java
##########
@@ -146,7 +159,18 @@
       return ImmutableList.of();
     }
 
-    Schema sessionSchema = new Schema.Parser().parse(readSession.getAvroSchema().getSchema());
+    Schema sessionSchema;
+    if (readSession.getDataFormat() == DataFormat.ARROW) {
+      org.apache.arrow.vector.types.pojo.Schema schema =
+          ArrowConversion.arrowSchemaFromInput(
+              readSession.getArrowSchema().getSerializedSchema().newInput());
+      org.apache.beam.sdk.schemas.Schema beamSchema =
+          ArrowConversion.ArrowSchemaTranslator.toBeamSchema(schema);
+      sessionSchema = AvroUtils.toAvroSchema(beamSchema);
+    } else {

Review comment:
       Done
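
To see the new branch above in isolation, here is a hedged sketch of the two conversions it chains: an Arrow pojo Schema is translated to a Beam row schema with ArrowConversion.ArrowSchemaTranslator.toBeamSchema, and that Beam schema is turned into an Avro schema with AvroUtils.toAvroSchema, the same calls the hunk uses. The single field in the example is hypothetical.

import java.util.Collections;
import org.apache.arrow.vector.types.pojo.ArrowType;
import org.apache.arrow.vector.types.pojo.Field;
import org.apache.beam.sdk.extensions.arrow.ArrowConversion;
import org.apache.beam.sdk.schemas.utils.AvroUtils;

public class ArrowToAvroSchemaSketch {
  public static void main(String[] args) {
    // A one-field Arrow schema standing in for the read session's schema; hypothetical.
    org.apache.arrow.vector.types.pojo.Schema arrowSchema =
        new org.apache.arrow.vector.types.pojo.Schema(
            Collections.singletonList(Field.nullable("word", new ArrowType.Utf8())));

    // Arrow schema -> Beam row schema, as in the ARROW branch above.
    org.apache.beam.sdk.schemas.Schema beamSchema =
        ArrowConversion.ArrowSchemaTranslator.toBeamSchema(arrowSchema);

    // Beam row schema -> Avro schema, matching what the hunk stores in sessionSchema.
    org.apache.avro.Schema sessionSchema = AvroUtils.toAvroSchema(beamSchema);
    System.out.println(sessionSchema.toString(true));
  }
}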




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 616518)
    Time Spent: 64.5h  (was: 64h 20m)

> BigQuery IO should support reading Arrow format over Storage API
> ----------------------------------------------------------------
>
>                 Key: BEAM-8933
>                 URL: https://issues.apache.org/jira/browse/BEAM-8933
>             Project: Beam
>          Issue Type: Improvement
>          Components: io-java-gcp
>            Reporter: Kirill Kozlov
>            Assignee: Miguel Anzo
>            Priority: P3
>          Time Spent: 64.5h
>  Remaining Estimate: 0h
>
> As of right now, BigQuery IO uses the Avro format for reading and writing.
> We should add a config to BigQueryIO to specify which format to use: Arrow or 
> Avro (with Avro as default).
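
For illustration of the proposed config, a hedged sketch of how a pipeline might request Arrow over the Storage API. The table name is a placeholder, and withFormat(DataFormat.ARROW) is shown as the kind of knob this ticket describes (with Avro remaining the default), not as a confirmed final API.

import com.google.api.services.bigquery.model.TableRow;
import com.google.cloud.bigquery.storage.v1.DataFormat;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.values.PCollection;

public class ArrowReadExample {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Read over the Storage API and request Arrow as the wire format;
    // "my-project:my_dataset.my_table" is a placeholder.
    PCollection<TableRow> rows =
        p.apply(
            "ReadWithArrow",
            BigQueryIO.readTableRows()
                .from("my-project:my_dataset.my_table")
                .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                .withFormat(DataFormat.ARROW));

    p.run().waitUntilFinish();
  }
}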



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
