[ 
https://issues.apache.org/jira/browse/BEAM-3683?focusedWorklogId=646491&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646491
 ]

ASF GitHub Bot logged work on BEAM-3683:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 03/Sep/21 22:02
            Start Date: 03/Sep/21 22:02
    Worklog Time Spent: 10m 
      Work Description: reuvenlax commented on pull request #4694:
URL: https://github.com/apache/beam/pull/4694#issuecomment-912833275


   3-year-old PR?
   
   writeTableRows is often more expensive because it relies on JSON objects.
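
   A minimal sketch (not from this PR) of the difference the comment points at:
   BigQueryIO.writeTableRows() carries TableRow (JSON-style map) objects through
   the whole pipeline, whereas BigQueryIO.write() with a format function keeps a
   lighter domain type and converts to TableRow only at the sink. The Click class
   and the table spec are illustrative, not from the issue.

       import com.google.api.services.bigquery.model.TableRow;
       import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
       import org.apache.beam.sdk.values.PCollection;

       class BigQueryWriteStyles {
         // Illustrative domain type (it would need a coder to live in a PCollection).
         static class Click implements java.io.Serializable {
           String user;
           long tsMillis;
         }

         // Style 1: the whole PCollection consists of TableRow (JSON map) objects.
         static void writeAsTableRows(PCollection<TableRow> rows) {
           rows.apply(
               BigQueryIO.writeTableRows()
                   .to("my-project:my_dataset.clicks")
                   // Assume the table already exists, so no schema is needed here.
                   .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER));
         }

         // Style 2: keep the domain type in the pipeline and convert only at the sink.
         static void writeTyped(PCollection<Click> clicks) {
           clicks.apply(
               BigQueryIO.<Click>write()
                   .to("my-project:my_dataset.clicks")
                   .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_NEVER)
                   .withFormatFunction(
                       c -> new TableRow().set("user", c.user).set("ts_millis", c.tsMillis)));
         }
       }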


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 646491)
    Time Spent: 1h 10m  (was: 1h)

> Support BigQuery column-based time partitioning
> -----------------------------------------------
>
>                 Key: BEAM-3683
>                 URL: https://issues.apache.org/jira/browse/BEAM-3683
>             Project: Beam
>          Issue Type: Bug
>          Components: io-java-gcp
>            Reporter: Eugene Kirpichov
>            Assignee: Eugene Kirpichov
>            Priority: P2
>             Fix For: 2.4.0
>
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> BigQuery now supports tables partitioned by a DATE or TIMESTAMP column. This
> is very useful for backfilling, because it no longer requires one load job
> per partition (a single load job for the whole table suffices), and in the
> case of BigQueryIO.write(), it doesn't require DynamicDestinations - one only
> needs to specify which field to partition on.
> The field is specified via TimePartitioning.field:
> [https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.load]
> (configuration.load.timePartitioning.field).
> It seems the only thing needed is to update the BigQuery client - then users
> can use BigQueryIO.write().withTimePartitioning() in some cases where they
> previously needed write().to(DynamicDestinations). A usage sketch follows
> after this description.
> Plus publicity (e.g. a Stack Overflow answer).
> CC: [~reuvenlax] [~chamikara]
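
A minimal usage sketch of the feature described above, assuming Beam 2.4.0 or
later; the project/dataset/table spec and the column names ("user", "event_ts")
are illustrative, not taken from the issue:

    import com.google.api.services.bigquery.model.TableFieldSchema;
    import com.google.api.services.bigquery.model.TableRow;
    import com.google.api.services.bigquery.model.TableSchema;
    import com.google.api.services.bigquery.model.TimePartitioning;
    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.TableRowJsonCoder;
    import org.apache.beam.sdk.transforms.Create;

    public class ColumnPartitionedWrite {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        TableSchema schema =
            new TableSchema()
                .setFields(
                    Arrays.asList(
                        new TableFieldSchema().setName("user").setType("STRING"),
                        new TableFieldSchema().setName("event_ts").setType("TIMESTAMP")));

        p.apply(
                Create.of(
                        new TableRow()
                            .set("user", "alice")
                            .set("event_ts", "2021-09-03 22:02:00 UTC"))
                    .withCoder(TableRowJsonCoder.of()))
            .apply(
                BigQueryIO.writeTableRows()
                    .to("my-project:my_dataset.events")
                    .withSchema(schema)
                    // Column-based partitioning: partition on the event_ts column
                    // rather than ingestion time, so a single load job covers all
                    // partitions and no DynamicDestinations is needed.
                    .withTimePartitioning(
                        new TimePartitioning().setType("DAY").setField("event_ts"))
                    .withCreateDisposition(
                        BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
                    .withWriteDisposition(
                        BigQueryIO.Write.WriteDisposition.WRITE_APPEND));

        p.run().waitUntilFinish();
      }
    }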



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
