tomaslink commented on issue #38017:
URL: https://github.com/apache/beam/issues/38017#issuecomment-4192700154

   > I think you get "Failed to copy Non partitioned table to Column 
partitioned table: not supported" when FILE_LOADS tries to load a large volume 
of data into your DAY-partitioned BigQuery table via an additional copy through 
a temporary non-partitioned table. That copy is probably what causes the error 
(with small data sizes there is no need to copy through a temporary 
non-partitioned table). So, assuming you want to keep using FILE_LOADS, you 
should lower triggering_frequency (sorry, I cannot give you the right value; 
you might need to experiment with it).
   > 
   > One more alternative is to set withNumFileShards(), forcing it lower; 
that may help keep the direct copy. But I would not pursue this option, as it 
requires even more tuning to get good results.
   > 
   > Finally, please keep in mind that STORAGE_WRITE_API is the recommended 
method (in some cases it is less cost-effective, but it is more reliable).
   
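   The temp-table reasoning above can be sanity-checked with simple arithmetic: 
at a roughly steady ingest rate, each FILE_LOADS load job accumulates about 
rate × triggering_frequency bytes, so lowering the frequency interval shrinks 
each load proportionally. A minimal sketch (the 5 MiB/s rate and the interval 
values are made-up numbers, not recommendations from this thread):

   ```python
   def bytes_per_load(ingest_bytes_per_sec: float,
                      triggering_frequency_sec: float) -> float:
       """Rough estimate of data accumulated per FILE_LOADS load job."""
       return ingest_bytes_per_sec * triggering_frequency_sec

   rate = 5 * 1024 * 1024  # hypothetical: 5 MiB/s ingest
   print(bytes_per_load(rate, 600) / 2**30)  # 600 s trigger -> ~2.93 GiB/load
   print(bytes_per_load(rate, 60) / 2**30)   # 60 s trigger  -> ~0.29 GiB/load
   ```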
   @apanich Thanks!
   
   Yes, I know `STORAGE_WRITE_API` is the recommended way, and it works well 
for me, but it is too expensive for our use case. 
   I will try to play with those parameters. Do you think that controlling what 
goes into the `WriteToBigQuery` PTransform could also help? Something like 
controlling the batching of the PCollection? 
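   For reference, this is roughly the configuration fragment I mean when 
lowering `triggering_frequency` on the FILE_LOADS path (a sketch only: the 
table, schema, and 60-second value are placeholders, and the 
`additional_bq_parameters` partitioning spec just mirrors the DAY-partitioned 
sink discussed above):

   ```python
   import apache_beam as beam

   write = beam.io.WriteToBigQuery(
       'my-project:my_dataset.my_table',   # hypothetical table
       schema='ts:TIMESTAMP,value:FLOAT',  # hypothetical schema
       method=beam.io.WriteToBigQuery.Method.FILE_LOADS,
       triggering_frequency=60,            # seconds; lower => smaller loads
       additional_bq_parameters={
           # create the destination as DAY-partitioned if it does not exist
           'timePartitioning': {'type': 'DAY'},
       },
   )
   ```

   In a streaming pipeline this would be applied as `events | write`.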
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to