Hello,

I see that there are two implementations for reading from and writing to
Bigtable: one in Beam (BigtableIO) and one that is referenced in the Google
Cloud documentation. Is one preferred over the other? We usually use the Beam
BigtableIO to write to Bigtable, but I have found that the default
configuration can sometimes generate a large number of write requests, which
in turn seems to lead to more nodes and therefore more cost. I am about to
experiment with the bulk options to see whether that increases the batching
of mutations (a rough sketch of what I mean is below), but is there anything
else I should try, such as switching the actual transform we use?
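
For context, this is roughly what I had in mind for the bulk options, going
through the BigtableOptions configurator on the writer. The exact setter
names and the numbers are just my assumptions from skimming the
bigtable-client-core docs, so please correct me if this isn't the right knob:

import org.apache.beam.sdk.io.gcp.bigtable.BigtableIO;
import com.google.cloud.bigtable.config.BulkOptions;

// Sketch only: try to batch more mutations per Bigtable write request.
// Project/instance/table IDs below are placeholders, and the specific
// BulkOptions setters/values are assumptions on my part, not verified.
BigtableIO.Write write =
    BigtableIO.write()
        .withProjectId("my-project")
        .withInstanceId("my-instance")
        .withTableId("my-table")
        .withBigtableOptionsConfigurator(
            builder ->
                builder.setBulkOptions(
                    new BulkOptions.Builder()
                        // more row keys per batched request (assumed setter)
                        .setBulkMaxRowKeyCount(500)
                        // larger max request size in bytes (assumed setter)
                        .setBulkMaxRequestSize(4L * 1024 * 1024)
                        .build()));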

Thanks,

Sahith
