Yes, just as Ismaël said, it's a compilation blocker right now, even though
(I believe) we don't use the extension that's breaking.
As for other ways to solve this: if there is a way to avoid compiling the
advanced features of AutoValue, that might be worth a try. We could also try
to get a release
The current issue is that compilation fails on master because Beam's
parent pom is configured to fail if it finds warnings:
-Werror
However, if you remove that line from the parent pom, the compilation passes.
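For context, the flag lives in the parent pom's compiler configuration; a minimal sketch of the relevant fragment (the exact element placement in Beam's pom may differ, this is illustrative):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <compilerArgs>
      <!-- Promotes all javac warnings to errors; removing this line
           lets the build proceed past the AutoValue warning. -->
      <arg>-Werror</arg>
    </compilerArgs>
  </configuration>
</plugin>
```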
Of course, this does not mean that everything is solved for Java 9;
there are some
AFAIK we don't use any advanced capabilities of AutoValue. Does that mean
this issue is moot? I didn't quite understand from your email whether it is
a compilation blocker for Beam or not.
On Tue, Sep 26, 2017 at 2:32 PM Ismaël Mejía wrote:
Great that you are also working on this too, Daniel, and thanks for
bringing this subject to the mailing list; I was waiting until my return
to the office next week, but you did it first :)
Eugene, for reference (this is the issue on the migration to Java 9),
notice that here the goal is first that Beam
So I've been working on JDK 9 support for Beam, and I have a bug in
AutoValue that can be fixed by updating our AutoValue dependency to the
latest. The problem is that AutoValue 1.5 and later seems to be banned for
Beam due to requiring a Java 8 compiler. However, it should still be possible
to compile
This is the same as the issue with Create. Inferring a coder based on the
class of values is fragile, because coders are invariant.
PCollection<V> input = ...
Key k = ...;
PCollection<KV<Object, V>> pc = input.apply(WithKeys.of((Object) subclassOfKey));
// a PCollection with a Coder<Object> for the keys
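The invariance point can be sketched outside Beam with a toy coder registry (all names below are illustrative stand-ins, not Beam's real API): inference keyed on a value's *runtime* class misses a coder registered for the *declared* type, which is why inferring coders from the class of values is fragile.

```java
import java.util.HashMap;
import java.util.Map;

public class CoderInvariance {
    // Toy stand-ins for illustration only; these are NOT Beam's real classes.
    static class Key {}
    static class SubKey extends Key {}

    // A registry mapping a class to the name of its registered "coder".
    static final Map<Class<?>, String> REGISTRY = new HashMap<>();
    static {
        REGISTRY.put(Key.class, "KeyCoder"); // only the base type has a coder
    }

    // Inference by runtime class: this is the fragile step.
    static String inferCoder(Object value) {
        return REGISTRY.get(value.getClass());
    }

    public static void main(String[] args) {
        Key k = new SubKey(); // declared type Key, runtime class SubKey
        // Lookup uses the runtime class (SubKey); since coders are invariant,
        // KeyCoder is not assumed to handle SubKey, so inference finds nothing.
        System.out.println(inferCoder(k));          // prints: null
        System.out.println(inferCoder(new Key()));  // prints: KeyCoder
    }
}
```

Casting the value to its declared supertype (as in the snippet above) changes only the static type, not `getClass()`, so an inference scheme has to work from declared types to be reliable.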
It is usually better to create a single pipeline, since you will get better
load balancing of work across your different tables, and I would expect that
the pipeline would finish sooner vs. waiting for all the pipelines to finish.
Also, different runners will be able to support different pipeline
When I glanced before, this was due to having to create many separate load
jobs - one for each partition. I'm not sure if there's anything Beam can do
here. I believe there may be some upcoming features in BigQuery that make
this better.
Reuven
On Tue, Sep 26, 2017 at 6:57 AM, Chaim Turkel
What do you mean by Beam partitions?
On Tue, Sep 26, 2017, 6:57 AM Chaim Turkel wrote:
> By the way, currently the performance on BigQuery partitions is very bad.
> Is there a repository where I can test with 2.2.0?
>
> chaim
>
> On Tue, Sep 26, 2017 at 4:52 PM, Reuven Lax
By the way, currently the performance on BigQuery partitions is very bad.
Is there a repository where I can test with 2.2.0?
chaim
On Tue, Sep 26, 2017 at 4:52 PM, Reuven Lax wrote:
> Do you mean BigQuery partitions? Yes, however 2.1.0 has a bug if the table
>
Hi,
I am transforming multiple tables (about 20) from Mongo to BigQuery;
currently I have one pipeline for each table. Each table is a
collection. Is there a limitation on how many collections I can have?
Would it be better to create multiple pipelines?
chaim
No, I meant Beam partitions.
On Tue, Sep 26, 2017 at 4:52 PM, Reuven Lax wrote:
> Do you mean BigQuery partitions? Yes, however 2.1.0 has a bug if the table
> containing the partitions is not pre-created (fixed in 2.2.0).
>
> On Tue, Sep 26, 2017 at 6:40 AM, Chaim Turkel
Do you mean BigQuery partitions? Yes, however 2.1.0 has a bug if the table
containing the partitions is not pre-created (fixed in 2.2.0).
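For reference, BigQuery itself addresses a single day's partition of a partitioned table with a table decorator suffix (this is BigQuery syntax, independent of Beam; the project, dataset, and table names here are made up):

```
my_project:my_dataset.my_table$20170926
```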
On Tue, Sep 26, 2017 at 6:40 AM, Chaim Turkel wrote:
> Hi,
>
> Does BigQueryIO support partitions when writing? Will it improve my
>
Hi,
Does BigQueryIO support partitions when writing? Will it improve my
performance?
chaim