I downgraded google_cloud_bigdataoss from 2.1.5 back to 2.1.3, which was
recently upgraded [1], and that fixed the issue. It looks like 2.1.5 was
transitively pulling in protobuf 3.13.0, which isn't compatible with Java
8 (?!??).
[1]
yeah, I built it via:
JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64 ./gradlew --no-daemon
-Ppublishing -PnoSigning publishMavenJavaPublicationToMavenLocal
Java 8 is also my default JDK.
On Fri, Nov 6, 2020 at 6:25 PM Kyle Weaver wrote:
Do you have JAVA_HOME set? (possibly related:
https://issues.apache.org/jira/browse/BEAM-11080)
On Fri, Nov 6, 2020 at 3:13 PM Steve Niemitz wrote:
I'm trying out 2.25 (built from source, using java 8), and running into
this error, both on the direct runner and dataflow:
Caused by: java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
    at com.google.protobuf.NioByteString.copyToInternal(NioByteString.java:112)
    at
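[Editor's note, not from the thread: the root cause of this NoSuchMethodError is that Java 9 added covariant overrides to the buffer classes, so ByteBuffer.position(int) returns ByteBuffer instead of Buffer. Compiling with a newer JDK without --release 8 records the Java 9+ descriptor in the bytecode, which then fails at runtime on Java 8. A minimal sketch of the common workaround, casting to Buffer so the Java 8-compatible descriptor is emitted:]

```java
import java.nio.Buffer;
import java.nio.ByteBuffer;

public class ByteBufferCompat {
    // Casting to Buffer makes javac emit the descriptor
    // Buffer.position(I)Ljava/nio/Buffer; (which exists on Java 8)
    // instead of the Java 9+ covariant
    // ByteBuffer.position(I)Ljava/nio/ByteBuffer; descriptor.
    static ByteBuffer positionCompat(ByteBuffer buf, int newPosition) {
        ((Buffer) buf).position(newPosition);
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        positionCompat(buf, 4);
        System.out.println(buf.position()); // prints 4
    }
}
```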
Thanks for the references, Rui. I think it is worth considering how other
open source systems do it.
The great thing about this is that we could 'easily' map Piotr's work
for Bigtable to HBase too once it is done.
On Fri, Nov 6, 2020 at 8:22 PM Rui Wang wrote:
The report is useful for awareness; the issue is that we cannot
systematically update these dependencies, which diminishes the report's
value.
I don't know if we can eventually filter some things out of the report, or,
better, create a section for 'sensitive' dependencies that we cannot
update.
Thank you, Jeff, for the quick reply. Does this mean that if a stateful job
using the old coder wants to start using timer family id, it needs to
discard all of its state before it can switch to the V2 coder to get timer
family id support?
Best,
Ke
> On Nov 6, 2020, at 11:27 AM, Jeff
Ke - You are correct that generally data encoded with a previous coder
version cannot be read with an updated coder. The formats have to match
exactly.
As far as I'm aware, it's necessary to flush a job and start with fresh
state in order to upgrade coders.
On Fri, Nov 6, 2020 at 2:13 PM Ke Wu
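[Editor's note: a toy sketch of Jeff's point above, using a made-up wire layout rather than Beam's actual TimerData format. When a new field is encoded between existing fields, the V2 decoder misreads V1 bytes, so the formats are mutually unreadable:]

```java
import java.io.*;

public class CoderSketch {
    // Hypothetical V1 layout: [timestamp: long][timerId: utf8]
    static byte[] encodeV1(long ts, String timerId) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeLong(ts);
        out.writeUTF(timerId);
        return bos.toByteArray();
    }

    // Hypothetical V2 layout: [timestamp: long][familyId: utf8][timerId: utf8]
    // The new field sits between the old ones, so on V1 bytes the V2
    // decoder consumes the timerId as if it were a familyId.
    static String[] decodeV2(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        long ts = in.readLong();
        String familyId = in.readUTF(); // actually reads V1's timerId
        String timerId = in.readUTF();  // nothing left: throws EOFException
        return new String[] {Long.toString(ts), familyId, timerId};
    }

    public static void main(String[] args) throws IOException {
        byte[] v1 = encodeV1(42L, "my-timer");
        try {
            decodeV2(v1);
        } catch (EOFException e) {
            System.out.println("V2 decoder failed on V1 bytes");
        }
    }
}
```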
Another two references are from how Flink and Spark uses HBase by SQL:
https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/connectors/hbase.html
https://stackoverflow.com/questions/39530938/sparksql-on-hbase-tables
-Rui
On Thu, Nov 5, 2020 at 9:46 AM Piotr Szuberski wrote:
Hello,
I found that TimerDataCoderV2 was created to include the timer family id and
output timestamp fields in TimerData. In addition, the new fields are encoded
between the old fields, which I suppose means the V2 coder cannot decode data
that was encoded by the V1 coder, and vice versa. My ask here is, how
Thank you, Alexey!
> On Nov 6, 2020, at 5:58 AM, Alexey Romanenko wrote:
Feel free to triage some reviews to me, Pablo :)
For larger questions about dividing up contributions and code architecture
questions (items 2 and 3) - I think dev@ threads or the slack channel would
be a good place to discuss anything that can't happen in a GitHub review.
It's good for such
Done, I added you to contributors list.
Welcome!
Please take a look at the Beam Contribution Guide if you haven't yet =)
https://beam.apache.org/contribute/
Alexey
> On 5 Nov 2020, at 20:10, Ke Wu wrote:
>
> Absolutely, my jira username is kw2542
>
> Thanks,
> Ke
>
>> On Nov 5, 2020, at 2:47 AM,