Thanks Jark for being our release manager and thanks to everyone who has
contributed.
Cheers,
Till
On Fri, Sep 13, 2019 at 4:12 PM jincheng sun wrote:
> Thanks for being the release manager and the great work Jark :)
> Also thanks to the community making this release possible!
>
> Best,
> Jincheng
Hi,
Given table X with an event time, A, B and C columns, is there a way to
pass a compound key, i.e. A and B as the primaryKey argument of
Table.createTemporalTableFunction? My attempts so far yield a runtime exception
where the String doesn't match a given regex.
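If the primaryKey parameter accepts only a single field expression (which would explain the regex failure), one possible workaround is to derive a single compound key column from A and B and register that one column as the key. A plain-Java sketch of the derivation idea (the delimiter and names here are illustrative, not Flink API):

```java
// Sketch of a compound-key workaround: derive a single key column from A and B,
// then pass that one column as the primary key. The delimiter avoids collisions
// such as ("ab", "c") versus ("a", "bc").
public class CompoundKeySketch {
    static String compoundKey(String a, String b) {
        return a + "|" + b;
    }

    public static void main(String[] args) {
        System.out.println(compoundKey("a", "bc"));  // a|bc
        System.out.println(compoundKey("ab", "c"));  // ab|c
    }
}
```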
Hi guys,
From the Flink docs:
*By default, if a custom partitioner is not specified for the Flink Kafka
Producer, the producer will use a FlinkFixedPartitioner that maps each
Flink Kafka Producer parallel subtask to a single Kafka partition (i.e.,
all records received by a sink subtask will end up in the same partition).*
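To my understanding, the fixed mapping described above amounts to partition = subtaskIndex % numPartitions. A plain-Java sketch of the idea (a conceptual sketch, not the real FlinkFixedPartitioner class):

```java
// Conceptual sketch of the fixed subtask-to-partition mapping: each sink
// subtask always writes to the same Kafka partition, chosen by subtask
// index modulo the number of partitions.
public class FixedPartitionSketch {
    static int partitionFor(int subtaskIndex, int numPartitions) {
        return subtaskIndex % numPartitions;
    }

    public static void main(String[] args) {
        // 4 sink subtasks writing into 2 Kafka partitions:
        for (int subtask = 0; subtask < 4; subtask++) {
            System.out.println("subtask " + subtask + " -> partition "
                    + partitionFor(subtask, 2));
        }
    }
}
```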
Given that the segfault happens in the JVM's ZIP stream code, I am curious
whether this is a bug in Flink or in the JVM core libs that happens to be
triggered now by newer versions of Flink.
I found this on StackOverflow, which looks like it could be related:
https://stackoverflow.com/questions/383261
Thanks for being the release manager and the great work Jark :)
Also thanks to the community making this release possible!
Best,
Jincheng
Jark Wu wrote on Fri, Sep 13, 2019 at 10:07 PM:
> Hi,
>
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.8.2, which is the second bugfix release for the Apache Flink 1.8 series.
Hi,
The Apache Flink community is very happy to announce the release of Apache
Flink 1.8.2, which is the second bugfix release for the Apache Flink 1.8
series.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming applications.
Hi Abhinav,
I think the problem is the following: Flink has been designed so that the
cluster's rest endpoint does not need to run in the same process as the
JobManager. However, currently the rest endpoint is started in the same
process as the JobManagers. Because of the design one needs to annou
Hi Marek,
could you share the logs statements which happened before the SIGSEGV with
us? They might be helpful to understand what happened before. Moreover, it
would be helpful to get access to your custom serializer implementations.
I'm also pulling in Gordon, who worked on the TypeSerializerSnapshots.
Hi Komal,
could you check that every node can reach the other nodes? It looks a
little bit as if the TaskManager cannot talk to the JobManager running on
150.82.218.218:6123.
Cheers,
Till
On Thu, Sep 12, 2019 at 9:30 AM Komal Mariam wrote:
> I managed to fix it; however, I ran into another problem
Hi Harshith,
I'm not an expert of how to setup nginx with authentication for Flink but I
could shed some light on the redirection problem. I assume that Flink's
redirection response might not be properly understood by nginx. The good
news is that with Flink 1.8, we no longer rely on client-side redirects.
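For what it's worth, a minimal nginx sketch for fronting the Flink web UI with basic auth (assuming the UI listens on localhost:8081; the credential file path is a placeholder, and your setup may differ):

```nginx
location / {
    auth_basic "Flink";
    auth_basic_user_file /etc/nginx/.htpasswd;  # placeholder path
    proxy_pass http://localhost:8081;
    proxy_set_header Host $host;
}
```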
Hi,
A GROUP BY query on a streaming table requires that the result is
continuously updated.
Updates are propagated as a retraction stream (see
tEnv.toRetractStream(table, Row.class).print(); in your code).
A retraction stream encodes the type of the update as a boolean flag: the
"true" flag marks an added row and the "false" flag a retracted row.
Thanks for reporting back Catlyn!
On Thu, Sep 12, 2019 at 7:40 PM Catlyn Kong wrote:
> Turns out there was some other deserialization problem unrelated to this.
>
> On Mon, Sep 9, 2019 at 11:15 AM Catlyn Kong wrote:
>
>> Hi fellow streamers,
>>
>> I'm trying to support avro BYTES type in
I use Flink streaming SQL to write a demo of "group by". The records
are [(bj, 1), (bj, 3), (bj, 5)]. I group by the first element and sum the
second element.
Every time I run the program, the result is different. It seems that
the records are out of order, and sometimes records are even lost.
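One observation on the differing output: with parallelism greater than 1, the interleaving of intermediate add/retract records can vary between runs, but the final accumulated SUM is order-independent because addition is commutative. A plain-Java sketch (not Flink API):

```java
// Sketch: the final SUM over [(bj,1), (bj,3), (bj,5)] is 9 regardless of the
// order in which the records arrive, even though the intermediate results
// printed along the way can differ from run to run.
public class OrderIndependenceSketch {
    static int finalSum(int[] values) {
        int sum = 0;
        for (int v : values) {
            sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(finalSum(new int[]{1, 3, 5})); // 9
        System.out.println(finalSum(new int[]{5, 1, 3})); // 9
    }
}
```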