Hi Niklas,
Regarding `Exception in thread "grpc-nio-worker-ELG-3-2"
java.lang.NoClassDefFoundError:
org/apache/beam/vendor/grpc/v1p26p0/io/netty/buffer/PoolArena$1`,
it does not affect the correctness of the result. The reason is that some
resources are released asynchronously when the gRPC server is
Essentially, we started with this behavior and kept it so as not to break
existing Flink setups.
You can disable the filtering of label values with the
filterLabelValueCharacters setting.
https://ci.apache.org/projects/flink/flink-docs-release-1.11/monitoring/metrics.html#prometheus-orgapacheflinkm
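For example, in flink-conf.yaml (assuming the reporter is registered under
the name "prom" as in the docs; use whatever name your setup uses):

metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
metrics.reporter.prom.filterLabelValueCharacters: false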
Hello,
I'm trying to expose the version string as a Prometheus user variable:

runtimeContext.getMetricGroup()
    .addGroup("version", "0.0.3-beta.399+gf5b79ac")
    .gauge("buildInfo", () -> 0);

The problem is that, for some reason, the label value 0.0.3-beta.399+gf5b79ac is
converted into 0_0_3_beta_
Hi Timo,
Thanks for your information. I saw that Flink SQL can actually do a full
outer join with interval-join semantics in the test code. However, this is
not explicitly shown in the Flink SQL documentation, which made me think
it might not be available for me to use. Maybe the doc could be updated.
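For reference, the kind of query I have in mind looks roughly like this,
going by the test code; the Orders/Shipments tables, their columns, and the
4-hour bound are made-up placeholders, not something from the docs:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class IntervalFullOuterJoinSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
            EnvironmentSettings.newInstance().inStreamingMode().build());

        // DDL for Orders(order_id, order_time) and Shipments(order_id, ship_time)
        // with event-time attributes is assumed to have been executed already.
        Table joined = tEnv.sqlQuery(
            "SELECT o.order_id, s.ship_time "
            + "FROM Orders o FULL OUTER JOIN Shipments s "
            + "ON o.order_id = s.order_id "
            + "AND o.order_time BETWEEN s.ship_time - INTERVAL '4' HOUR AND s.ship_time");
        joined.execute().print();
    }
}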
Hi Patrick,
at the moment it is not possible to disconnect side outputs from other
streaming operators. I guess what you would like to have is an operator
which consumes on a best effort basis but which can also lose some data
while it is being restarted. This is currently not supported by Flink.
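To make the coupling concrete, here is a minimal sketch (all names are
illustrative) of how side outputs are wired: the side output is emitted by
the same operator as the main stream, so it shares that operator's
pipelined data exchanges and cannot currently be detached from them.

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class SideOutputSketch {
    // anonymous subclass so the tag keeps its generic type information
    static final OutputTag<String> SIDE = new OutputTag<String>("side-output") {};

    public static DataStream<String> wire(DataStream<String> input) {
        SingleOutputStreamOperator<String> main =
            input.process(new ProcessFunction<String, String>() {
                @Override
                public void processElement(String value, Context ctx, Collector<String> out) {
                    out.collect(value);      // main stream
                    ctx.output(SIDE, value); // side output of the same operator
                }
            });
        // Obtained from the same operator, hence the shared pipelined connection.
        return main.getSideOutput(SIDE);
    }
}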
Thanks to Danny and Dawid for the quick reply.
Dawid, I found your fix at
https://github.com/apache/flink/pull/14085/commits/bc9caab71f51024d1b48c6ee1a3f79777624b6bb#diff-6cc72acf0893bbaadc5b610afbbbae23227971cf5c7d0743dd4b997236baf771R450.
Appreciate the fix. But we may not move to Flink 1.12 in the near future.
Hi,
Just wanted to comment on:
> How to map the nullable types to union(null, something)? In our schema
> definition, we follow the Avro recommended definition, listing 'null' as
> the first type.
I've also spotted that problem and it will be fixed in 1.12 in
https://issues.apache.org/jira/browse/FLINK-2
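For reference, the difference is just the position of "null" in the union:
the Avro-recommended form for a nullable field looks like
{"name": "note", "type": ["null", "string"], "default": null} (field name
made up), since a null default must match the first branch of the union,
while the derived schema puts the non-null type first, i.e. ["string", "null"].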
SQL does not support that now, but I think your request is reasonable.
AFAIK, SQL hints may be a way to configure such a per-operator thing.
I would file an issue first to see if we have a solution for the mid-term.
Kevin Kwon wrote on Wed, Nov 25, 2020 at 5:06 PM:
> I just want the source and sink operator com
For your question 1: this does not work as expected. I will check soon
to see whether it is a bug and file a fix.
Hongjian Peng wrote on Wed, Nov 25, 2020 at 4:45 PM:
> In Flink 1.10, we can pass this schema with 'format.avro-schema' property
> to SQL DDL, but in Flink 1.11, the Avro schema is always derived from the
> table schema.
Hi Hongjian Peng,
For your question 1, it does not work as expected. If that is true, there
is definitely a bug; I will check and fix it later.
For your question 2, yes, this is an intended design. There is a routine in
the type inference: all the fields of a nullable struct type should also be
nullable.
Hi Tomasz,
The fromCollection() API records the number of elements emitted [1] in its
snapshot state and restores that count [2] to know how many elements to
skip; that is to say, not all elements would be read again.
But frankly speaking, once fromCollection is completed, the source task
would finish.
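A small sketch of the setup in question (nothing here is from your job; as
far as I know, the collection is serialized into the job graph when the
program is submitted, so changing the underlying collection afterwards has
no effect on a restored job):

import java.util.Arrays;
import java.util.List;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FromCollectionSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(10_000L); // snapshots the "elements emitted" counter

        // The list is captured (serialized) when the job graph is built,
        // so mutating it after submission does not affect a restored job.
        List<Integer> elements = Arrays.asList(1, 2, 3, 4, 5);
        env.fromCollection(elements).print();

        env.execute("from-collection-sketch");
    }
}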
Hi,
What is the behaviour of fromCollection() when restarting from a savepoint?
Will the elements of the collection get fed into the resulting stream again or
not?
What if the contents of the underlying collection change?
Thanks,
Tomasz
Hi Till,
Thanks for your reply.
Is there any option to disconnect the side outputs from the pipelined data
exchanges of the main stream?
The benefit of side outputs is very high regarding performance and
usability, plus it fits the use case here very nicely. Though this
pipelined connection to
Thank you all for the clarification; now things are much clearer. I hope
this discussion can help other people who have the same doubts.
Best,
Flavio
On Wed, Nov 25, 2020 at 10:27 AM Aljoscha Krettek wrote:
> One bigger problem here is that the code base is very inconsistent
One bigger problem here is that the code base is very inconsistent when
it comes to the @Public/@Internal annotations. Initially, only the
packages that were meant to be "public" had them. For example, flink-core
has thorough annotations. Packages that were not meant to have any
user-facing code
In Flink 1.10, we can pass this schema with the 'format.avro-schema' property
to SQL DDL, but in Flink 1.11, the Avro schema is always derived from the
table schema.
We have two questions about the Flink 1.11 Avro format:
1. Flink 1.11 maps nullable types to Avro union(something, null). How to map
the nullable types to union(null, something)? In our schema definition, we
follow the Avro recommended definition, listing 'null' as the first type.
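To make question 1 concrete, here is roughly how our 1.11 DDL looks
(connector properties and column names are placeholders for our real ones);
with no 'format.avro-schema' option, the Avro schema can only come from the
column list:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DerivedAvroSchemaSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
            EnvironmentSettings.newInstance().inStreamingMode().build());

        // Flink 1.11 style: the Avro schema is derived from the columns below;
        // the nullable STRING column ends up as union(string, null).
        tEnv.executeSql(
            "CREATE TABLE event ("
            + "  id BIGINT,"
            + "  note STRING"
            + ") WITH ("
            + "  'connector' = 'kafka',"
            + "  'topic' = 'event',"
            + "  'properties.bootstrap.servers' = 'localhost:9092',"
            + "  'properties.group.id' = 'test',"
            + "  'scan.startup.mode' = 'earliest-offset',"
            + "  'format' = 'avro'"
            + ")");
    }
}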
Hi Roman and Robert,
Thank you.
I have checked the code and the checkpoint-deletion failure case. Yes,
Flink deletes the meta file and the operator state files first, then
deletes the checkpoint dir, which by then is truly an empty dir. The root
cause of the failure to delete the checkpoint is the hadoop de
Hi Flink Community,
We are trying to upgrade our Flink SQL job from 1.10 to 1.11. We use a
Kafka source table, and the data is stored in Kafka in Avro format.
The schema is like this:

{
  "type": "record",
  "name": "event",
  "namespace": "busseniss.event",
  "fields": [
    {
      "name": "heade