I have one Flink job which reads files from S3 and processes them.
Currently it is running on Flink 1.9.0. I need to upgrade my cluster to
1.13.5, so I have made the changes in my job's pom and brought up the Flink
cluster using the 1.13.5 distribution.
When I submit my application, I get the below error
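For context, the upgrade itself usually amounts to bumping the Flink version in the job's pom.xml; a sketch, assuming a typical Maven setup (the property name and artifact shown are illustrative, not taken from the actual job):

```xml
<properties>
  <!-- bump from 1.9.0 to the target cluster version -->
  <flink.version>1.13.5</flink.version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.12</artifactId>
    <version>${flink.version}</version>
    <scope>provided</scope>
  </dependency>
</dependencies>
```

Note that since Flink 1.10 the S3 filesystems are loaded as plugins from the cluster's plugins/ directory rather than from lib/, which is a common source of submission errors when coming from 1.9.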
Currently, the flink-kubernetes-operator uses Flink's native K8s
integration [1], which means the Flink ResourceManager will dynamically
allocate TaskManagers on demand.
So users do not need to specify the number of TaskManager replicas.
Just like Gyula said, one possible solution to make "kubectl scale"
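To illustrate, a minimal FlinkDeployment resource for the operator looks roughly like the sketch below (names and values are made up); note the absence of a TaskManager replica count, since with native K8s integration the TaskManagers are derived from the job's parallelism:

```yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: example-job
spec:
  image: flink:1.15
  flinkVersion: v1_15
  jobManager:
    resource:
      memory: "1024m"
      cpu: 1
  taskManager:
    # no replicas field: TaskManagers are allocated on demand
    resource:
      memory: "1024m"
      cpu: 1
  job:
    jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
    parallelism: 2
```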
Most of the open source communities I know have set up their own Slack
channels, such as Apache Iceberg [1], Apache Druid [2], etc.
So I think Slack is worth trying.
David is right: there are some cases that need back-and-forth
communication, where Slack will be more effective.
But back
Hi Jay!
Interesting question/proposal to add the scale subresource.
I am not an expert in this area, but we will look into it a little and
give you some feedback, and see if we can incorporate something into the
upcoming release if it makes sense.
On a high level there is not a single replicas v
Hi Team,
I have been experimenting with the Flink Kubernetes operator. One of the
biggest gaps for us is that it does not currently support the scale
subresource needed for reactive scaling. Without that, it becomes
commercially very difficult for products like ours that have very varied loads for every
I've always used Scala in the context of Flink. Now that Flink 1.15 has
become Scala-free, I wonder what the best (most practical) route is for me
moving forward. These are my options:
1. Keep using Scala 2.12 for the years to come (and upgrade to newer
versions when the community has come up with
I have mixed feelings about this.
I have been rather visible on Stack Overflow, and as a result I get a lot
of DMs asking for help. I enjoy helping, but I want to do it on a platform
where the responses can be searched and shared.
It is currently the case that good questions on Stack Overflow frequ
Thanks for the proposal, Xintong.
While I share the same concerns as those mentioned in the previous
discussion thread, admittedly there are benefits to having a Slack channel
as a supplementary way to discuss Flink. The fact that this topic is raised
once in a while indicates lasting interest.
Per
Hi everyone,
While I see Slack having a major downside (the results are not indexed by
external search engines, and you can't link directly to Slack content unless
you've signed up), I do think that the open source space has progressed and
that Slack is considered something that's invaluable to use
Hi Xintong,
I'm not sure if Slack is the right tool for the job. IMO it works great as
an ad hoc tool for discussion between developers, but it's not searchable
and it's not persistent. Between devs it works fine, as long as the result
of the ad hoc discussions is backported to JIRA/mailing list/d
Hi,
I would disagree:
In the case of Spark, it is a streaming application that offers full
streaming semantics (but at lower cost and with higher latency), as it
triggers less often. In particular, windowing and stateful semantics, as
well as late-arriving data, are handled automatically using the r
Hi Georg,
Flink batch applications run until all their input is processed. When
that's the case, the application finishes. You can read more about this in
the documentation for the DataStream [1] or Table API [2]. I think this
matches what Spark explains in its documentation.
Best regards
Hi Joost,
I'm looping in Leonard and Jark who might be able to help out here.
Best regards,
Martijn
On Mon, 2 May 2022 at 16:01, Joost Molenaar wrote:
> Hello all,
>
> I'm trying to use Flink-SQL to monitor a Kafka topic that's populated by
> Debezium, which is in turn monitoring a MS-SQL CDC
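For reference, reading a Debezium-populated Kafka topic in Flink SQL is typically done with the kafka connector and the debezium-json format; the sketch below uses a made-up topic and schema, not Joost's actual setup:

```sql
-- Hypothetical table over a Debezium CDC topic
CREATE TABLE customers (
  id INT,
  name STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'server1.dbo.customers',              -- illustrative topic name
  'properties.bootstrap.servers' = 'kafka:9092',  -- illustrative broker
  'format' = 'debezium-json'
);
```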
Hi everyone,
I would like to send out a final reminder. We have already received some
great submissions for Flink Forward San Francisco 2022. Nevertheless, we
decided to extend the deadline by another week to give people a second
chance to work on their abstracts and presentation ideas.
This
Hi Timo,
Please find the example:
@DataTypeHint(value = "RAW", bridgedTo = JsonObject.class)
public Object eval(String jsonString) {
    // Logic to parse
}
Thanks and Regards,
Surendra Lalwani
On Fri, May 6, 2022 at 3:13 PM Timo Walther wrote:
> Can you show the full example? It looks like t
Can you show the full example? It looks like there is still a JSONObject
without a @DataTypeHint next to it.
On 06.05.22 at 11:18, Surendra Lalwani wrote:
Hi Timo,
I tried this but am still getting an error:
Caused by: org.apache.flink.table.api.ValidationException: Could not
extract a data type from 'class net.minidev.json.JSONObject'. Interpreting
it as a structured type was also not successful.
at org.apache.flink.table.types.extraction.Extract
Hi Surendra,
in general we would like to encourage users to use the SQL type system
classes instead of RAW types. Otherwise they are simply black boxes in
the SQL engine. A STRING or ROW type might be more appropriate.
You can use
@DataTypeHint(value = "RAW") // defaults to Object.class
@D
Thank you~
Xintong Song
---------- Forwarded message ---------
From: Xintong Song
Date: Fri, May 6, 2022 at 5:07 PM
Subject: Re: [Discuss] Creating an Apache Flink slack workspace
To: private
Cc: Chesnay Schepler
Hi Chesnay,
Correct me if I'm wrong, but I don't find that this is *repeatedly* discu
Can the DataTypeHint with bridgedTo meet your requirements?
For example:
'
public @DataTypeHint(
        value = "RAW",
        bridgedTo = JSONObject.class,
        rawSerializer = JSONObjectSerializer.class) JSONObject eval(String str) {
    return JSONObject.parse(str);
}
'
You may need to provide a class
Hi Team,
I am using Flink 1.13.6. I have created a UDF and I want to return a
JSONObject from that UDF, or basically an Object, but it doesn't seem to
work, as there is no DataTypeHint compatible with Object. In earlier Flink
versions, when DataTypeHint wasn't there, it used to work. Any help would b