To: User-Flink
Subject: Apache Flink - Rest API for num of records in/out
Hi Folks:
I am trying to find out if I can get the number of records for an operator using
Flink's REST API. I've checked the docs at
https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/ops/rest_api/#jobs-jobid
From: M Singh
Sent: Tuesday, June 7, 2022 4:51 PM
To: User-Flink
Subject: Apache Flink - Rest API for num of records in/out
Hi Folks:
I am trying to find out if I can get the number of records for an operator using
Flink's REST API. I've checked the docs at
https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/ops/rest_api/.
I did see some APIs that use a vertex id, but could not find how to get that info
without
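For what it's worth, a minimal sketch of one way to read per-operator record counts from the job details endpoint (the REST address and job id are placeholders; the read-records/write-records fields are as I recall them from the docs, so verify against your Flink version):

import requests

FLINK_REST = "http://localhost:8081"  # placeholder JobManager / REST address
JOB_ID = "<your-job-id>"

# /jobs/:jobid returns one entry per job vertex (operator chain), including
# aggregated record counters for each vertex.
job = requests.get(f"{FLINK_REST}/jobs/{JOB_ID}").json()
for vertex in job["vertices"]:
    metrics = vertex.get("metrics", {})
    print(vertex["name"],
          "records in:", metrics.get("read-records"),
          "records out:", metrics.get("write-records"))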
>
> Best,
> Yun
>
> --
> Sender: Qihua Yang
> Date: 2022/01/07 02:55:09
> Recipient: user
> Subject: Flink rest api to start a job
>
> Hi,
>
> I found a weird behavior. We launched a k8s cluster without a job, but it
> includes the jar A. I use the Flink REST API to upload a dummy jar (actually it
> can be any jar). Flink will create a jar id. Then I use the REST API to start the
> job with jar A's entry-class. But the jar id is the dummy jar id. Flink will
> start the job from jar A. Anyone know why?
My understanding is flink
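For reference, a rough sketch of the upload-then-run sequence described above, against the documented /jars endpoints (the REST address, jar file, and entry class are placeholders; double-check the request body fields against your Flink version's docs):

import os
import requests

FLINK_REST = "http://localhost:8081"  # placeholder REST address

# 1) Upload a jar; Flink stores it and derives a jar id from the stored filename.
with open("dummy.jar", "rb") as f:
    upload = requests.post(f"{FLINK_REST}/jars/upload", files={"jarfile": f}).json()
jar_id = os.path.basename(upload["filename"])

# 2) Run a job from the uploaded jar, passing the entry class in the request body.
run = requests.post(
    f"{FLINK_REST}/jars/{jar_id}/run",
    json={"entryClass": "com.example.MyJob", "parallelism": 1},
).json()
print("submitted job:", run["jobid"])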
There is no recommended scrape interval because it is largely dependent
on your requirements.
For example, if you're fine with reacting to problems within an hour,
then a 5s scrape interval doesn't make sense.
The lower the interval, the more resources must of course be spent on
serving the
Thanks Matthias.
We are using Prometheus for fetching metrics. Is there any recommended
scrape interval?
Also is there any impact if lower scrape intervals are used?
Regards,
Ashutosh
On Fri, May 28, 2021 at 7:17 PM Matthias Pohl
wrote:
> Hi Ashutosh,
> you can set the metrics update
Hi Ashutosh,
you can set the metrics update interval
through metrics.fetcher.update-interval [1]. Unfortunately, there is no
single endpoint to collect all the metrics in a more efficient way other
than the metrics endpoints provided in [2].
I hope that helps.
Best,
Matthias
[1]
Hi team,
I have two queries as mentioned below:
*Query1:*
I am using PrometheusReporter to expose metrics to Prometheus Server.
What should be the minimum recommended scrape interval to be defined on
Prometheus server?
Is there any interval in which Flink reports metrics?
*Query2:*
Is there any
For the current vertex's node monitoring, there is one endpoint that returns all metric IDs, and one that returns metric values based on a comma-separated `get` parameter.
The problem is that when my collection script fetches the monitoring values, the GET request becomes too long, so I fetch the metrics five at a time. But that means every 30-second collection cycle issues hundreds of requests and takes tens of seconds.
Could a POST endpoint be added, or could the metrics endpoint that lists all metric IDs simply return all the values directly?
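As a stop-gap under the current API, a small sketch of batching the metric IDs into larger comma-separated `get` requests against the per-vertex metrics endpoint (address, job id, and vertex id are placeholders; the batch size is just an assumption bounded by your URL-length limit):

import requests

FLINK_REST = "http://localhost:8081"  # placeholder
JOB_ID = "<job-id>"
VERTEX_ID = "<vertex-id>"
BATCH = 50  # assumption: pick the largest batch your URL-length limit allows

base = f"{FLINK_REST}/jobs/{JOB_ID}/vertices/{VERTEX_ID}/metrics"

# The bare endpoint only lists the available metric ids ...
ids = [m["id"] for m in requests.get(base).json()]

# ... values come back when ids are passed as a comma-separated ?get= parameter.
values = {}
for i in range(0, len(ids), BATCH):
    chunk = ids[i:i + BATCH]
    for m in requests.get(base, params={"get": ",".join(chunk)}).json():
        values[m["id"]] = m.get("value")
print(len(values), "metrics collected")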
Hi White,
Can you describe your problem in more detail?
* What is your Flink version?
* How do you deploy the job (application / session cluster), (Kubernetes,
Docker, YARN, ...)
* What kind of job are you running (DataStream, Table/SQL, DataSet)?
Best, Fabian
On Mon., 20 July 2020 at 08:42
Hi,
When I use the REST API to cancel my job, 9 of the TMs are cancelled
quickly, but the other TM is always in cancelling status. Can someone show
me how I can solve this?
Thanks,
White
The Flink CLI applies the -C parameters to the JobGraph generated on the client side, and then submits that JobGraph to run.
When submitting via REST, setting the classpath for an individual job is indeed not supported at the moment. I think this is a reasonable requirement; you could file a JIRA for it.
The only workaround for now is to put it into the cluster configuration: use -C/--classpath when starting the session,
or -D pipeline.classpaths=xxx,yyy, so that all jobs will have them added to the classpath.
Best,
Yang
chenxuying wrote on Wednesday, June 24, 2020:
We are currently using Flink 1.10.0.
Background:
The REST API has an endpoint for submitting a job:
Endpoint: /jars/:jarid/run
Parameters: entryClass, programArgs, parallelism, jobId, allowNonRestoredState, savepointPath
If the job is submitted via the command line:
flink run -C file:///usr/local/soft/flink/my-function-0.1.jar -c
cn.xuying.flink.table.sql.ParserSqlJob
*From:* Chesnay Schepler
*Sent:* 11 May 2020 13:20
*To:* Tomasz Dudziak ; user@flink.apache.org
*Subject:* Re: Flink REST API side effect?
This is expected, the backing data structure is cached for a while so we
never hammer the JobManager with requests.
IIRC this is controlled via "web.refresh-interval", with the default
being 3 seconds.
On 11/05/2020 14:10, Tomasz Dudziak wrote:
Hi,
I found an interesting behaviour of the REST API in my automated system tests
using that API, where I was getting the status of a purposefully failing job.
If you query job details immediately after job submission, subsequent queries
will return its status as RUNNING for a moment until Flink's
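For illustration, a small polling sketch along those lines (address and job id are placeholders; the terminal state names and the 3-second web.refresh-interval default are taken from this thread and the REST docs, so treat them as assumptions for your version):

import time
import requests

FLINK_REST = "http://localhost:8081"  # placeholder
JOB_ID = "<job-id>"
TERMINAL = {"FINISHED", "FAILED", "CANCELED"}

# Poll the job state; per this thread, the data backing the endpoint is cached
# (web.refresh-interval, 3s by default), so a just-failed job may still report
# RUNNING for a few seconds right after submission.
while True:
    state = requests.get(f"{FLINK_REST}/jobs/{JOB_ID}").json()["state"]
    print("state:", state)
    if state in TERMINAL:
        break
    time.sleep(3)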
Sorry for the copy & paste error in my earlier message. I agree with
Robert.
On 2. Apr 2020, at 11:06, Robert Metzger wrote:
Good catch! Yes, you can add this to FLINK-16696.
On Wed, Apr 1, 2020 at 10:59 PM Aaron Langford
wrote:
All, it looks like the actual return structure from the API is:
1. Success
> {
>   "status": {
>     "id": "completed"
>   },
>   *"operation"*: {
>     "location": "string"
>   }
> }
2. Failure
> {
>   "status": {
>     "id": "completed"
>   },
>   *"operation"*: {
>     "failure-cause": {
>
Hey Aaron,
you can expect one of the two responses for COMPLETED savepoints [1, 2].
1. Success
{
  "status": {
    "id": "completed"
  },
  "savepoint": {
    "location": "string"
  }
}
2. Failure
{
  "status": {
    "id": "completed"
  },
  "savepoint": {
    "failure-cause": {
Roman,
Thanks for the info. That's super helpful. I'd be interested in picking
that ticket up.
One additional question: the states that can be returned from this API are only
described as 'COMPLETED' or 'IN_PROGRESS'. How are failures represented for
this endpoint?
Aaron
On Fri, Mar 20, 2020 at
Hey Aaron,
You can use /jobs/:jobid/savepoints/:triggerid to get the location when the
checkpoint is completed.
Please see
https://ci.apache.org/projects/flink/flink-docs-release-1.10/api/java/index.html?org/apache/flink/runtime/rest/handler/job/savepoints/SavepointHandlers.html
Meanwhile, I've
Hey Flink Community,
I'm combing through docs right now, and I don't see that a savepoint
location is returned or surfaced anywhere. When I do this in the CLI, I get
a nice message that tells me where in S3 it put my savepoint (unique
savepoint ID included). I'm looking for that same result to be
My bad, I was looking at the wrong code path. The linked issue isn't
helpful, as it only slightly extends the exception message.
You cannot get the stacktrace in 1.7.X nor in the current RC for 1.8.0.
I've filed https://issues.apache.org/jira/browse/FLINK-11902 to change this.
The 1.8.0 RC
Hey Chesnay,
Actually I was mistaken in stating that in the JobManager logs I got the
full stacktrace, because what I actually got there was the following:
2019-03-13 11:55:13,906 ERROR
org.apache.flink.runtime.webmonitor.handlers.JarRunHandler- Exception
occurred in REST handler:
Can you give me the stacktrace that is logged in the JobManager logs?
On 13.03.2019 10:57, Wouter Zorgdrager wrote:
Hi Chesnay,
Unfortunately this is not true when I run the Flink 1.7.2 docker images.
The response is still:
{
  "errors": [
    "org.apache.flink.client.program.ProgramInvocationException: The
    main method caused an error."
  ]
}
Regards,
Wouter Zorgdrager
On Wed, 13 Mar 2019 at 10:42
You should get the full stacktrace if you upgrade to 1.7.2.
On 13.03.2019 09:55, Wouter Zorgdrager wrote:
Hey all!
I'm looking for some advice on the following; I'm working on an abstraction
on top of Apache Flink to 'pipeline' Flink applications using Kafka. For
deployment this means that all these Flink jobs are embedded into one jar
and each job is started using a program argument (e.g. "--stage
rectory": "hdfs:///flinkDsl",
> "cancel-job": false
> }
>
> Let me know if that helps.
>
> Best,
> Gary
>
> On Mon, Nov 12, 2018 at 7:15 AM vino yang wrote:
Hi Henry,
Maybe Gary can help you, ping him for you.
Thanks, vino.
徐涛 wrote on Monday, November 12, 2018, at 12:45 PM:
Hi Experts,
I am trying to trigger a savepoint from the Flink REST API on version 1.6;
in the document it shows that I need to pass a JSON as a request body
{
"type" : "object",
Hi Vipul,
We are aware of YARN-2031. There are some ideas on how to work around it, which
are tracked here:
https://issues.apache.org/jira/browse/FLINK-9478
At the moment you have the following options:
1. Find out the master's address from ZooKeeper [1] and issue the HTTP
request against
Hello,
I have a question about the Flink 1.5/1.6 REST endpoints. I was trying to see
how the REST endpoints have changed with respect to cancelling with a savepoint; it
seems like now, to cancel with a savepoint, one needs to use POST
/jobs/:jobid/savepoints
> the command *“yarn application -list”*
>
> Thanks a lot.
>
> Regards,
> Raja.
>
> *From: *Gary Yao <g...@data-artisans.com>
> *Date: *Friday, February 9, 2018 at 9:25 AM
> *To: *Raja Aravapalli <raja.aravapa...@target.com>
Subject: Re: [EXTERNAL] Re: Flink REST API
Hi Raja,
Can you tell me the API call that you are trying to issue? If it is not a GET
request, it could be that you are suffering from this bug:
https://issues.apache.org/jira/browse/YARN-2031
In my case the tracking url shown on the resou
Thanks a lot again.
>
> Regards,
> Raja.
>
> *From: *Gary Yao <g...@data-artisans.com>
> *Date: *Friday, February 2, 2018 at 10:20 AM
> *To: *Raja Aravapalli <raja.aravapa...@target.com>
> *Cc: *"user@flink.apache.org"
From: Gary Yao <g...@data-artisans.com>
Date: Friday, February 2, 2018 at 10:20 AM
To: Raja Aravapalli <raja.aravapa...@target.com>
Cc: "user@flink.apache.org" <user@flink.apache.org>
Subject: [EXTERNAL] Re: Flink REST API
Hi Raja,
The registered tracking URL of the YARN application can be used to issue HTTP
requests against the REST API. You can retrieve the URL by using the YARN
client:
yarn application -list
In the output, the rightmost column shows the URL, e.g.,
Application-Id ...
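As an illustration, a small sketch that uses that tracking URL as the REST base to list jobs (a GET request, which avoids the YARN-2031 caveat mentioned above; the proxy URL is a placeholder):

import requests

# Tracking URL as printed by `yarn application -list` (placeholder).
TRACKING_URL = "http://<rm-host>:8088/proxy/application_XXXXXXXXXXXX_YYYY"

overview = requests.get(f"{TRACKING_URL}/jobs/overview").json()
for job in overview["jobs"]:
    print(job["jid"], job["name"], job["state"])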
Hi,
I have triggered a Flink YARN session on Hadoop YARN.
While I was able to trigger applications and run them, I wish to find the URL
of the REST API for the Flink YARN session I launched.
Can someone please help point out how to find the REST API URL for
Flink on YARN?
Thanks a
Sorry guys,
in the previous message, when I talked about the task managers' performance,
I meant *JobManager* performance.
Francisco Gonzalez wrote
Hi guys,
After investigating a bit more about this topic, we found a solution adding
a small change in the Flink-1.3.2 source code.
We found that the issue occurred when different threads tried to build the
Tuple2 object at the same time (because they use the
static
Hi,
Unfortunately, the FLIP-6 efforts are taking longer than expected and we won't
have those changes to the REST API in the 1.4 release (which should happen in
about a month). We are planning to very quickly release 1.5 after that, with
the changes to the REST API.
The only work-around I
Hello,
Going back to this thread, a quick question: will this be supported in the next Flink
version? If not, when is it expected to be included?
Regards
On 8 Aug 2017, at 15:46, Aljoscha Krettek
> wrote:
I quickly talked to Till about this. The new
I quickly talked to Till about this. The new JobManager, once FLIP-6 is
implemented, will have a new REST endpoint that allows submitting a JobGraph
directly. With this, we no longer have to execute the user main() method in the
WebRuntimeMonitor (which is a component that the current
Aha ok… Thanks for your answer Eron.
Regards
On 7 Aug 2017, at 19:04, Eron Wright
> wrote:
When you submit a program via the REST API, the main method executes inside
the JobManager process. Unfortunately, a static variable is used to
establish the execution environment that the program obtains from
`ExecutionEnvironment.getExecutionEnvironment()`. From the stack trace it
appears
Hi there!
We are doing some POCs submitting jobs remotely to Flink. We tried the Flink
CLI and now we're testing the REST API.
So the point is that when we try to execute a set of requests in an async way
(using CompletableFutures) only a couple of them run successfully. For the rest
we get