psql other services etc.
We would need to keep some sort of map of the flink dashboard and the
port number; not impossible, but a bit of an admin nightmare when
adding/removing dashboards.
I will think some more
On 01/07/2023 00:34, Alexander Fedulov wrote:
> 3 - Not possible, the dashboards
> sub_filter_last_modified off;
>
> sub_filter '' '';
>
> sub_filter 'Apache Flink Web Dashboard' 'flink:
> basic-ingress Dashboard';
>
> flinkConfiguration:
>   taskmanager.numberOfTaskSlots: "2"
> jobManager:
>   resource:
>     memory: "2048m"
>     cpu: 1
> taskManager:
>   resource:
>     memory: "2048m"
>     cpu: 1
> job:
>   jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
>   parallelism: 2
>   upgradeMode: stateless
From: Mike Phillips
Sent: Thursday, June 29, 2023 7:42 AM
To:
Good Morning Mike,
As a quick fix, sort of, you could use an Ingress on nginx-ingress (instead
of the port-forward) and add a sub_filter rule to patch the HTML response.
I use this to add a tag to the header and for the Flink-Dashboard I
experience no glitches.
As to point 3. … you don’t need
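A minimal sketch of that approach, assuming the ingress-nginx controller; the resource name, host, and Service name (flink-dashboard, flink.example.com, flink-rest) are hypothetical, and the sub_filter directives mirror the ones quoted above:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flink-dashboard                 # hypothetical name
  annotations:
    # Inject nginx sub_filter rules to rewrite the dashboard's HTML title.
    nginx.ingress.kubernetes.io/configuration-snippet: |
      sub_filter_last_modified off;
      sub_filter_once off;
      sub_filter 'Apache Flink Web Dashboard' 'my-cluster Dashboard';
spec:
  ingressClassName: nginx
  rules:
    - host: flink.example.com           # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: flink-rest        # hypothetical Service exposing the UI
                port:
                  number: 8081
```

With one Ingress per cluster, each dashboard gets its own host and title, avoiding the port-number map discussed above.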
e:
>
>> G'day all,
>>
>> Not sure if this is the correct place but...
>> We have a number of flink dashboards and it is difficult to know what
>> dashboard we are looking at.
>> Is there a configurable way to change the 'Apache Flink Dashboard' heading
>> on the dashboard?
>> Or some other way of uniquely identifying what dashboard I am currently
>> looking at?
>> Flink is running in k8s and we use kubectl port forwarding to connect to
Yes. Unless operator 2 is also back-pressured of course, then you should
take a look at the sink.
On 2/11/2021 4:50 AM, Marco Villalobos wrote:
given:
[source] -> [operator 1] -> [operator 2] -> [sink].
If within the dashboard, operator 1 shows that it has backpressure, does
that mean I need to improve the performance of operator 2 in order to
alleviate backpressure upon operator 1?
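The diagnosis rule in the reply above (the bottleneck is the first downstream stage that is not back-pressured) can be sketched as a hypothetical helper; this is not a Flink API:

```python
# In a chain source -> op1 -> op2 -> sink, a back-pressured operator is
# waiting on something downstream. Walk downstream from the first
# back-pressured stage until we find one that is NOT back-pressured:
# that stage is the bottleneck to tune.
def find_bottleneck(backpressured):
    """backpressured: dict of stage name -> bool, in pipeline order."""
    stages = list(backpressured)
    for i, stage in enumerate(stages):
        if backpressured[stage]:
            for downstream in stages[i + 1:]:
                if not backpressured[downstream]:
                    return downstream
    return None  # no backpressure anywhere

# Marco's case: operator 1 back-pressured, operator 2 not -> tune operator 2.
print(find_bottleneck({"source": True, "op1": True, "op2": False, "sink": False}))  # op2
# If operator 2 is also back-pressured -> look at the sink.
print(find_bottleneck({"source": True, "op1": True, "op2": True, "sink": False}))   # sink
```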
>>>> Have you specified a custom "-Xmx" parameter?
>>>>
>>>> Thank you~
>>>>
>>>> Xintong Song
>>>>
>>>>
>>>>
>>>> On Fri, Jun 12, 2020 at 7:50 AM Vijay Balakrishnan
>>>> wrote:
>>>> Hi,
>>>> Get this error:
>>>> java.io.IOException: Insufficient number of network buffers: required
>>>> 2, but only 0 available. The total number of network buffers is currently
>>>> set to 877118 of 32768 bytes each. You can increase this number by setting
>>>> the configuration keys 'taskmanager.network.memory.fraction',
>>>> 'taskmanager.network.memory.min', and 'taskmanager.network.memory.max'.
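Those keys go in flink-conf.yaml; a sketch with illustrative values (these options belong to the Flink 1.9-era network memory configuration discussed in this thread):

```yaml
# Fraction of TaskManager memory reserved for network buffers,
# clamped between the min and max bounds (values illustrative).
taskmanager.network.memory.fraction: 0.15
taskmanager.network.memory.min: 128mb
taskmanager.network.memory.max: 2gb
```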
> akka.pattern.AskTimeoutException: Ask timed out on
> [Actor[akka://flink/user/dispatcher#-1420732632]] after [1 ms]. Message
> of type [org.apache.flink.runtime.rpc.messages.LocalFencedMessage]. A
> typical reason for `AskTimeoutException` is that the recipient actor didn't
> send a reply.
>
> Followed docs here:
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/mem_setup.html
>
> network =
Please be aware that how many tasks a Flink job has, and how
>>>>>>>>>> many slots a Flink cluster has, are two different things.
>>>>>>>>>> - The number of tasks is decided by your job's parallelism and
>>>>>>>>>> topology. E.g., if your job graph has 3 vertices A, B and C, with
>>>>>>>>>> parallelism 2, 3, 4 respectively, then you would have 9 (2+3+4)
>>>>>>>>>> tasks in total.
>>>>>>>>>> - The number of slots is decided by the number of TMs and
>>>>>>>>>> slots-per-TM.
>>>>>>>>>> - For streaming jobs, you have to make sure the number of slots
>>>>>>>>>> is enough for executing all your tasks. The number of slots needed
>>>>>>>>>> for executing your job is by default the max parallelism of your
>>>>>>>>>> job graph vertices. Taking the above example, you would need 4
>>>>>>>>>> slots, because that is the max among all the vertices'
>>>>>>>>>> parallelisms (2, 3, 4).
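The arithmetic above can be sketched as follows (a hypothetical helper, not part of Flink):

```python
# Tasks vs. slots, per the explanation above: each job-graph vertex
# contributes `parallelism` tasks, while the slots needed (with Flink's
# default slot sharing) is the max parallelism across vertices.
def tasks_and_slots(vertex_parallelisms):
    total_tasks = sum(vertex_parallelisms)
    slots_needed = max(vertex_parallelisms)
    return total_tasks, slots_needed

# Vertices A, B, C with parallelism 2, 3, 4 -> 9 tasks, 4 slots.
print(tasks_and_slots([2, 3, 4]))  # (9, 4)
```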
>>>>>>>>>>
>>>>>>>>>> In your case, the screenshot shows that your job has 9621 tasks in
>>>>>>>>>> total (not around 18000; the dark box shows total tasks while the
>>>>>>>>>> green box shows running tasks), and 600 slots are in use (658 - 58),
>>>>>>>>>> suggesting that the max parallelism of your job graph vertices is 600.
>>>>>>>>>>
>>>>>>>>>> If you want to increase the number of tasks, you should increase
>>>>>>>>>> your job parallelism. There are several ways to do that.
>>>>>>>>>>
>>>>>>>>>> - In your job code (assuming you are using the DataStream API):
>>>>>>>>>>   - Use `StreamExecutionEnvironment#setParallelism()` to set
>>>>>>>>>>     parallelism for all operators.
>>>>>>>>>>   - Use `SingleOutputStreamOperator#setParallelism()` to set
>>>>>>>>>>     parallelism for a specific operator. (Only supported for
>>>>>>>>>>     subclasses of `SingleOutputStreamOperator`.)
>>>>>>>>>> - When submitting your job, use `-p ` as an argument for the
>>>>>>>>>>   `flink run` command, to set parallelism for all operators.
>>>>>>>>>> - Set `parallelism.default` in your `flink-conf.yaml` to set a
>>>>>>>>>>   default parallelism for your jobs. This will be used for jobs
>>>>>>>>>>   that have not set parallelism with any of the above methods.
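For the last option, the flink-conf.yaml entry would look like this (value illustrative):

```yaml
# Default parallelism for jobs that set none via code or `flink run -p`.
parallelism.default: 8
```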
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Thank you~
>>>>>>>>>>
>>>>>>>>>> Xintong Song
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Sat, May 23, 2020 at 1:11 AM Vijay Balakrishnan <
>>>>>>>>>> bvija...@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Xintong,
>>>>>>>>>>> Thx for your reply. Increasing network memory buffers
>>>>>>>>>>> (fraction, min, max) seems to increase tasks slightly.
>>>>>>>>>>>
>>>>>>>>>>> Streaming job
>>>>>>>>>>> Standalone
>>>>>>>>>>>
>>>>>>>>>>> Vijay
>>>>>>>>>>>
>>>>>>>>>>> On Fri, May 22, 2020 at 2:49 AM Xintong Song <
>>>>>>>>>>> tonysong...@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi Vijay,
>>>>>>>>>>>>
>>>>>>>>>>>> I don't think your problem is related to the number of open
>>>>>>>>>>>> files. The parallelism of your job is decided before Flink
>>>>>>>>>>>> actually tries to open the files. And if the OS limit for open
>>>>>>>>>>>> files is reached, you should see a job execution failure, instead
>>>>>>>>>>>> of a successful execution with a lower parallelism.
>>>>>>>>>>>>
>>>>>>>>>>>> Could you share some more information about your use case?
>>>>>>>>>>>>
>>>>>>>>>>>> - What kind of job are you executing? Is it a streaming or
>>>>>>>>>>>>   batch processing job?
>>>>>>>>>>>> - Which Flink deployment do you use? Standalone? Yarn?
>>>>>>>>>>>> - It would be helpful if you can share the Flink logs.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Thank you~
>>>>>>>>>>>>
>>>>>>>>>>>> Xintong Song
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, May 20, 2020 at 11:50 PM Vijay Balakrishnan <
>>>>>>>>>>>> bvija...@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>> I have increased the number of slots available but the Job is
>>>>>>>>>>>>> not using all the slots but runs into this approximate 18000
>>>>>>>>>>>>> Tasks limit.
>>>>>>>>>>>>> Looking into the source code, it seems to be opening file -
>>>>>>>>>>>>> https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/common/io/FileOutputFormat.java#L203
>>>>>>>>>>>>> So, do I have to tune the ulimit or something similar at the
>>>>>>>>>>>>> Ubuntu O/S level to increase the number of tasks available? What
>>>>>>>>>>>>> I am confused about is that the ulimit is per machine but the
>>>>>>>>>>>>> ExecutionGraph is across many machines? Please pardon my
>>>>>>>>>>>>> ignorance here. Does the number of tasks equate to the number of
>>>>>>>>>>>>> open files? I am using 15 slots per TaskManager on AWS m5.4xlarge
>>>>>>>>>>>>> which has 16 vCPUs.
>>>>>>>>>>>>>
>>>>>>>>>>>>> TIA.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Tue, May 19, 2020 at 3:22 PM Vijay Balakrishnan <
>>>>>>>>>>>>> bvija...@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The Flink Dashboard UI seems to show a hard limit for the
>>>>>>>>>>>>>> Tasks column around 18000 on an Ubuntu Linux box.
>>>>>>>>>>>>>> I kept increasing the number of slots per task manager to 15
>>>>>>>>>>>>>> and the number of slots increased to 705, but the tasks stayed
>>>>>>>>>>>>>> at around 18000. Below 18000 tasks, the Flink Job is able to
>>>>>>>>>>>>>> start up. Even though I increased the number of slots, it still
>>>>>>>>>>>>>> works when 312 slots are being used.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> taskmanager.numberOfTaskSlots: 15
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> What knob can I tune to increase the number of Tasks?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Pls find attached the Flink Dashboard UI.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> TIA,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
Hello All,
I'm just wondering, is there a way to hide the user configurations from
the flink dashboard as shown below?
Or else
Is there a way to pass the program arguments securely ?
Early response would be appreciated.
[image: image.png]
[image: image.png]
Thanks,
Vivekanand.
Hi,
The SQL client can be started with
> ./bin/sql-client.sh embedded
Best, Fabian
On Tue., Apr. 30, 2019 at 20:13, Rad Rad wrote:
> Thanks, Fabian.
>
> The problem was incorrect java path. Now, everything works fine.
>
> I would ask about the command for running sql-client.sh
>
> These
Thanks, Fabian.
The problem was incorrect java path. Now, everything works fine.
I would ask about the command for running sql-client.sh
These commands don't work
./sql-client.sh OR ./flink sql-client
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
Hi,
With Flink 1.5.0, we introduced a new distributed architecture (see release
announcement [1] and FLIP-6 [2]).
From what you describe, I cannot tell what is going wrong.
How do you submit your application?
Which action resulted in the error message you shared?
Btw. why do you go for Flink 1.
Hi,
I am using Flink 1.4.2 and I can't see my running jobs on the Flink web
dashboard.
I downloaded Flink 1.5 and 1.6, I received this message when I tried to send
my arguments like this
--topic sensor --bootstrap.servers localhost:9092 --zookeeper.connect
localhost:2181 --group.id test-consumer-g
Subject: Re: How to add jvm Options when using Flink dashboard?
You can't set JVM options when submitting through the Dashboard. This cannot be
implemented since no separate JVM is spun up when you submit a job that way.
On 05.09.2018 11:41, zpp wrote:
I wrote a task using Typesafe config. It must be pointed config file position
using jvm Options like "-Dconfig.resource=dev.conf".
How can I do that with Flink dashboard?
Thanks for the help!
You can set up a specific port using
https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#rest-port.
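For example, in flink-conf.yaml (port value illustrative):

```yaml
# Pin the web UI / REST endpoint to a fixed port.
rest.port: 8090
```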
On 03.09.2018 12:12, Mar_zieh wrote:
Hello
I added these dependencies to "pom.xml"; also, I added configuration to my
code like these:
Configuration config = new Configuration();
config.setBoolean(ConfigConstants.LOCAL_START_WEBSERVER, true);
StreamExecutionEnvironment env =
StreamExecutionEnvironment.createLocalEnvironment(getP,
What version of Flink are you running? Deployment method? Referenced
section of flink-conf.yaml?
On Wed, Mar 28, 2018 at 4:34 PM, Vinay Patil
wrote:
> Hi,
>
> I am not able to see more than 5 jobs on Flink Dashboard.
> I have set web.history to 50 in flink-conf.yaml file.
>
>
Hi,
I am not able to see more than 5 jobs on Flink Dashboard.
I have set web.history to 50 in flink-conf.yaml file.
Is there any other configuration I have to set to see more jobs on the
Flink Dashboard?
Regards,
Vinay Patil
Good catch! That should do it if you have access to the local storage
of the JobManager.
On Wed, Jul 20, 2016 at 5:25 PM, Aljoscha Krettek wrote:
> Hi,
> in the JobManager log there should be a line like this:
> 2016-07-20 17:19:00,552 INFO
> org.apache.flink.runtime.webmonitor.WebRuntimeMonitor
Hi,
in the JobManager log there should be a line like this:
2016-07-20 17:19:00,552 INFO
org.apache.flink.runtime.webmonitor.WebRuntimeMonitor - Using directory
/some/dir for web frontend JAR file uploads
If you manually delete the offending jar file from that directory, it
could solve your problem.
Hi Gary,
That is a bug. The main method might actually be there but it fails to
load a class:
> Caused by: java.lang.ClassNotFoundException:
> org.shaded.apache.flink.streaming.api.functions.source.SourceFunction
It looks like internal Flink classes have been shaded but not included
in the job jar.
Hi all,
I accidentally packaged a Flink Job for which the main method could not
be looked up. This breaks the Flink Dashboard's job submission page
(no jobs are displayed). I opened a ticket:
https://issues.apache.org/jira/browse/FLINK-4236
Is there a way to recover from this without restartin
Thanks a ton, Till.
That worked. Thank you so much.
-Biplob
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Can-t-access-Flink-Dashboard-at-8081-running-Flink-program-using-Eclipse-tp8016p8035.html
Sent from the Apache Flink User Mailing List archive. mailing list archive
at Nabble.com.
>> I am running my flink program using Eclipse and I can't access the
>> dashboard at http://localhost:8081, can someone help me with this?
>>
>> I read that I need to check my flink-conf.yaml, but it's a maven project
>> and I don't have a flink-conf.
>>
>> Any help would be really appreciated.
>>
>> Thanks a lot
>> Biplob
Hi Till,
thanks for the clarification. It all makes sense now.
So the keyBy call is more a partitioning scheme and less of an operator,
similar to Storm's field grouping, and Flink's other schemes such as forward
and broadcast. The difference is that it produces KeyedStreams, which are a
prere
Hi Leon,
yes, you're right. The plan visualization shows the actual tasks. Each task
can contain one or more (if chained) operators. A task is split into
sub-tasks which are executed on the TaskManagers. A TaskManager slot can
accommodate one subtask of each task (if the task has not been assigned
Ok, thanks a lot for the info guys!
On Thu, Oct 29, 2015 at 11:30 AM, Maximilian Michels wrote:
> Here's the jira issue for the cancel button:
> https://issues.apache.org/jira/browse/FLINK-2939
>
> On Thu, Oct 29, 2015 at 11:28 AM, Aljoscha Krettek
> wrote:
>
>> Hi
>> yes, a lot of people have
Here's the jira issue for the cancel button:
https://issues.apache.org/jira/browse/FLINK-2939
On Thu, Oct 29, 2015 at 11:28 AM, Aljoscha Krettek
wrote:
> Hi
> yes, a lot of people have complained about the missing cancel button
> already. :D (myself included)
>
> The number of retained jobs can
Hi
yes, a lot of people have complained about the missing cancel button already.
:D (myself included)
The number of retained jobs can be configured in conf/flink-conf.yaml by
setting the configuration key “jobmanager.web.history” to a different number.
Cheers,
Aljoscha
> On 29 Oct 2015, at 11
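As a sketch, in conf/flink-conf.yaml (value illustrative; this key belongs to the Flink 0.10-era configuration discussed here):

```yaml
# Number of finished jobs retained in the dashboard's history.
jobmanager.web.history: 20
```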
Yes, I was referring exactly to that :)
Thanks for the clarification Aljoscha.
Is it planned to improve the dashboard with some button to manage jobs
(cancel for example could be useful when running tests..)?
And where do I set the number of completed jobs to show in history?
On Thu, Oct 29, 2015
Hi,
are you referring to the “Job statistics/Accumulators” tab? This tab does not
display actual information but is a placeholder page that we forgot to remove.
It will be removed before the 0.10 release, there is currently a pull request
open to remove it.
Cheers,
Aljoscha
> On 29 Oct 2015, at
Hi to all,
I'm using Flink 0.10-SNAPSHOT and on my cluster I've tested the new
Dashboard (some days ago).
In the job info the parallelism was wrong (I see 2 but it's 36).
Does it happen only to me..?
Best,
Flavio