Hello Community,
Please share information on the query below.
Rgds,
Kamal
From: Kamal Mittal via user
Sent: Monday, March 11, 2024 1:18 PM
To: user@flink.apache.org
Subject: Flink performance
Hello,
Can you please point me to any available documentation where Flink's
performance numbers w.r.t. certain use cases are discussed or documented?
Rgds,
Kamal
it is feasible in this case?
From: Alexander Fedulov
Sent: 01 November 2023 01:54 AM
To: Kamal Mittal
Cc: user@flink.apache.org
Subject: Re: Flink custom parallel data source
Flink natively supports a pull-based model for sources, where the source
operators request data from the external system
in separate threads, e.g. executor.submit(() -> serverSocket.accept());
So the client socket will be accepted and handed to a separate thread to read
data from the TCP stream.
Rgds,
Kamal
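The accept-and-dispatch pattern described above can be sketched with plain JDK classes, outside any Flink API. All names here are illustrative, not from the attached POC; in a real ParallelSourceFunction the worker would forward each line to the SourceContext instead of printing it:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AcceptLoop {
    // Accepts client connections on the calling thread and hands each
    // accepted socket to a worker thread from the pool for reading.
    public static void serve(ServerSocket serverSocket, ExecutorService pool, int maxClients)
            throws IOException {
        for (int i = 0; i < maxClients; i++) {
            Socket client = serverSocket.accept(); // blocks until a client connects
            pool.submit(() -> {
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(client.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        // In a real source this would go to the SourceContext.
                        System.out.println("received: " + line);
                    }
                } catch (IOException e) {
                    // A production source would log and close the socket here.
                }
            });
        }
    }
}
```

Note the caveat from the reply above still applies: with multiple reader threads sharing one collector, data order is not preserved.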
From: Alexander Fedulov
Sent: 31 October 2023 04:03 PM
To: Kamal Mittal
Cc: user@flink.apache.org
Subject: Re: Flink custom paral
Hello Community,
I need to have a custom parallel data source (Flink ParallelSourceFunction)
for fetching data based on some custom logic. In this source function, I am
opening multiple threads via a Java thread pool to distribute work further.
These threads share Flink provided 'SourceContext' and
Hello,
Can you please explain why Flink is not able to handle the exception and keeps
creating files continuously without closing them?
Rgds,
Kamal
From: Kamal Mittal via user
Sent: 21 September 2023 07:58 AM
To: Feng Jin
Cc: user@flink.apache.org
Subject: RE: About Flink parquet format
Yes.
Due to the error below, the Flink bulk writer never closes the part file and
keeps creating new part files continuously. Is Flink not handling exceptions
like the one below?
From: Feng Jin
Sent: 20 September 2023 05:54 PM
To: Kamal Mittal
Cc: user@flink.apache.org
Subject: Re: About Flink parquet format
Hi Kamal
What exception did you encounter? I have tested it locally and it works fine.
Best,
Feng
On Mon, Sep 18, 2023 at 11:04 AM Kamal Mittal
<kamal.mit...@ericsson.com> wrote:
Hello,
Checkpo
Mittal
Cc: user@flink.apache.org
Subject: Re: About Flink parquet format
Hi Kamal
Check whether checkpointing for the job is enabled and triggered correctly. By
default, Parquet file writes roll a new file on each checkpoint.
Best,
Feng
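A hedged sketch of what "enabled and triggered correctly" means in configuration terms; the interval value is purely illustrative:

```yaml
# flink-conf.yaml — the Parquet bulk writer finalizes ("rolls") part files
# only on checkpoints, so checkpointing must be enabled (interval illustrative)
execution.checkpointing.interval: 60s
execution.checkpointing.mode: EXACTLY_ONCE
```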
On Thu, Sep 14, 2023 at 7:27 PM Kamal Mittal via user
Hello,
Tried Parquet file creation with the file sink bulk writer.
If the Parquet page size is configured as low as 1 byte (an allowed
configuration), then Flink keeps creating multiple 'in-progress' state files
with content only 'PAR1', and never closes the files.
I want to know what is the reason
writes to mysql.
If the job fails over without checkpointing, the tasks will consume Kafka from
the earliest offset again.
I think it is best to enable checkpointing so that the restarted job resumes
from the position where it stopped reading.
Best,
Hang
Kamal Mittal via user <user@flink.apache.org>
Hello,
If checkpointing is NOT enabled and a restart strategy is configured, then
Flink retries the whole job execution; i.e., is enabling checkpointing a must
for retries or not?
Rgds,
Kamal
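For what it's worth, a restart strategy does not require checkpointing, but without checkpoints a restarted job cannot resume from the last read position, which matches Hang's reply above. An illustrative flink-conf.yaml sketch (values are examples, not recommendations):

```yaml
# flink-conf.yaml — a restart strategy alone restarts the whole job from
# scratch; with checkpointing enabled, the restart resumes from the last
# completed checkpoint (all values illustrative)
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 5
restart-strategy.fixed-delay.delay: 5s
execution.checkpointing.interval: 60s
```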
Hello Community,
Please help me to find out inputs for below query.
Rgds,
Kamal
From: Kamal Mittal via user
Sent: 18 August 2023 08:04 AM
To: user@flink.apache.org
Subject: RE: Flink AVRO to Parquet writer - Row group size/Page size
Hello Community,
Please share views for below.
Rgds,
Kamal
From: Kamal Mittal
Sent: 17 August 2023 08:01 AM
To: Kamal Mittal ; user@flink.apache.org
Subject: RE: Flink AVRO to Parquet writer - Row group size/Page size
Hello Community,
Please share views for below.
Rgds,
Kamal
From: Kamal Mittal via user
Sent: 16 August 2023 04:35 PM
To: user@flink.apache.org
Subject: Flink AVRO to Parquet writer - Row group size/Page size
Hello,
For Parquet, the default row group size is 128 MB and the default page size is
1 MB, but the Flink bulk writer using the file sink creates files based on the
checkpointing interval only.
So is there any significance to the configured row group size and page size
for the Flink Parquet bulk writer? How does Flink use these
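For context, row group and page size are settings of the underlying Parquet writer itself, passed through the Hadoop Configuration the writer is built with; they control in-memory buffering and layout within each part file, even when the checkpoint interval decides when a part file is closed. The key names below are from parquet-hadoop; the values shown are the defaults:

```properties
# Hadoop Configuration keys read by the Parquet writer
# row group size in bytes (default 128 MB)
parquet.block.size=134217728
# page size in bytes (default 1 MB)
parquet.page.size=1048576
```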
Hello,
Is it possible to create global/shared objects, e.g. statics, which are shared
among slots in a task manager?
Is it OK to create such objects in Flink?
Rgds,
Kamal
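Since all slots of one task manager run in the same JVM, a static object is visible to every task in that task manager (but not across task managers), and it must be thread-safe because multiple task threads access it concurrently. A minimal sketch of the usual holder pattern (all names illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class SharedLookup {
    // One instance per JVM, i.e. per task manager; all slots/tasks in that
    // task manager share this map. ConcurrentMap keeps access thread-safe.
    private static final ConcurrentMap<String, String> CACHE = new ConcurrentHashMap<>();

    // Returns the cached value, loading it at most once per key per JVM.
    public static String getOrLoad(String key) {
        return CACHE.computeIfAbsent(key, k -> expensiveLoad(k));
    }

    // Placeholder for an expensive lookup (e.g. a remote call).
    private static String expensiveLoad(String key) {
        return "value-for-" + key;
    }
}
```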
Thanks for the info.
Attached is the POC code for reference. Class
ServerCustomSocketStreamFunction.java is the custom server socket source and
class FlinkClientApp.java is the main program.
From: liu ron
Sent: 05 August 2023 02:28 PM
To: Kamal Mittal
Cc: Paul Lam ; user@flink.apache.org
Subject: Re: Flink
Hello,
Can you please share views for below?
Rgds,
Kamal
From: Kamal Mittal
Sent: 04 August 2023 11:19 AM
To: Kamal Mittal ; Paul Lam
Cc: user@flink.apache.org
Subject: RE: Flink operator task opens threads internally
Hello,
I hope you have the context of use case clear now as described
Hello,
How does Flink behave in case one task manager POD fails out of a set of task
manager PODs, say in a K8s environment?
In my case, the job remains in a failed state even after configuring a restart
strategy with a fixed delay (5 sec) and number of attempts (5), with the error
"Could not acquire minimum
application?
Rgds,
Kamal
From: Kamal Mittal via user
Sent: 03 August 2023 09:27 AM
To: Paul Lam
Cc: user@flink.apache.org
Subject: RE: Flink operator task opens threads internally
Hello,
We have a client sending TCP traffic towards Flink application and to support
that there is server socket
Hello Shammon,
As it is said, one split enumerator per source means multiple sub-tasks of
that source (if parallelism > 1) will use the same split enumerator instance,
right?
Rgds,
Kamal
From: Shammon FY
Sent: 03 August 2023 10:54 AM
To: Kamal Mittal
Cc: user@flink.apache.org
Subject: Re: Fl
Hello Shammon,
Please have a look for below and share views.
Rgds,
Kamal
From: Kamal Mittal via user
Sent: 02 August 2023 08:02 AM
To: Shammon FY ; user@flink.apache.org
Subject: RE: Flink netty connector for TCP source
Thanks Shammon.
Purpose of opening server socket in Split Enumerator
in a flink task slot.
Rgds,
Kamal
From: Paul Lam
Sent: 03 August 2023 08:59 AM
To: Kamal Mittal
Cc: user@flink.apache.org
Subject: Re: Flink operator task opens threads internally
Hi Kamal,
It’s okay if you don’t mind the data order.
But it’s not very commonly seen to accept client sockets from
Hello Community,
Please share views for the below mail.
Rgds,
Kamal
From: Kamal Mittal via user
Sent: 02 August 2023 08:19 AM
To: user@flink.apache.org
Subject: Flink operator task opens threads internally
Hello Community,
I have an operator pipeline like the one below; is it OK if the "source" task
opens threads using a Java thread pool and parallelizes the work?
This is needed for accepting multiple client socket connections in a "single
custom source server socket function".
Single Custom source server
,
Kamal
From: Shammon FY
Sent: 02 August 2023 07:48 AM
To: Kamal Mittal ; user@flink.apache.org
Subject: Re: Flink netty connector for TCP source
Hi Kamal,
It confuses me a little: what is the purpose of opening a server socket in the
SplitEnumerator? Currently there will be only one SplitEnumerator
Hello Community,
Need info on the below:
1. How many task managers can a job manager handle? Is there any upper limit?
2. How to decide the number of task managers; is there any guideline?
3. What is the difference between a high number of task managers vs a high
number of task slots (with a low no.
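Not an authoritative answer, but the trade-off in question 3 is usually expressed through configuration like the following (values illustrative): slots in one task manager share a single JVM and its memory, while separate task managers give stronger isolation at higher per-process overhead.

```yaml
# flink-conf.yaml — sizing sketch (values illustrative): many slots per
# task manager share one JVM (cheaper, less isolation); more task managers
# with fewer slots each isolate failures better but cost more per process
taskmanager.numberOfTaskSlots: 4
taskmanager.memory.process.size: 4096m
```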
is applicable for the source
operator and not for the split enumerator?
Please correct me if the above understanding is not correct.
Rgds,
Kamal
From: Hang Ruan
Sent: 01 August 2023 08:55 AM
To: Kamal Mittal
Cc: liu ron ; user@flink.apache.org
Subject: Re: Flink netty connector for TCP source
Hi, Kamal
Best,
Shammon FY
On Thu, Jul 27, 2023 at 10:53 AM Kamal Mittal
<kamal.mit...@ericsson.com> wrote:
Hello Shammon,
Yes socket text stream I am aware of but was thinking if something like as
‘https://github.com/apache/bahir-flink/tree/master/flink-connector-netty’
Hello,
Can you please share views about below mail?
Rgds,
Kamal
From: Kamal Mittal via user
Sent: 28 July 2023 07:59 AM
To: Martijn Visser
Cc: user@flink.apache.org
Subject: RE: Custom TCP server socket source
Hello Martijn,
I followed the same link and created the Enumerator with
socket connections (Socket objects) as (unbounded) splits in the respective
source readers (task managers).
Please share your views.
Rgds,
Kamal
From: Martijn Visser
Sent: 27 July 2023 06:19 PM
To: Kamal Mittal
Cc: user@flink.apache.org
Subject: Re: Custom TCP server
Hello,
I need to write a "custom server socket source" which accepts client
connections over a port.
1. How to scale it across task managers with parallelism <= no. of task
managers and with the same single port?
2. This is needed w.r.t. the Kubernetes POD deployment model where each POD is
Hello Shammon,
Yes, the socket text stream I am aware of, but was thinking whether something
like ‘https://github.com/apache/bahir-flink/tree/master/flink-connector-netty’
is also supported by Flink?
Rgds,
Kamal
From: Shammon FY
Sent: 27 July 2023 08:15 AM
To: Kamal Mittal
Cc: user@flink.apache.org
Hello,
Does Flink provide a netty connector for a custom TCP source?
Please share any documentation details.
Rgds,
Kamal
Hello Community,
Please share views on the below mail, or let me know if any more details are
needed.
Rgds,
Kamal
From: Kamal Mittal
Sent: 24 July 2023 07:55 AM
To: Kamal Mittal ; user@flink.apache.org
Subject: RE: TCP server socket with Kubernetes Cluster
Hello Community,
Please share views for below mail.
Rgds,
Kamal
From: Kamal Mittal via user
Sent: 21 July 2023 02:02 PM
To: user@flink.apache.org
Subject: TCP server socket with Kubernetes Cluster
Hello,
Created a TCP server socket single source function, and it is opened on a
single POD (taskmanager) of a Kubernetes cluster out of a set of PODs
(taskmanager) by Flink. Is there any way to know on which POD (taskmanager) it
is opened? Does Flink give any such information?
This is needed
:41 AM
To: Kamal Mittal
Cc: user@flink.apache.org
Subject: Re: About cluster.evenly-spread-out-slots
Hi Kamal,
Even if `cluster.evenly-spread-out-slots` is set to true, Flink will not
guarantee that multiple tasks of the same operator are never
executed/scheduled on the same task manager; it just means
Hello,
If the property "cluster.evenly-spread-out-slots" is set to TRUE, does Flink
guarantee that multiple tasks of the same operator are never
executed/scheduled on the same task manager? Certainly this will depend on the
parallelism value used for an operator and the number of task slots available.
Like in below
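For reference, the option itself is a scheduler preference set in flink-conf.yaml; as the reply above notes, it spreads slot allocations across task managers but gives no hard placement guarantee:

```yaml
# flink-conf.yaml — prefer (not guarantee) spreading slot allocations
# evenly across all registered task managers
cluster.evenly-spread-out-slots: true
```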
Hello Community,
Please share views for below mail.
Rgds,
Kamal
From: Kamal Mittal via user
Sent: 14 July 2023 12:55 PM
To: user@flink.apache.org
Subject: TCP Socket stream scalability
Hello,
Can a TCP socket stream be scaled across task managers similarly to the file
enumerator and source reader below?
A job is submitted with the TCP socket source function, and a socket will bind
on a port once, by one task manager. Is it possible to open the socket at the
job manager and then scale/divide
Hello Community,
I have a requirement to read data coming over TCP socket stream and for the
same written one custom source function reading data by TCP socket.
Job is running successfully but in flink dashboard checkpoint overview,
checkpointed data size is 0.
Can you please help if there is
Hello Community,
I have a requirement to read data coming over TCP socket stream and for the
same written one custom source function reading data by TCP socket.
Job is running successfully but while trying to take a savepoint, error comes
that savepoint cannot be taken.
Is there any
good to stress this out.
Best,
Jan
On 6/29/23 12:53, Kamal Mittal via user wrote:
Hello Community,
I have created a TCP stream custom source and am reading data from it.
But this TCP connection needs to be secured, i.e. SSL-based; the query is how
to configure/provide certificates via Flink for a client-server secured TCP
connection?
Rgds,
Kamal
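As far as I know, Flink does not distribute certificates for user-defined socket sources; the TLS setup lives inside the source function itself, using standard JSSE classes. A minimal sketch, where the keystore path, keystore type, and password are placeholder assumptions (the keystore would typically be shipped to the task managers alongside the job):

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLServerSocket;

public class TlsServerSocketFactory {
    // Builds an SSLContext from a server keystore; a custom source would
    // call this in open() and then accept clients from the returned socket.
    public static SSLServerSocket create(String keystorePath, char[] password, int port)
            throws Exception {
        KeyStore ks = KeyStore.getInstance("PKCS12"); // keystore type assumed
        try (FileInputStream in = new FileInputStream(keystorePath)) {
            ks.load(in, password);
        }
        KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(ks, password);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), null, null);
        return (SSLServerSocket) ctx.getServerSocketFactory().createServerSocket(port);
    }
}
```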
June 2023 08:19 PM
To: user@flink.apache.org
Cc: Shammon FY ; Kamal Mittal
Subject: Re: Flink bulk and record file source format metrices
Hi Kamal,
In a similar situation, when a decoding failure happened I would generate a
special record that I could then detect/filter out (and increment
Hello,
Any way-forward, please suggest.
Rgds,
Kamal
From: Kamal Mittal via user
Sent: 15 June 2023 10:39 AM
To: Shammon FY
Cc: user@flink.apache.org
Subject: RE: Flink bulk and record file source format metrices
Hello,
I need one counter metric for the number of corrupt records found while
decoding, in the way the "SourceReaderBase" class maintains a counter
for the number of records emitted.
Rgds,
Kamal
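Independent of Flink's metric classes, the counting pattern itself is just a counter incremented when decoding fails while the bad record is filtered out, much like the special-record approach suggested above. A plain-Java sketch where the decoder function and record types are hypothetical; in a real source the AtomicLong would be replaced by a Counter registered on the reader's metric group:

```java
import java.util.List;
import java.util.Optional;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Function;
import java.util.stream.Collectors;

public class CountingDecoder<I, O> {
    // Exposed as a metric in a real source; a plain AtomicLong here.
    public final AtomicLong corruptRecords = new AtomicLong();
    private final Function<I, Optional<O>> decode; // empty = corrupt record

    public CountingDecoder(Function<I, Optional<O>> decode) {
        this.decode = decode;
    }

    // Decodes a batch, counting and dropping records that fail to decode.
    public List<O> decodeAll(List<I> raw) {
        return raw.stream()
                .map(decode)
                .filter(o -> {
                    if (o.isEmpty()) {
                        corruptRecords.incrementAndGet();
                        return false; // drop the corrupt record
                    }
                    return true;
                })
                .map(Optional::get)
                .collect(Collectors.toList());
    }
}
```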
From: Shammon FY
Sent: 14 June 2023 05:33 PM
To: Kamal Mittal
Cc: user@flink.apache.org
Subject: Re: Flink bulk and record file source format metrices
Hi Kamal,
Can you give more information about
Hello,
Flink File source API as below used for reading AVRO - Parquet records. If any
record comes "null or is made null in between of a split in AVROParquetReader >
read() method" due to "bad record" situation then whole file split is
stopped/discarded.
Can you please guide about this that
Hello,
Using Flink record stream format file source API as below for parquet records
reading.
FileSource.FileSourceBuilder source =
FileSource.forRecordStreamFormat(streamformat, path);
source.monitorContinuously(Duration.ofMillis(1));
Want to log/generate metrics for corrupt records and
: Kamal Mittal via user
Sent: 07 June 2023 05:48 PM
To: Martijn Visser
Cc: Kamal Mittal via user
Subject: RE: Parquet decoding exception - Flink 1.16.x
Hello,
The metrics link given in the mail below doesn't give any way to create
metrics for a source function, right?
I am using below Flink API to read
” and create metrics for corrupt records?
FileSource.FileSourceBuilder source =
FileSource.forRecordStreamFormat(streamformat, path); //streamformat is of type
- AvroParquetRecordFormat
Please suggest.
Rgds,
Kamal
From: Martijn Visser
Sent: 07 June 2023 03:39 PM
To: Kamal Mittal
Cc: Kamal Mittal
such documentation.
Rgds,
Kamal
From: Martijn Visser
Sent: 07 June 2023 12:31 PM
To: Kamal Mittal
Cc: user@flink.apache.org
Subject: Re: Raise alarm for corrupt records
Hi Kamal,
No, but it should be straightforward to create metrics or events for these
types of situations and integrate them with your own
Hello Community,
Is there any way Flink provides out of the box to raise an alarm for corrupt
records (e.g. due to decoding failure) in the middle of a running data
pipeline, and to send this alarm outside of the task manager process?
Rgds,
Kamal
my application if any.
>
> Rgds,
> Kamal
>
> On Wed, May 24, 2023 at 11:41 AM Kamal Mittal wrote:
>
>> Thanks Shammon for clarification.
>>
>> On Wed, May 24, 2023 at 11:01 AM Shammon FY wrote:
>>
>>> Hi Kamal,
>>>
>>> The source will slow down when there is backpressure in the flink job,
>>> you can refer to docs [1] and [2] to get more details.
https://nightlies.apache.org/flink/flink-docs-master/docs/ops/monitoring/back_pressure/
>
> On Tue, May 23, 2023 at 9:40 PM Kamal Mittal wrote:
>
>> Hello Community,
>>
>> Can you please share views about the query asked above w.r.t back
>> pressure for FileSource APIs
Hello Community,
Can you please share views about the query asked above w.r.t back pressure
for FileSource APIs for Bulk and Record stream formats.
Planning to use these APIs w.r.t AVRO to Parquet and vice-versa conversion.
Rgds,
Kamal
On Tue, 23 May 2023, 12:26 pm Kamal Mittal, wrote
Added Flink community DL as well.
-- Forwarded message -
From: Kamal Mittal
Date: Tue, May 23, 2023 at 7:57 AM
Subject: Re: Backpressure handling in FileSource APIs - Flink 1.16
To: Shammon FY
Hello,
Yes, want to take some custom actions and also if there is any default
Hello Community,
Can you please share views about the query asked above w.r.t back pressure
for FileSource APIs for Bulk and Record stream formats.
Planning to use these APIs w.r.t AVRO to Parquet and vice-versa conversion.
Rgds,
Kamal
On Thu, May 18, 2023 at 2:33 PM Kamal Mittal wrote
Hello Community,
Do the FileSource APIs for Bulk and Record stream formats handle back
pressure in any way, such as slowing down sending data further into the
pipeline or reading data from the source?
Or do they give any callback/handle so that some action can be taken? Can
you please share details if