Re: Elasticsearch 8 - FLINK-26088

2024-06-20 Thread Tauseef Janvekar
Dear Team,

I see that the connector version referred to is elasticsearch-3.1.0, but I
am not sure where I can get sample code using this artifact or how to
download it.

Any help is appreciated.

Thanks,
Tauseef

On Tue, 18 Jun 2024 at 18:55, Tauseef Janvekar 
wrote:

> Dear Team,
> As per https://issues.apache.org/jira/browse/FLINK-26088, Elasticsearch 8
> support has already been added, but I do not see it in any documentation.
> Also, the last Flink version that supports any Elasticsearch connector is 1.17.x.
>
> Can I get the steps for integrating with Elasticsearch 8? Some sample
> code would also be appreciated.
>
> Thank you,
> Tauseef
>
>
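The connector artifacts are published to Maven Central separately from Flink itself. A hedged sketch of the Maven dependency, assuming the usual connector versioning convention where the connector release is suffixed with the Flink minor version it was built against (the exact `3.1.0-1.18` suffix is an assumption; verify the available version on Maven Central or the Flink downloads page):

```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-elasticsearch8</artifactId>
    <!-- version suffix pairs the connector release with a Flink release; confirm before use -->
    <version>3.1.0-1.18</version>
</dependency>
```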


Elasticsearch 8 - FLINK-26088

2024-06-18 Thread Tauseef Janvekar
Dear Team,
As per https://issues.apache.org/jira/browse/FLINK-26088, Elasticsearch 8
support has already been added, but I do not see it in any documentation.
Also, the last Flink version that supports any Elasticsearch connector is 1.17.x.

Can I get the steps for integrating with Elasticsearch 8? Some sample code
would also be appreciated.

Thank you,
Tauseef


Elasticsearch8 example

2024-04-16 Thread Tauseef Janvekar
Dear Team,

Can anyone please share an example for flink-connector-elasticsearch8?
I found this connector being added on GitHub, but no proper
documentation is present around it.

It would be of great help if sample code for the above connector were
provided.

Thanks,
Tauseef


Elasticsearch sink support

2024-03-18 Thread Tauseef Janvekar
Dear Team,

We are using flink 1.17.2 as of now and we use elasticsearch.

Any version greater than 1.17 does not support the Elasticsearch sink
connector, or for that matter any connector such as MongoDB, OpenSearch,
Cassandra, etc.

Is there any reason for this?

What should we do to upgrade to 1.18 or 1.19 when these connectors are not
available?

Is there any plan to release the connectors for 1.18 or 1.19?

Thanks,
Tauseef.


Elasticsearch Sink 1.17.2 error message

2024-01-25 Thread Tauseef Janvekar
Hi Team,

We get the below error message when we try to add an Elasticsearch sink:
Caused by:
org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException:
Could not forward element to next operator
at
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:92)
at
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:50)
at
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:29)
at
org.apache.flink.streaming.api.operators.StreamFilter.processElement(StreamFilter.java:39)
at
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:75)
... 23 more
Caused by:
org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException:
Could not forward element to next operator
at
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:92)
at
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:50)
at
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:29)
at
org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
at
com.hds.alta.pipeline.topology.TopologyJob.lambda$workflow$cde51820$1(TopologyJob.java:186)
at
org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:47)
at
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:75)
... 27 more
Caused by: java.lang.IllegalArgumentException: cannot write xcontent for
unknown value of type class com.hds.alta.pipeline.model.TopologyDTO.

The code written for this is:

workflow(filterItems(openTelSrc)).sinkTo(new Elasticsearch7SinkBuilder<TopologyDTO>()
        .setBulkFlushMaxActions(1)
        .setHosts(new HttpHost("elastic-host.com", 9200, "https"))
        .setConnectionPassword("password")
        .setConnectionUsername("elastic")
        .setEmitter((element, context, indexer) ->
                indexer.add(createIndexRequest(element)))
        .build())
        .name("topology_sink");


private static IndexRequest createIndexRequest(TopologyDTO data) {
    Map<String, Object> json = new HashMap<>();
    json.put("data", data);
    return Requests.indexRequest()
            .index("topology")
            .id(data.getUuid()) // here uuid is a String
            .source(json);
}

Any help would be greatly appreciated.

Thanks,
Tauseef
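The "cannot write xcontent for unknown value of type" error generally means the Elasticsearch client was handed an object it cannot serialize — here the raw TopologyDTO placed inside the source map. One common fix is to flatten the DTO into a map of simple values (strings, numbers, nested maps) before calling `.source(...)`. A minimal Flink-free sketch, with a hypothetical two-field TopologyDTO standing in for the real class:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for com.hds.alta.pipeline.model.TopologyDTO.
class TopologyDTO {
    private final String uuid;
    private final String name;
    TopologyDTO(String uuid, String name) { this.uuid = uuid; this.name = name; }
    String getUuid() { return uuid; }
    String getName() { return name; }
}

public class XContentFix {
    // The Elasticsearch client can serialize maps of simple values,
    // but not arbitrary POJOs; flattening the DTO to such a map
    // avoids the "cannot write xcontent" error.
    static Map<String, Object> toSourceMap(TopologyDTO data) {
        Map<String, Object> json = new HashMap<>();
        json.put("uuid", data.getUuid());
        json.put("name", data.getName());
        return json;
    }

    public static void main(String[] args) {
        Map<String, Object> src = toSourceMap(new TopologyDTO("abc-123", "edge"));
        System.out.println(src.get("uuid") + " " + src.get("name")); // prints abc-123 edge
    }
}
```

Alternatively, the DTO can be serialized to a JSON string first (e.g., with Jackson) and passed via `.source(jsonString, XContentType.JSON)`.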


Production deployment of Flink

2023-12-07 Thread Tauseef Janvekar
Hi All,

I am using Flink in my local setup and it works just fine; I installed it
following a Confluent example training course. There I had to manually run
start-cluster.sh and other steps to start the task managers.

We installed Flink on Kubernetes using the Bitnami Helm chart and it mostly
works, but we get error messages saying the TM and JM are not able to
communicate. Should we use something else to deploy Flink on Kubernetes?

Can someone please provide instructions on how to set up a production-ready
Flink deployment for version 1.18 on Kubernetes?

Thanks,
Tauseef
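One well-trodden path for production Flink on Kubernetes is the Apache Flink Kubernetes Operator, which manages the JobManager/TaskManager pods and their mutual discovery rather than relying on a generic Helm chart. A hedged sketch of a FlinkDeployment resource (the name, image tag, resource figures, and jar path below are placeholder assumptions modeled on the operator's basic example; check the operator docs for your version):

```yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: basic-example            # placeholder name
spec:
  image: flink:1.18              # placeholder image tag
  flinkVersion: v1_18
  serviceAccount: flink
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    # example jar shipped inside the official Flink images
    jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
    parallelism: 2
```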


Re: Cast Exception

2023-12-05 Thread Tauseef Janvekar
Dear Team,

After changing the code to the below, the error got resolved:

Map<String, Double> rules = alerts.entrySet().stream()
        .collect(Collectors.toMap(
                e -> (String) e.getKey(),
                e -> Double.parseDouble((String) e.getValue())));

Thanks,
Tauseef

On Tue, 5 Dec 2023 at 14:00, Tauseef Janvekar 
wrote:

> Dear Team,
>
> I am getting a cast exception in Flink.
> Caused by: org.apache.flink.client.program.ProgramInvocationException: The
> main method caused an error: class java.lang.String cannot be cast to class
> java.lang.Double (java.lang.String and java.lang.Double are in module
> java.base of loader 'bootstrap') at
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
>
> The code that I wrote is
>
> Properties alerts = new Properties();
>
> try (InputStream stream = OtelTransformerJob.class
> .getClassLoader().getResourceAsStream("rule-based-config.txt")) {
>
> alerts.load(stream);
>
> }
>
> Map<String, Double> rules = alerts.entrySet().stream()
>         .collect(Collectors.toMap(
>                 e -> (String) e.getKey(),
>                 e -> (Double) e.getValue()));
>
>
> Not sure what the problem is here.
>
> Thanks,
> Tauseef
>


Cast Exception

2023-12-05 Thread Tauseef Janvekar
Dear Team,

I am getting a cast exception in Flink.
Caused by: org.apache.flink.client.program.ProgramInvocationException: The
main method caused an error: class java.lang.String cannot be cast to class
java.lang.Double (java.lang.String and java.lang.Double are in module
java.base of loader 'bootstrap') at
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)

The code that I wrote is

Properties alerts = new Properties();

try (InputStream stream = OtelTransformerJob.class
.getClassLoader().getResourceAsStream("rule-based-config.txt")) {

alerts.load(stream);

}

Map<String, Double> rules = alerts.entrySet().stream()
        .collect(Collectors.toMap(
                e -> (String) e.getKey(),
                e -> (Double) e.getValue()));


Not sure what the problem is here.

Thanks,
Tauseef
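The root cause is that java.util.Properties stores every value as a String, so casting a value to Double fails at runtime; the value has to be parsed instead. A self-contained sketch of the working pattern (the in-memory config string stands in for rule-based-config.txt, and the rule names and numbers are made up):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;

public class RuleConfig {

    // Properties stores every value as a String, so parse to Double
    // instead of casting (the cast is what threw the ClassCastException).
    static Map<String, Double> loadRules(InputStream stream) throws IOException {
        Properties alerts = new Properties();
        alerts.load(stream);
        return alerts.entrySet().stream()
                .collect(Collectors.toMap(
                        e -> (String) e.getKey(),
                        e -> Double.parseDouble((String) e.getValue())));
    }

    public static void main(String[] args) throws IOException {
        // Stands in for rule-based-config.txt loaded from the classpath.
        String config = "cpu.threshold=0.8\nmem.threshold=0.9\n";
        Map<String, Double> rules = loadRules(new ByteArrayInputStream(config.getBytes()));
        System.out.println(rules.get("cpu.threshold")); // prints 0.8
    }
}
```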


Conditional multi collect in flink

2023-12-04 Thread Tauseef Janvekar
Dear Team,

I was wondering if there is a feature in Flink that supports conditional
multi collect.

Conditions:
1. A stream is being processed and converted to another stream/list
using flatMap.
2. At the place where collector.collect() is called, can we have
multiple other collectors as well?
3. Can we add conditional collectors? For example: if cond1, then
collector1.collect(); else if cond2, then collector1.collect() and
collector2.collect().

I already know that I can do this serially by feeding the same source
stream to a different flatMap and then converting and pushing via collector2.

Thanks,
Tauseef
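In Flink itself, side outputs are the idiomatic answer here: a ProcessFunction can emit to one or more extra OutputTags conditionally in a single pass, and each side stream is read back with getSideOutput(tag). The routing logic, stripped of the Flink types, can be sketched with plain Java consumers standing in for the collectors (the conditions and element type below are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ConditionalRouting {
    // One flatMap-style pass with several conditional "collectors":
    // even elements go only to collector1; large odd elements go to both.
    static void route(int element, Consumer<Integer> collector1, Consumer<Integer> collector2) {
        if (element % 2 == 0) {            // cond1
            collector1.accept(element);
        } else if (element > 10) {         // cond2
            collector1.accept(element);
            collector2.accept(element);
        }
        // elements matching neither condition are dropped
    }

    public static void main(String[] args) {
        List<Integer> out1 = new ArrayList<>();
        List<Integer> out2 = new ArrayList<>();
        for (int e : new int[] {2, 11, 7}) {
            route(e, out1::add, out2::add);
        }
        System.out.println(out1 + " " + out2); // prints [2, 11] [11]
    }
}
```

In real Flink code, collector1/collector2 would be `ctx.output(outputTag, element)` calls inside a ProcessFunction, and the main Collector handles the primary stream.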


Re: Getting a list of tasks for a running job

2023-11-28 Thread Tauseef Janvekar
Hi Yuxin,

Added flink user group

Thanks,
Tauseef

On Tue, 28 Nov 2023 at 11:38, Tauseef Janvekar 
wrote:

> Hi Yuxin,
> We have deployed it on kubernetes using helm chart -
> https://github.com/bitnami/charts/blob/main/bitnami/flink/values.yaml
> We have used ingress and enabled basic authentication -
> https://kubernetes.github.io/ingress-nginx/examples/auth/basic/
>
> No overrides were done on top of basic helm chart installation.
>
>
> The latest logs are below
> 2023-11-28 06:03:03,636 INFO
>  org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
> [] - Assigning splits to readers {0=[[Partition: aiops-2, StartingOffset:
> 3054, StoppingOffset: -9223372036854775808], [Partition: aiops-0,
> StartingOffset: 2944, StoppingOffset: -9223372036854775808], [Partition:
> aiops-3, StartingOffset: 3006, StoppingOffset: -9223372036854775808], [
> Partition: aiops-1, StartingOffset: 2945, StoppingOffset: -
> 9223372036854775808], [Partition: aiops-4, StartingOffset: 3181,
> StoppingOffset: -9223372036854775808], [Partition: aiops-5, StartingOffset:
> 3154, StoppingOffset: -9223372036854775808]]}
> 2023-11-28 06:03:21,526 INFO  org.apache.pekko.remote.transport.
> ProtocolStateActor [] - No response from remote for outbound
> association. Associate timed out after [2 ms].
> 2023-11-28 06:03:21,539 WARN  org.apache.pekko.remote.
> ReliableDeliverySupervisor   [] - Association with remote system
> [pekko.tcp://flink-metrics@flink-flink-server-taskmanager:35177] has
> failed, address is now gated for [50] ms. Reason: [Association failed
> with [pekko.tcp://flink-metrics@flink-flink-server-taskmanager:35177]]
> Caused by: [No response from remote for outbound association. Associate
> timed out after [2 ms].]
> 2023-11-28 06:03:21,559 WARN  org.apache.pekko.remote.transport.netty.
> NettyTransport   [] - Remote connection to [null] failed with
> org.jboss.netty.channel.ConnectTimeoutException: connection timed out:
> flink-flink-server-taskmanager/172.20.204.52:35177
> 2023-11-28 06:03:43,700 INFO  org.apache.pekko.remote.transport.
> ProtocolStateActor [] - No response from remote for outbound
> association. Associate timed out after [2 ms].
> 2023-11-28 06:03:43,701 WARN  org.apache.pekko.remote.
> ReliableDeliverySupervisor   [] - Association with remote system
> [pekko.tcp://flink-metrics@flink-flink-server-taskmanager:35177] has
> failed, address is now gated for [50] ms. Reason: [Association failed
> with [pekko.tcp://flink-metrics@flink-flink-server-taskmanager:35177]]
> Caused by: [No response from remote for outbound association. Associate
> timed out after [2 ms].]
> 2023-11-28 06:03:43,717 WARN  org.apache.pekko.remote.transport.netty.
> NettyTransport   [] - Remote connection to [null] failed with
> org.jboss.netty.channel.ConnectTimeoutException: connection timed out:
> flink-flink-server-taskmanager/172.20.204.52:35177
> Thanks,
> Tauseef
>
> On Tue, 28 Nov 2023 at 08:07, Yuxin Tan  wrote:
>
>> Hi, Tauseef,
>>
>> AFAIK, the most common way to get a list of tasks that a particular
>> job executes is through Flink's Web UI or REST API.
>>
>> Using the Flink Web UI:
>> When you run a Flink cluster, a Web UI is launched by default on port
>> 8081 of the JobManager. By accessing this Web UI through a browser,
>>  you can see a list of jobs, an overview of each job, and detailed
>> information about a specific job, including the tasks it executes.
>> Using the REST API:
>> For example, to get detailed information about a specific job, you
>> can call the following API: http://:8081/jobs/ [1]
>>
>> > facing the issue where job manager is not able to access task
>> manager but my jobs are completing with no issues.
>> This situation indeed seems peculiar. The limited information
>> provided makes it challenging to pinpoint the exact cause. I would
>> recommend examining the network state, reviewing the configurations,
>> and checking both the JobManager and TaskManager
>> logs for any anomalies or error messages
>>
>> [1]
>> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/ops/rest_api/
>>
>> Best,
>> Yuxin
>>
>>
>> Tauseef Janvekar  于2023年11月27日周一 19:32写道:
>>
>>> Dear Team,
>>>
>>> How do we get the list of tasks that a particular job executes?
>>> If I go to Task Manager then I do not see any tasks. I am also facing an
>>> issue where the job manager is not able to access the task manager, but my
>>> jobs are completing with no issues.
>>>
>>> Any help is appreciated.
>>>
>>> Thanks,
>>> Tauseef
>>>
>>


Re: Job Manager and Task Manager unable to communicate

2023-11-28 Thread Tauseef Janvekar
Adding flink user group

On Tue, 28 Nov 2023 at 13:39, Tauseef Janvekar 
wrote:

> Did you set some specific job manager or task manager deployment
> parameters ? - No
>
> Did you test without the basic ingress auth ? to be sure this is not
> related to that. - yes we did. And the problem persists.
>
> Please let me know if I can share anything else that might be useful.
>
>
> On Tue, 28 Nov 2023 at 12:52, Benoit Tailhades 
> wrote:
>
>> Did you set some specific job manager or task manager deployment
>> parameters ?
>>
>> Did you test without the basic ingress auth ? to be sure this is not
>> related to that.
>>
>> Le mar. 28 nov. 2023 à 06:58, Tauseef Janvekar 
>> a écrit :
>>
>>> Hi Benoit,
>>>
>>> Are your task manager and job manager on the same vm ?
>>>
>>> We have deployed it on kubernetes cluster with helm chart -
>>> https://github.com/bitnami/charts/blob/main/bitnami/flink/values.yaml
>>> So we cannot confirm if it is on the same vm/node.
>>> One more thing is that we have enabled authentication using basic
>>> ingress auth. -
>>> https://kubernetes.github.io/ingress-nginx/examples/auth/basic/
>>>
>>> How did you configure the Job manager address in the task manager conf
>>> file ?
>>> Did you modify the binding in configuration files ?
>>> It got auto configured using helm chart. We did not modify anything on
>>> top of basic helm chart installation.
>>>
>>> Thanks,
>>> Tauseef
>>>
>>> On Mon, 27 Nov 2023 at 19:29, Benoit Tailhades <
>>> benoit.tailha...@gmail.com> wrote:
>>>
>>>> Hello, Tauseef,
>>>>
>>>> Can you give more details ? Are your task manager and job manager on
>>>> the same vm ?
>>>>
>>>> How did you configure the Job manager address in the task manager conf
>>>> file ?
>>>> Did you modify the binding in configuration files ?
>>>>
>>>> Benoit
>>>>
>>>> Le lun. 27 nov. 2023 à 14:29, Tauseef Janvekar <
>>>> tauseefjanve...@gmail.com> a écrit :
>>>>
>>>>> Dear Team,
>>>>>
>>>>> We are getting below error messages in our logs.
>>>>> Any help on how to resolve would be greatly appreciated.
>>>>>
>>>>> 2023-11-27 08:14:29,712 INFO  org.apache.pekko.remote.transport.
>>>>> ProtocolStateActor [] - No response from remote for outbound
>>>>> association. Associate timed out after [2 ms].
>>>>> 2023-11-27 08:14:29,713 WARN  org.apache.pekko.remote.
>>>>> ReliableDeliverySupervisor   [] - Association with remote
>>>>> system [pekko.tcp://flink-metrics@flink-taskmanager:34309] has
>>>>> failed, address is now gated for [50] ms. Reason: [Association failed
>>>>> with [pekko.tcp://flink-metrics@flink-taskmanager:34309]] Caused by: [
>>>>> No response from remote for outbound association. Associate timed out
>>>>> after [2 ms].]
>>>>> 2023-11-27 08:14:29,730 WARN  org.apache.pekko.remote.transport.netty.
>>>>> NettyTransport   [] - Remote connection to [null] failed with
>>>>> org.jboss.netty.channel.ConnectTimeoutException: connection timed
>>>>> out: flink-taskmanager/172.20.237.127:34309
>>>>> 2023-11-27 08:14:58,401 INFO  org.apache.pekko.remote.transport.
>>>>> ProtocolStateActor [] - No response from remote for outbound
>>>>> association. Associate timed out after [2 ms].
>>>>> 2023-11-27 08:14:58,402 WARN  org.apache.pekko.remote.
>>>>> ReliableDeliverySupervisor   [] - Association with remote
>>>>> system [pekko.tcp://flink-metrics@flink-taskmanager:34309] has
>>>>> failed, address is now gated for [50] ms. Reason: [Association failed
>>>>> with [pekko.tcp://flink-metrics@flink-taskmanager:34309]] Caused by: [
>>>>> No response from remote for outbound association. Associate timed out
>>>>> after [2 ms].]
>>>>> 2023-11-27 08:14:58,426 WARN  org.apache.pekko.remote.transport.netty.
>>>>> NettyTransport   [] - Remote connection to [null] failed with
>>>>> org.jboss.netty.channel.ConnectTimeoutException: connection timed
>>>>> out: flink-taskmanager/172.20.237.127:34309
>>>>> 2023-11-27 08:15:22,402 INFO  org.apache.pekko.remote.transport.
>>>>&

Getting a list of tasks for a running job

2023-11-27 Thread Tauseef Janvekar
Dear Team,

How do we get the list of tasks that a particular job executes?
If I go to Task Manager then I do not see any tasks. I am also facing an
issue where the job manager is not able to access the task manager, but my
jobs are completing with no issues.

Any help is appreciated.

Thanks,
Tauseef


Job Manager and Task Manager unable to communicate

2023-11-27 Thread Tauseef Janvekar
Dear Team,

We are getting below error messages in our logs.
Any help on how to resolve would be greatly appreciated.

2023-11-27 08:14:29,712 INFO  org.apache.pekko.remote.transport.
ProtocolStateActor [] - No response from remote for outbound
association. Associate timed out after [2 ms].
2023-11-27 08:14:29,713 WARN  org.apache.pekko.remote.
ReliableDeliverySupervisor   [] - Association with remote system
[pekko.tcp://flink-metrics@flink-taskmanager:34309] has failed, address is
now gated for [50] ms. Reason: [Association failed with
[pekko.tcp://flink-metrics@flink-taskmanager:34309]] Caused by: [No
response from remote for outbound association. Associate timed out after [
2 ms].]
2023-11-27 08:14:29,730 WARN  org.apache.pekko.remote.transport.netty.
NettyTransport   [] - Remote connection to [null] failed with
org.jboss.netty.channel.ConnectTimeoutException: connection timed out:
flink-taskmanager/172.20.237.127:34309
2023-11-27 08:14:58,401 INFO  org.apache.pekko.remote.transport.
ProtocolStateActor [] - No response from remote for outbound
association. Associate timed out after [2 ms].
2023-11-27 08:14:58,402 WARN  org.apache.pekko.remote.
ReliableDeliverySupervisor   [] - Association with remote system
[pekko.tcp://flink-metrics@flink-taskmanager:34309] has failed, address is
now gated for [50] ms. Reason: [Association failed with
[pekko.tcp://flink-metrics@flink-taskmanager:34309]] Caused by: [No
response from remote for outbound association. Associate timed out after [
2 ms].]
2023-11-27 08:14:58,426 WARN  org.apache.pekko.remote.transport.netty.
NettyTransport   [] - Remote connection to [null] failed with
org.jboss.netty.channel.ConnectTimeoutException: connection timed out:
flink-taskmanager/172.20.237.127:34309
2023-11-27 08:15:22,402 INFO  org.apache.pekko.remote.transport.
ProtocolStateActor [] - No response from remote for outbound
association. Associate timed out after [2 ms].
2023-11-27 08:15:22,403 WARN  org.apache.pekko.remote.
ReliableDeliverySupervisor   [] - Association with remote system
[pekko.tcp://flink-metrics@flink-taskmanager:34309] has failed, address is
now gated for [50] ms. Reason: [Association failed with
[pekko.tcp://flink-metrics@flink-taskmanager:34309]] Caused by: [No
response from remote for outbound association. Associate timed out after [
2 ms].]
2023-11-27 08:15:22,434 WARN  org.apache.pekko.remote.transport.netty.
NettyTransport   [] - Remote connection to [null] failed with
org.jboss.netty.channel.ConnectTimeoutException: connection timed out:
flink-taskmanager/172.20.237.127:34309
2023-11-27 08:15:46,411 INFO  org.apache.pekko.remote.transport.
ProtocolStateActor [] - No response from remote for outbound
association. Associate timed out after [2 ms].
2023-11-27 08:15:46,412 WARN  org.apache.pekko.remote.
ReliableDeliverySupervisor   [] - Association with remote system
[pekko.tcp://flink-metrics@flink-taskmanager:34309] has failed, address is
now gated for [50] ms. Reason: [Association failed with
[pekko.tcp://flink-metrics@flink-taskmanager:34309]] Caused by: [No
response from remote for outbound association. Associate timed out after [
2 ms].]
2023-11-27 08:15:46,436 WARN  org.apache.pekko.remote.transport.netty.
NettyTransport   [] - Remote connection to [null] failed with
org.jboss.netty.channel.ConnectTimeoutException: connection timed out:
flink-taskmanager/172.20.237.127:34309
2023-11-27 08:16:10,434 INFO  org.apache.pekko.remote.transport.
ProtocolStateActor [] - No response from remote for outbound
association. Associate timed out after [2 ms].
2023-11-27 08:16:10,435 WARN  org.apache.pekko.remote.
ReliableDeliverySupervisor   [] - Association with remote system
[pekko.tcp://flink-metrics@flink-taskmanager:34309] has failed, address is
now gated for [50] ms. Reason: [Association failed with
[pekko.tcp://flink-metrics@flink-taskmanager:34309]] Caused by: [No
response from remote for outbound association. Associate timed out after [
2 ms].]
2023-11-27 08:16:10,477 WARN  org.apache.pekko.remote.transport.netty.
NettyTransport   [] - Remote connection to [null] failed with
org.jboss.netty.channel.ConnectTimeoutException: connection timed out:
flink-taskmanager/172.20.237.127:34309
2023-11-27 08:16:34,402 WARN  org.apache.pekko.remote.
ReliableDeliverySupervisor   [] - Association with remote system
[pekko.tcp://flink-metrics@flink-taskmanager:34309] has failed, address is
now gated for [50] ms. Reason: [Association failed with
[pekko.tcp://flink-metrics@flink-taskmanager:34309]] Caused by: [No
response from remote for outbound association. Associate timed out after [
2 ms].]
2023-11-27 08:16:34,402 INFO  org.apache.pekko.remote.transport.
ProtocolStateActor [] - No response from remote for outbound
association. Associate timed out after [2 ms].
2023-11-27 08:16:34,415 WARN  
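The gated addresses in these logs point at the TaskManagers' metrics query service on a randomly chosen port (34309 here), which the JobManager cannot reach through Kubernetes networking. Pinning that port and exposing it on the TaskManager pods often resolves these pekko association timeouts. A hedged flink-conf.yaml sketch (the port number 6170 is an arbitrary choice; any fixed port that your pod spec also exposes works):

```yaml
# Pin the internal metrics query service to a fixed port so it can be
# exposed on the TaskManager pods / headless service, instead of the
# default random port that the JobManager cannot reach.
metrics.internal.query-service.port: 6170
```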

Re: Confluent Kafka connection error

2023-11-24 Thread Tauseef Janvekar
Hi Hang,

Few more points regarding this issue

1. The issue does not reproduce in my local installation and occurs only on
the Kubernetes server installation.
2. We have used the Kubernetes operator to install Flink on the server.

Please let me know if any other info is required here.

Thanks,
Tauseef

On Fri, 24 Nov 2023 at 15:50, Tauseef Janvekar 
wrote:

> Hi Hang,
>
> I cross checked this issue multiple times. I also upgraded to flink 1.18
> but the issue persists.
>
> Can you please let me know a few guidelines on how to investigate this and
> fix it positively.
>
> Thanks,
> Tauseef
>
> On Thu, 23 Nov 2023 at 18:08, Tauseef Janvekar 
> wrote:
>
>> Thanks Hang.
>>
>> I got it now. I will check on this and get back to you.
>>
>> Thanks,
>> Tauseef.
>>
>> On Thu, 23 Nov 2023 at 17:29, Hang Ruan  wrote:
>>
>>> Hi, Tauseef.
>>>
>>> This error is not that you can not access the Kafka cluster. Actually,
>>> this error means that the JM cannot access its TM.
>>> Have you ever checked whether the JM is able to access the TM?
>>>
>>> Best,
>>> Hang
>>>
>>> Tauseef Janvekar  于2023年11月23日周四 16:04写道:
>>>
>>>> Dear Team,
>>>>
>>>> We are facing the below issue while connecting to confluent kafka
>>>> Can someone please help here.
>>>>
>>>> 2023-11-23 06:09:36,989 INFO  org.apache.flink.runtime.executiongraph.
>>>> ExecutionGraph   [] - Source: src_source -> Sink: Print to Std. Out
>>>> (1/1) 
>>>> (496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0)
>>>> switched from SCHEDULED to DEPLOYING.
>>>> 2023-11-23 06:09:36,994 INFO  org.apache.flink.runtime.executiongraph.
>>>> ExecutionGraph   [] - Deploying Source: src_source -> Sink: Print
>>>> to Std. Out (1/1) (attempt #0) with attempt id 
>>>> 496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0
>>>> and vertex id cbc357ccb763df2852fee8c4fc7d55f2_0 to flink-taskmanager:
>>>> 6122-23f057 @ flink-taskmanager.flink.svc.cluster.local (dataPort=46589)
>>>> with allocation id 80fe79389102bd305dd87a00247413eb
>>>> 2023-11-23 06:09:37,011 INFO
>>>>  org.apache.kafka.common.security.authenticator.AbstractLogin [] -
>>>> Successfully logged in.
>>>> 2023-11-23 06:09:37,109 WARN  org.apache.kafka.clients.admin.
>>>> AdminClientConfig [] - The configuration 'key.deserializer'
>>>> was supplied but isn't a known config.
>>>> 2023-11-23 06:09:37,109 WARN  org.apache.kafka.clients.admin.
>>>> AdminClientConfig [] - The configuration
>>>> 'value.deserializer' was supplied but isn't a known config.
>>>> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
>>>> AdminClientConfig [] - The configuration 'client.id.prefix'
>>>> was supplied but isn't a known config.
>>>> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
>>>> AdminClientConfig [] - The configuration '
>>>> partition.discovery.interval.ms' was supplied but isn't a known config.
>>>> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
>>>> AdminClientConfig [] - The configuration
>>>> 'commit.offsets.on.checkpoint' was supplied but isn't a known config.
>>>> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
>>>> AdminClientConfig [] - The configuration
>>>> 'enable.auto.commit' was supplied but isn't a known config.
>>>> 2023-11-23 06:09:37,111 WARN  org.apache.kafka.clients.admin.
>>>> AdminClientConfig [] - The configuration
>>>> 'auto.offset.reset' was supplied but isn't a known config.
>>>> 2023-11-23 06:09:37,113 INFO  org.apache.kafka.common.utils.
>>>> AppInfoParser  [] - Kafka version: 3.2.2
>>>> 2023-11-23 06:09:37,114 INFO  org.apache.kafka.common.utils.
>>>> AppInfoParser  [] - Kafka commitId: 38c22ad893fb6cf5
>>>> 2023-11-23 06:09:37,114 INFO  org.apache.kafka.common.utils.
>>>> AppInfoParser  [] - Kafka startTimeMs: 1700719777111
>>>> 2023-11-23 06:09:37,117 INFO
>>>>  org.apache.flink.connector.kafka.source.enumerator.
>>>> KafkaSourceEnumerator [] - Starting the KafkaSourceEnumerator for
>>>> consumer group null without periodic partition discovery.
&

Re: Confluent Kafka connection error

2023-11-24 Thread Tauseef Janvekar
Hi Hang,

I cross checked this issue multiple times. I also upgraded to flink 1.18
but the issue persists.

Can you please let me know a few guidelines on how to investigate this and
fix it positively.

Thanks,
Tauseef

On Thu, 23 Nov 2023 at 18:08, Tauseef Janvekar 
wrote:

> Thanks Hang.
>
> I got it now. I will check on this and get back to you.
>
> Thanks,
> Tauseef.
>
> On Thu, 23 Nov 2023 at 17:29, Hang Ruan  wrote:
>
>> Hi, Tauseef.
>>
>> This error is not that you can not access the Kafka cluster. Actually,
>> this error means that the JM cannot access its TM.
>> Have you ever checked whether the JM is able to access the TM?
>>
>> Best,
>> Hang
>>
>> Tauseef Janvekar  于2023年11月23日周四 16:04写道:
>>
>>> Dear Team,
>>>
>>> We are facing the below issue while connecting to confluent kafka
>>> Can someone please help here.
>>>
>>> 2023-11-23 06:09:36,989 INFO  org.apache.flink.runtime.executiongraph.
>>> ExecutionGraph   [] - Source: src_source -> Sink: Print to Std. Out
>>> (1/1) 
>>> (496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0)
>>> switched from SCHEDULED to DEPLOYING.
>>> 2023-11-23 06:09:36,994 INFO  org.apache.flink.runtime.executiongraph.
>>> ExecutionGraph   [] - Deploying Source: src_source -> Sink: Print
>>> to Std. Out (1/1) (attempt #0) with attempt id 
>>> 496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0
>>> and vertex id cbc357ccb763df2852fee8c4fc7d55f2_0 to flink-taskmanager:
>>> 6122-23f057 @ flink-taskmanager.flink.svc.cluster.local (dataPort=46589)
>>> with allocation id 80fe79389102bd305dd87a00247413eb
>>> 2023-11-23 06:09:37,011 INFO
>>>  org.apache.kafka.common.security.authenticator.AbstractLogin [] -
>>> Successfully logged in.
>>> 2023-11-23 06:09:37,109 WARN  org.apache.kafka.clients.admin.
>>> AdminClientConfig [] - The configuration 'key.deserializer'
>>> was supplied but isn't a known config.
>>> 2023-11-23 06:09:37,109 WARN  org.apache.kafka.clients.admin.
>>> AdminClientConfig [] - The configuration
>>> 'value.deserializer' was supplied but isn't a known config.
>>> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
>>> AdminClientConfig [] - The configuration 'client.id.prefix'
>>> was supplied but isn't a known config.
>>> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
>>> AdminClientConfig [] - The configuration '
>>> partition.discovery.interval.ms' was supplied but isn't a known config.
>>> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
>>> AdminClientConfig [] - The configuration
>>> 'commit.offsets.on.checkpoint' was supplied but isn't a known config.
>>> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
>>> AdminClientConfig [] - The configuration
>>> 'enable.auto.commit' was supplied but isn't a known config.
>>> 2023-11-23 06:09:37,111 WARN  org.apache.kafka.clients.admin.
>>> AdminClientConfig [] - The configuration 'auto.offset.reset'
>>> was supplied but isn't a known config.
>>> 2023-11-23 06:09:37,113 INFO  org.apache.kafka.common.utils.
>>> AppInfoParser  [] - Kafka version: 3.2.2
>>> 2023-11-23 06:09:37,114 INFO  org.apache.kafka.common.utils.
>>> AppInfoParser  [] - Kafka commitId: 38c22ad893fb6cf5
>>> 2023-11-23 06:09:37,114 INFO  org.apache.kafka.common.utils.
>>> AppInfoParser  [] - Kafka startTimeMs: 1700719777111
>>> 2023-11-23 06:09:37,117 INFO
>>>  org.apache.flink.connector.kafka.source.enumerator.
>>> KafkaSourceEnumerator [] - Starting the KafkaSourceEnumerator for
>>> consumer group null without periodic partition discovery.
>>> 2023-11-23 06:09:37,199 INFO  org.apache.flink.runtime.executiongraph.
>>> ExecutionGraph   [] - Source: src_source -> Sink: Print to Std. Out
>>> (1/1) 
>>> (496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0)
>>> switched from DEPLOYING to INITIALIZING.
>>> 2023-11-23 06:09:37,302 INFO
>>>  org.apache.flink.runtime.source.coordinator.SourceCoordinator [] -
>>> Source Source: src_source registering reader for parallel task 0 (#0) @
>>> flink-taskmanager
>>> 2023-11-23 06:09:37,313 INFO  org.apache.flink.runtime.executiongraph.
>>> ExecutionGraph   [] - Source: src_source -> Sink: Print to Std. Out
>>>

Re: Confluent Kafka connection error

2023-11-23 Thread Tauseef Janvekar
Thanks Hang.

I got it now. I will check on this and get back to you.

Thanks,
Tauseef.

On Thu, 23 Nov 2023 at 17:29, Hang Ruan  wrote:

> Hi, Tauseef.
>
> This error is not that you can not access the Kafka cluster. Actually,
> this error means that the JM cannot access its TM.
> Have you ever checked whether the JM is able to access the TM?
>
> Best,
> Hang
>
> Tauseef Janvekar  于2023年11月23日周四 16:04写道:
>
>> Dear Team,
>>
>> We are facing the below issue while connecting to confluent kafka
>> Can someone please help here.
>>
>> 2023-11-23 06:09:36,989 INFO  org.apache.flink.runtime.executiongraph.
>> ExecutionGraph   [] - Source: src_source -> Sink: Print to Std. Out (
>> 1/1) (496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0)
>> switched from SCHEDULED to DEPLOYING.
>> 2023-11-23 06:09:36,994 INFO  org.apache.flink.runtime.executiongraph.
>> ExecutionGraph   [] - Deploying Source: src_source -> Sink: Print to
>> Std. Out (1/1) (attempt #0) with attempt id 
>> 496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0
>> and vertex id cbc357ccb763df2852fee8c4fc7d55f2_0 to flink-taskmanager:
>> 6122-23f057 @ flink-taskmanager.flink.svc.cluster.local (dataPort=46589)
>> with allocation id 80fe79389102bd305dd87a00247413eb
>> 2023-11-23 06:09:37,011 INFO
>>  org.apache.kafka.common.security.authenticator.AbstractLogin [] -
>> Successfully logged in.
>> 2023-11-23 06:09:37,109 WARN  org.apache.kafka.clients.admin.
>> AdminClientConfig [] - The configuration 'key.deserializer'
>> was supplied but isn't a known config.
>> 2023-11-23 06:09:37,109 WARN  org.apache.kafka.clients.admin.
>> AdminClientConfig [] - The configuration 'value.deserializer'
>> was supplied but isn't a known config.
>> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
>> AdminClientConfig [] - The configuration 'client.id.prefix'
>> was supplied but isn't a known config.
>> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
>> AdminClientConfig [] - The configuration '
>> partition.discovery.interval.ms' was supplied but isn't a known config.
>> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
>> AdminClientConfig [] - The configuration
>> 'commit.offsets.on.checkpoint' was supplied but isn't a known config.
>> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
>> AdminClientConfig [] - The configuration 'enable.auto.commit'
>> was supplied but isn't a known config.
>> 2023-11-23 06:09:37,111 WARN  org.apache.kafka.clients.admin.
>> AdminClientConfig [] - The configuration 'auto.offset.reset'
>> was supplied but isn't a known config.
>> 2023-11-23 06:09:37,113 INFO  org.apache.kafka.common.utils.AppInfoParser
>>  [] - Kafka version: 3.2.2
>> 2023-11-23 06:09:37,114 INFO  org.apache.kafka.common.utils.AppInfoParser
>>  [] - Kafka commitId: 38c22ad893fb6cf5
>> 2023-11-23 06:09:37,114 INFO  org.apache.kafka.common.utils.AppInfoParser
>>  [] - Kafka startTimeMs: 1700719777111
>> 2023-11-23 06:09:37,117 INFO
>>  org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
>> [] - Starting the KafkaSourceEnumerator for consumer group null without
>> periodic partition discovery.
>> 2023-11-23 06:09:37,199 INFO  org.apache.flink.runtime.executiongraph.
>> ExecutionGraph   [] - Source: src_source -> Sink: Print to Std. Out (
>> 1/1) (496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0)
>> switched from DEPLOYING to INITIALIZING.
>> 2023-11-23 06:09:37,302 INFO
>>  org.apache.flink.runtime.source.coordinator.SourceCoordinator [] -
>> Source Source: src_source registering reader for parallel task 0 (#0) @
>> flink-taskmanager
>> 2023-11-23 06:09:37,313 INFO  org.apache.flink.runtime.executiongraph.
>> ExecutionGraph   [] - Source: src_source -> Sink: Print to Std. Out (
>> 1/1) (496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0)
>> switched from INITIALIZING to RUNNING.
>> 2023-11-23 06:09:38,713 INFO
>>  org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
>> [] - Discovered new partitions: [aiops-3, aiops-2, aiops-1, aiops-0,
>> aiops-5, aiops-4]
>> 2023-11-23 06:09:38,719 INFO
>>  org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
>> [] - Assigning splits to readers {0=[[Partition: aiops-1, StartingOffset:
>> -1, StoppingOffset: -922337203

Confluent Kafka connection error

2023-11-22 Thread Tauseef Janvekar
Dear Team,

We are facing the issue below while connecting to Confluent Kafka.
Can someone please help us here?

2023-11-23 06:09:36,989 INFO  org.apache.flink.runtime.executiongraph.
ExecutionGraph   [] - Source: src_source -> Sink: Print to Std. Out (1/1)
(496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0)
switched from SCHEDULED to DEPLOYING.
2023-11-23 06:09:36,994 INFO  org.apache.flink.runtime.executiongraph.
ExecutionGraph   [] - Deploying Source: src_source -> Sink: Print to Std.
Out (1/1) (attempt #0) with attempt id
496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0
and vertex id cbc357ccb763df2852fee8c4fc7d55f2_0 to flink-taskmanager:6122-
23f057 @ flink-taskmanager.flink.svc.cluster.local (dataPort=46589) with
allocation id 80fe79389102bd305dd87a00247413eb
2023-11-23 06:09:37,011 INFO
 org.apache.kafka.common.security.authenticator.AbstractLogin [] -
Successfully logged in.
2023-11-23 06:09:37,109 WARN  org.apache.kafka.clients.admin.
AdminClientConfig [] - The configuration 'key.deserializer' was
supplied but isn't a known config.
2023-11-23 06:09:37,109 WARN  org.apache.kafka.clients.admin.
AdminClientConfig [] - The configuration 'value.deserializer'
was supplied but isn't a known config.
2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
AdminClientConfig [] - The configuration 'client.id.prefix' was
supplied but isn't a known config.
2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
AdminClientConfig [] - The configuration '
partition.discovery.interval.ms' was supplied but isn't a known config.
2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
AdminClientConfig [] - The configuration
'commit.offsets.on.checkpoint' was supplied but isn't a known config.
2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
AdminClientConfig [] - The configuration 'enable.auto.commit'
was supplied but isn't a known config.
2023-11-23 06:09:37,111 WARN  org.apache.kafka.clients.admin.
AdminClientConfig [] - The configuration 'auto.offset.reset'
was supplied but isn't a known config.
2023-11-23 06:09:37,113 INFO  org.apache.kafka.common.utils.AppInfoParser
   [] - Kafka version: 3.2.2
2023-11-23 06:09:37,114 INFO  org.apache.kafka.common.utils.AppInfoParser
   [] - Kafka commitId: 38c22ad893fb6cf5
2023-11-23 06:09:37,114 INFO  org.apache.kafka.common.utils.AppInfoParser
   [] - Kafka startTimeMs: 1700719777111
2023-11-23 06:09:37,117 INFO
 org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
[] - Starting the KafkaSourceEnumerator for consumer group null without
periodic partition discovery.
2023-11-23 06:09:37,199 INFO  org.apache.flink.runtime.executiongraph.
ExecutionGraph   [] - Source: src_source -> Sink: Print to Std. Out (1/1)
(496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0)
switched from DEPLOYING to INITIALIZING.
2023-11-23 06:09:37,302 INFO  org.apache.flink.runtime.source.coordinator.
SourceCoordinator [] - Source Source: src_source registering reader for
parallel task 0 (#0) @ flink-taskmanager
2023-11-23 06:09:37,313 INFO  org.apache.flink.runtime.executiongraph.
ExecutionGraph   [] - Source: src_source -> Sink: Print to Std. Out (1/1)
(496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0)
switched from INITIALIZING to RUNNING.
2023-11-23 06:09:38,713 INFO
 org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
[] - Discovered new partitions: [aiops-3, aiops-2, aiops-1, aiops-0, aiops-5,
aiops-4]
2023-11-23 06:09:38,719 INFO
 org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
[] - Assigning splits to readers {0=[[Partition: aiops-1, StartingOffset: -1,
StoppingOffset: -9223372036854775808], [Partition: aiops-2, StartingOffset:
-1, StoppingOffset: -9223372036854775808], [Partition: aiops-0,
StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition:
aiops-4, StartingOffset: -1, StoppingOffset: -9223372036854775808], [
Partition: aiops-3, StartingOffset: -1, StoppingOffset: -9223372036854775808],
[Partition: aiops-5, StartingOffset: -1, StoppingOffset: -
9223372036854775808]]}
2023-11-23 06:09:57,651 INFO  akka.remote.transport.ProtocolStateActor
[] - No response from remote for outbound association.
Associate timed out after [2 ms].
2023-11-23 06:09:57,651 WARN  akka.remote.ReliableDeliverySupervisor
[] - Association with remote system
[akka.tcp://flink-metrics@flink-taskmanager:33837] has failed, address is
now gated for [50] ms. Reason: [Association failed with
[akka.tcp://flink-metrics@flink-taskmanager:33837]] Caused by: [No response
from remote for outbound association. Associate timed out after [2 ms].]
2023-11-23 06:09:57,668 WARN  akka.remote.transport.netty.NettyTransport
[] - Remote connection to [null] failed 
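Hang's earlier diagnosis (the JM cannot reach its TM) can be checked directly against the ports the log reports. A hedged sketch, not a definitive procedure — the namespace `flink` and the `component=jobmanager` label are assumptions about the deployment; ports 6122 (RPC) and 33837 (the metrics endpoint from the akka association failure) come from the log above:

```shell
# Find the JobManager pod (label selector is an assumption about the deployment).
JM_POD=$(kubectl -n flink get pods -l component=jobmanager \
  -o jsonpath='{.items[0].metadata.name}')

# From inside the JM pod, probe the TaskManager service on the ports the log
# mentions. nc must be present in the image; if it is not, run a throwaway
# busybox pod in the same namespace instead.
kubectl -n flink exec "$JM_POD" -- nc -zv -w 3 flink-taskmanager 6122
kubectl -n flink exec "$JM_POD" -- nc -zv -w 3 flink-taskmanager 33837
```

If the second probe times out, that matches the "Association with remote system [akka.tcp://flink-metrics@flink-taskmanager:33837] has failed" warning and points at a NetworkPolicy, service, or DNS problem rather than at Kafka.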

flatmap returns a custom class object

2023-11-21 Thread Tauseef Janvekar
Dear Team,

I am getting the following error while using flatMap.
Caused by: org.apache.flink.client.program.ProgramInvocationException: The
main method caused an error: The return type of function
'defineWorkflow(OtelTransformerJob.java:75)' could not be determined
automatically, due to type erasure. You can give type information hints by
using the returns(...) method on the result of the transformation call, or
by letting your function implement the 'ResultTypeQueryable' interface.

On further investigation I found that flatMap should end with
.returns(Types.STRING) when the return type is a STRING.

My question is: if my return type is a custom DTO named mypckg.MyCustomDTO,
how do I pass parameters to the returns(...) method?

Thanks,
Tauseef.
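For a custom type, the same hint mechanism applies: pass the class itself, or a TypeHint for generic types, to returns(...). A minimal sketch — the DTO below is a hypothetical stand-in for mypckg.MyCustomDTO, and only the returns(...) calls are the point:

```java
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class ReturnsExample {
    // Hypothetical POJO standing in for mypckg.MyCustomDTO.
    public static class MyCustomDTO {
        public String value;
        public MyCustomDTO() {}
        public MyCustomDTO(String value) { this.value = value; }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<MyCustomDTO> out = env.fromElements("a", "b")
            // The lambda's output type is erased, hence the returns(...) hint.
            .flatMap((String line, Collector<MyCustomDTO> c) ->
                c.collect(new MyCustomDTO(line)))
            // For a non-generic class, the Class token is enough:
            .returns(MyCustomDTO.class);

        // If the DTO itself carried type parameters, a TypeHint would preserve
        // what erasure drops, e.g.:
        // .returns(TypeInformation.of(new TypeHint<MyCustomDTO>() {}));

        out.print();
        env.execute("returns-example");
    }
}
```

With the Class token, Flink builds the TypeInformation reflectively; the TypeHint form is only needed when the return type is itself generic.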


Java 17 as default

2023-11-14 Thread Tauseef Janvekar
Dear Team,

I saw in the 1.18 documentation that Java 17 is not supported by default and
that the image is built from Java 11. I guess there is a separate Docker image
for Java 17.
When do we plan to release the main image with Java 17?

Thanks,
Tauseef
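For what it's worth, the project does publish Java 17 image variants alongside the Java 11 default; the exact tag names below are an assumption — check the flink repository on Docker Hub for the current list:

```shell
# Java 17 variants are published as separately tagged images.
docker pull flink:1.18.0-scala_2.12-java17

# The unsuffixed tag still defaults to Java 11.
docker pull flink:1.18.0
```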


Elasticsearch source

2023-11-14 Thread Tauseef Janvekar
Dear Team,

We were looking for an Elasticsearch source connector for Flink, but I could
not find any example.

Is it possible to read from Elasticsearch? Technically it is not a streaming
source, but it should still be supported somehow, as we want to apply some
transformation logic based on the Elasticsearch entries.

Any help here would be much appreciated.

Thanks,
Tauseef
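There is no official Elasticsearch source connector, so a common workaround is a custom source that reads the index once and emits the hits as a bounded stream. A hedged sketch against the 7.x high-level REST client — the host, index name, and page size are assumptions, and for a large index a scroll or search_after loop would replace the single page read:

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
import org.apache.http.HttpHost;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;

// Hedged sketch: reads the documents of a hypothetical "my-index" once and
// emits each _source as a JSON string. Not an official connector.
public class ElasticsearchBoundedSource extends RichSourceFunction<String> {
    private volatile boolean running = true;
    private transient RestHighLevelClient client;

    @Override
    public void open(Configuration parameters) {
        client = new RestHighLevelClient(
            RestClient.builder(new HttpHost("localhost", 9200, "http")));
    }

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        SearchRequest request = new SearchRequest("my-index");
        request.source(new SearchSourceBuilder()
            .query(QueryBuilders.matchAllQuery())
            .size(1000)); // single page; use scroll/search_after for more
        SearchResponse response = client.search(request, RequestOptions.DEFAULT);
        for (SearchHit hit : response.getHits().getHits()) {
            if (!running) break;
            ctx.collect(hit.getSourceAsString());
        }
        // run() returning makes this a bounded (finite) source.
    }

    @Override
    public void cancel() { running = false; }

    @Override
    public void close() throws Exception { client.close(); }
}
```

The emitted stream can then feed whatever transformation logic you need, e.g. `env.addSource(new ElasticsearchBoundedSource()).flatMap(...)`.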


Re: Error in /jars/upload curl request

2023-11-07 Thread Tauseef Janvekar
Hi Chen,

Thanks for your help. It worked fine.

Thanks,
Tauseef

On Tue, 7 Nov 2023 at 16:20, Yu Chen  wrote:

> Hi Tauseef,
>
> That's really dependent on the environment you're actually running in. But
> I'm guessing you're using ingress to route your requests to the JM POD.
> If so, I'd suggest you adjust the value of
> nginx.ingress.kubernetes.io/proxy-body-size.
>
> Following is an example.
> ```
> apiVersion: extensions/v1beta1
> kind: Ingress
> metadata:
>   annotations:
>     kubernetes.io/ingress.class: nginx
>     nginx.ingress.kubernetes.io/proxy-body-size: 250m # change this
>   name: xxx
>   namespace: xxx
> spec:
>   rules:
>   - host: flink-nyquist.hvreaning.com
>     http:
>       paths:
>       - backend:
>           serviceName: xxx
>           servicePort: 8081
> ```
>
> Please let me know if there are any other problems.
>
> Best,
> Yu Chen
>
> > On 7 Nov 2023 at 18:40, Tauseef Janvekar wrote:
> >
> > Hi Chen,
> >
> > We are not using nginx anywhere on the server(kubernetes cluster) or on
> my client(my local machine).
> > Not sure how to proceed on this.
> >
> > Thanks,
> > Tauseef
> >
> > On Tue, 7 Nov 2023 at 13:36, Yu Chen  wrote:
> > Hi Tauseef,
> >
> > The error was caused by the nginx configuration and was not a flink
> problem.
> >
> > You can find many related solutions on the web [1].
> >
> > Best,
> > Yu Chen
> >
> > [1]
> https://stackoverflow.com/questions/24306335/413-request-entity-too-large-file-upload-issue
> >
> >> On 7 Nov 2023 at 15:14, Tauseef Janvekar wrote:
> >>
> >> Hi Chen,
> >>
> >> Now I get a different error message.
> >> root@R914SK4W:~/learn-building-flink-applications-in-java-exercises/exercises#
> >> curl -X POST -H "Expect:" -F "jarfile=@./target/travel-itinerary-0.1.jar" https://flink-nyquist.hvreaning.com/jars/upload
> >> 
> >> 413 Request Entity Too Large
> >> 
> >> 413 Request Entity Too Large
> >> nginx
> >> 
> >> 
> >>
> >> Thanks
> >> Tauseef
> >>
> >> On Tue, 7 Nov 2023 at 06:19, Chen Yu  wrote:
> >>  Hi Tauseef,
> >>
> >> Adding an @ sign before the path will resolve your problem.
> >> And I verified that both web and postman upload the jar file properly
> on the master branch code.
> >> If you are still having problems then you can provide some more
> detailed information.
> >>
> >> Here are some documents of curl by `man curl`.
> >>
> >>   -F, --form <name=content>
> >>   (HTTP SMTP IMAP) For HTTP protocol family, this lets curl
> emulate a filled-in form in which a user  has
> >>   pressed the submit button. This causes curl to POST data
> using the Content-Type multipart/form-data according to RFC 2388.
> >>
> >>   For SMTP and IMAP protocols, this is the means to compose
> a multipart mail message to transmit.
> >>
> >>   This enables uploading of binary files etc. To force the
> 'content' part to be a file, prefix  the  file
> >>   name  with an @ sign. To just get the content part from a
> file, prefix the file name with the symbol <.
> >>   The difference between @ and < is then that @ makes a
> file get attached in the post as a  file  upload,
> >>   while the < makes a text field and just get the contents
> for that text field from a file.
> >>
> >>
> >> Best,
> >> Yu Chen
> >> From: Tauseef Janvekar
> >> Sent: 6 Nov 2023 22:27
> >> To: user@flink.apache.org
> >> Subject: Error in /jars/upload curl request
> >>
> >> I am using curl request to
> upload a jar but it throws the below error
> >>
> >> 
> >> Received unknown attribute jarfile.
> >>
> >> Not sure what is wrong here. I am following the standard documentation
> >> https://nightlies.apache.org/flink/flink-docs-master/docs/ops/rest_api/
> >>
> >> Please let me know if I have to use some other command to upload a jar
> using "/jars/upload" endpoint
> >>
> >> I also tried to upload using webui but it hangs continuously and only
> calls GET api with 200 success- https://flink-nyquist.hvreaning.com/jars
> >>
> >> Thanks,
> >> Tauseef
> >
>
>
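The proxy-body-size fix quoted above can also be applied to an existing ingress without editing the manifest — a hedged sketch in which the ingress name `flink-ingress` and the namespace `flink` are placeholders:

```shell
# Raise the nginx ingress body-size limit so jar uploads are no longer
# rejected with "413 Request Entity Too Large".
kubectl -n flink annotate ingress flink-ingress \
  nginx.ingress.kubernetes.io/proxy-body-size=250m --overwrite

# Confirm the annotation is in place:
kubectl -n flink describe ingress flink-ingress | grep proxy-body-size
```

The nginx ingress controller picks up annotation changes without a pod restart, so the next upload attempt should go through.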


Re: Error in /jars/upload curl request

2023-11-07 Thread Tauseef Janvekar
Hi Chen,

We are not using nginx anywhere on the server(kubernetes cluster) or on my
client(my local machine).
Not sure how to proceed on this.

Thanks,
Tauseef

On Tue, 7 Nov 2023 at 13:36, Yu Chen  wrote:

> Hi Tauseef,
>
> The error was caused by the nginx configuration and was not a flink
> problem.
>
> You can find many related solutions on the web [1].
>
> Best,
> Yu Chen
>
> [1]
> https://stackoverflow.com/questions/24306335/413-request-entity-too-large-file-upload-issue
>
> On 7 Nov 2023 at 15:14, Tauseef Janvekar wrote:
>
> Hi Chen,
>
> Now I get a different error message.
> root@R914SK4W:~/learn-building-flink-applications-in-java-exercises/exercises#
> curl -X POST -H "Expect:" -F "jarfile=@./target/travel-itinerary-0.1.jar" https://flink-nyquist.hvreaning.com/jars/upload
> 
> 413 Request Entity Too Large
> 
> 413 Request Entity Too Large
> nginx
> 
> 
>
> Thanks
> Tauseef
>
> On Tue, 7 Nov 2023 at 06:19, Chen Yu  wrote:
>
>>  Hi Tauseef,
>>
>> Adding an @ sign before the path will resolve your problem.
>> And I verified that both web and postman upload the jar file properly on
>> the master branch code.
>> If you are still having problems then you can provide some more detailed
>> information.
>>
>> Here are some documents of curl by `man curl`.
>>
>>   -F, --form <name=content>
>>   (HTTP SMTP IMAP) For HTTP protocol family, this lets curl
>> emulate a filled-in form in which a user  has
>>   pressed the submit button. This causes curl to POST data
>> using the Content-Type multipart/form-data according to RFC 2388.
>>
>>   For SMTP and IMAP protocols, this is the means to compose a
>> multipart mail message to transmit.
>>
>>   This enables uploading of binary files etc. To force the
>> 'content' part to be a file, prefix  the  file
>>   name  with an @ sign. To just get the content part from a
>> file, prefix the file name with the symbol <.
>>   The difference between @ and < is then that @ makes a file
>> get attached in the post as a  file  upload,
>>   while the < makes a text field and just get the contents
>> for that text field from a file.
>>
>>
>> Best,
>> Yu Chen
>> --
>> *From:* Tauseef Janvekar
>> *Sent:* 6 Nov 2023 22:27
>> *To:* user@flink.apache.org
>> *Subject:* Error in /jars/upload curl request
>>
>> I am using curl request to upload a jar but it throws the below error
>>
>> 
>> Received unknown attribute jarfile.
>>
>> Not sure what is wrong here. I am following the standard documentation
>> https://nightlies.apache.org/flink/flink-docs-master/docs/ops/rest_api/
>>
>> Please let me know if I have to use some other command to upload a jar
>> using "/jars/upload" endpoint
>>
>> I also tried to upload using webui but it hangs continuously and only
>> calls GET api with 200 success- https://flink-nyquist.hvreaning.com/jars
>>
>> Thanks,
>> Tauseef
>>
>
>


Re: Error in /jars/upload curl request

2023-11-06 Thread Tauseef Janvekar
Hi Chen,

Now I get a different error message.
root@R914SK4W:~/learn-building-flink-applications-in-java-exercises/exercises#
curl -X POST -H "Expect:" -F "jarfile=@./target/travel-itinerary-0.1.jar" https://flink-nyquist.hvreaning.com/jars/upload

413 Request Entity Too Large

413 Request Entity Too Large
nginx



Thanks
Tauseef

On Tue, 7 Nov 2023 at 06:19, Chen Yu  wrote:

>  Hi Tauseef,
>
> Adding an @ sign before the path will resolve your problem.
> And I verified that both web and postman upload the jar file properly on
> the master branch code.
> If you are still having problems then you can provide some more detailed
> information.
>
> Here are some documents of curl by `man curl`.
>
>   -F, --form <name=content>
>   (HTTP SMTP IMAP) For HTTP protocol family, this lets curl
> emulate a filled-in form in which a user  has
>   pressed the submit button. This causes curl to POST data
> using the Content-Type multipart/form-data according to RFC 2388.
>
>   For SMTP and IMAP protocols, this is the means to compose a
> multipart mail message to transmit.
>
>   This enables uploading of binary files etc. To force the
> 'content' part to be a file, prefix  the  file
>   name  with an @ sign. To just get the content part from a
> file, prefix the file name with the symbol <.
>   The difference between @ and < is then that @ makes a file
> get attached in the post as a  file  upload,
>   while the < makes a text field and just get the contents for
> that text field from a file.
>
>
> Best,
> Yu Chen
> --
> *From:* Tauseef Janvekar
> *Sent:* 6 Nov 2023 22:27
> *To:* user@flink.apache.org
> *Subject:* Error in /jars/upload curl request
>
> I am using curl request to upload a jar but it throws the below error
>
> [image: image.png]
> Received unknown attribute jarfile.
>
> Not sure what is wrong here. I am following the standard documentation
> https://nightlies.apache.org/flink/flink-docs-master/docs/ops/rest_api/
>
> Please let me know if I have to use some other command to upload a jar
> using "/jars/upload" endpoint
>
> I also tried to upload using webui but it hangs continuously and only
> calls GET api with 200 success- https://flink-nyquist.hvreaning.com/jars
>
> Thanks,
> Tauseef
>


Error in /jars/upload curl request

2023-11-06 Thread Tauseef Janvekar
I am using curl request to upload a jar but it throws the below error

[image: image.png]
Received unknown attribute jarfile.

Not sure what is wrong here. I am following the standard documentation
https://nightlies.apache.org/flink/flink-docs-master/docs/ops/rest_api/

Please let me know if I have to use some other command to upload a jar
using "/jars/upload" endpoint

I also tried to upload using webui but it hangs continuously and only calls
GET api with 200 success- https://flink-nyquist.hvreaning.com/jars

Thanks,
Tauseef
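The fix that emerged in this thread is the `@` prefix on the file path, which is what the REST docs' multipart form expects. The working form of the request, with the host and jar path copied from the messages above:

```shell
# The @ makes curl attach the jar as a multipart file upload; without it,
# curl sends the path as a plain text field and the REST API rejects the
# request with "Received unknown attribute jarfile."
curl -X POST -H "Expect:" \
  -F "jarfile=@./target/travel-itinerary-0.1.jar" \
  https://flink-nyquist.hvreaning.com/jars/upload
```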