[jira] [Comment Edited] (FLINK-20628) Port RabbitMQ Sources to FLIP-27 API

2024-05-16 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17846971#comment-17846971
 ] 

Ahmed Hamdy edited comment on FLINK-20628 at 5/16/24 2:25 PM:
--

[~lorenzo.affetti] could you help with this one if you have capacity? 



was (Author: JIRAUSER280246):
[~affo] could you help with this one if you have capacity? 

> Port RabbitMQ Sources to FLIP-27 API
> 
>
> Key: FLINK-20628
> URL: https://issues.apache.org/jira/browse/FLINK-20628
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors/ RabbitMQ
>Reporter: Jan Westphal
>Assignee: RocMarshal
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available, 
> stale-assigned
>
> *Structure*
> The new RabbitMQ Source will have three components:
>  * RabbitMQ enumerator that receives one RabbitMQ Channel Config.
>  * RabbitMQ splits that contain the RabbitMQ Channel Config.
>  * RabbitMQ Readers which subscribe to the same RabbitMQ channel and receive 
> the messages (automatically load balanced by RabbitMQ).
> *Checkpointing Enumerators*
> The enumerator only needs to checkpoint the RabbitMQ channel config since the 
> continuous discovery of new unread/unhandled messages is taken care of by the 
> subscribed RabbitMQ readers and RabbitMQ itself.
> *Checkpointing Readers*
> The new RabbitMQ Source needs to ensure that every reader can be checkpointed.
> Since RabbitMQ is non-persistent and cannot be read by offset, a combined 
> usage of checkpoints and message acknowledgments is necessary. Until a 
> received message is checkpointed by a reader, it will stay in an 
> unacknowledged state. As soon as the checkpoint is created, the messages from 
> the last checkpoint can be acknowledged as handled against RabbitMQ and thus 
> will be deleted only then. Messages need to be acknowledged one by one as 
> messages are handled by each SourceReader individually.
> When deserializing the messages we will make use of the implementation in the 
> existing RabbitMQ Source.
> *Message Delivery Guarantees* 
> Unacknowledged messages of a reader will be redelivered by RabbitMQ 
> automatically to other consumers of the same channel if the reader goes down.
>  
> This Source is going to only support at-least-once as this is the default 
> RabbitMQ behavior and thus everything else would require changes to RabbitMQ 
> itself or would impair the idea of parallelizing SourceReaders.
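
For illustration, the reader-side acknowledgement flow described above could be sketched as follows (class and method names are illustrative assumptions, not the actual connector code): delivery tags are buffered per checkpoint and acknowledged one by one once the checkpoint completes.

{code:java}
import com.rabbitmq.client.Channel;

import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch of per-checkpoint acknowledgement; names are illustrative only. */
class RabbitMQAckTracker {
    private final Channel channel;
    // Delivery tags received since the last checkpoint.
    private final List<Long> pendingTags = new ArrayList<>();
    // Tags frozen per checkpoint id, acknowledged on checkpoint completion.
    private final Map<Long, List<Long>> tagsByCheckpoint = new HashMap<>();

    RabbitMQAckTracker(Channel channel) {
        this.channel = channel;
    }

    void onMessage(long deliveryTag) {
        pendingTags.add(deliveryTag);
    }

    /** Called from snapshotState(checkpointId). */
    void snapshot(long checkpointId) {
        tagsByCheckpoint.put(checkpointId, new ArrayList<>(pendingTags));
        pendingTags.clear();
    }

    /** Called from notifyCheckpointComplete(checkpointId). */
    void ackUpTo(long checkpointId) throws IOException {
        List<Long> tags = tagsByCheckpoint.remove(checkpointId);
        if (tags != null) {
            for (long tag : tags) {
                // multiple=false: ack one by one, as the description requires.
                channel.basicAck(tag, false);
            }
        }
    }
}
{code}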



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-20628) Port RabbitMQ Sources to FLIP-27 API

2024-05-16 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17846971#comment-17846971
 ] 

Ahmed Hamdy commented on FLINK-20628:
-

[~affo] could you help with this one if you have capacity? 

> Port RabbitMQ Sources to FLIP-27 API
> 
>
> Key: FLINK-20628
> URL: https://issues.apache.org/jira/browse/FLINK-20628
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors/ RabbitMQ
>Reporter: Jan Westphal
>Assignee: RocMarshal
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available, 
> stale-assigned
>
> *Structure*
> The new RabbitMQ Source will have three components:
>  * RabbitMQ enumerator that receives one RabbitMQ Channel Config.
>  * RabbitMQ splits that contain the RabbitMQ Channel Config.
>  * RabbitMQ Readers which subscribe to the same RabbitMQ channel and receive 
> the messages (automatically load balanced by RabbitMQ).
> *Checkpointing Enumerators*
> The enumerator only needs to checkpoint the RabbitMQ channel config since the 
> continuous discovery of new unread/unhandled messages is taken care of by the 
> subscribed RabbitMQ readers and RabbitMQ itself.
> *Checkpointing Readers*
> The new RabbitMQ Source needs to ensure that every reader can be checkpointed.
> Since RabbitMQ is non-persistent and cannot be read by offset, a combined 
> usage of checkpoints and message acknowledgments is necessary. Until a 
> received message is checkpointed by a reader, it will stay in an 
> unacknowledged state. As soon as the checkpoint is created, the messages from 
> the last checkpoint can be acknowledged as handled against RabbitMQ and thus 
> will be deleted only then. Messages need to be acknowledged one by one as 
> messages are handled by each SourceReader individually.
> When deserializing the messages we will make use of the implementation in the 
> existing RabbitMQ Source.
> *Message Delivery Guarantees* 
> Unacknowledged messages of a reader will be redelivered by RabbitMQ 
> automatically to other consumers of the same channel if the reader goes down.
>  
> This Source is going to only support at-least-once as this is the default 
> RabbitMQ behavior and thus everything else would require changes to RabbitMQ 
> itself or would impair the idea of parallelizing SourceReaders.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-24298) Refactor Google PubSub sink to use Unified Sink API

2024-05-14 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17846426#comment-17846426
 ] 

Ahmed Hamdy commented on FLINK-24298:
-

[~martijnvisser] I am happy to work on this to complete the migration of sinks 
using the deprecated {{SinkFunction}}.
Could you please assign it to me?

> Refactor Google PubSub sink to use Unified Sink API
> ---
>
> Key: FLINK-24298
> URL: https://issues.apache.org/jira/browse/FLINK-24298
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>
> Refactor the Google PubSub sink to use the Unified Sink API 
> [FLIP-143|https://cwiki.apache.org/confluence/display/FLINK/FLIP-143%3A+Unified+Sink+API]
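
For reference, the target shape is small; a minimal sketch of a unified sink against the current Sink V2 interfaces (which superseded the original FLIP-143 ones), with {{String}} standing in for the PubSub message type:

{code:java}
import org.apache.flink.api.connector.sink2.Sink;
import org.apache.flink.api.connector.sink2.SinkWriter;

import java.io.IOException;

/** Minimal sketch of a unified sink; the PubSub-specific wiring is omitted. */
public class PubSubSinkSketch implements Sink<String> {

    @Override
    public SinkWriter<String> createWriter(InitContext context) throws IOException {
        return new SinkWriter<String>() {
            @Override
            public void write(String element, Context context) {
                // A real implementation would hand the element to a PubSub Publisher.
            }

            @Override
            public void flush(boolean endOfInput) {
                // A real implementation would await outstanding publish futures here.
            }

            @Override
            public void close() {}
        };
    }
}
{code}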



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35322) PubSub Connector Weekly build fails

2024-05-12 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17845682#comment-17845682
 ] 

Ahmed Hamdy commented on FLINK-35322:
-

[~Sergey Nuyanzin]
Yes, let's keep it till after the release, as we are not sure about the fix 
version at the moment.
Also, this is not a blocker for the ongoing release: the weekly tests are only 
triggered from the main branch, so in my opinion there should be no issue for 
the current release. We just need to follow up with a ticket to re-enable 1.19 
tests for 3.1 after the release.

> PubSub Connector Weekly build fails 
> 
>
> Key: FLINK-35322
> URL: https://issues.apache.org/jira/browse/FLINK-35322
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub
>Affects Versions: 3.1.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> Weekly builds for the GCP PubSub connector are failing for 1.19 due to a 
> compilation error in tests.
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8768752932/job/24063472769
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8863605354
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8954270618



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35322) PubSub Connector Weekly build fails

2024-05-09 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17845044#comment-17845044
 ] 

Ahmed Hamdy commented on FLINK-35322:
-

[~snuyanzin] It seems that you have already provided the fix; we should bump 
the version under test once 3.1 is released. 

It would be nice to speed up the vote.

> PubSub Connector Weekly build fails 
> 
>
> Key: FLINK-35322
> URL: https://issues.apache.org/jira/browse/FLINK-35322
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub
>Affects Versions: 3.1.0
>Reporter: Ahmed Hamdy
>Priority: Major
>  Labels: test-stability
>
> Weekly builds for the GCP PubSub connector are failing for 1.19 due to a 
> compilation error in tests.
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8768752932/job/24063472769
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8863605354
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8954270618



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35322) PubSub Connector Weekly build fails

2024-05-09 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-35322:

Labels: test-stability  (was: )

> PubSub Connector Weekly build fails 
> 
>
> Key: FLINK-35322
> URL: https://issues.apache.org/jira/browse/FLINK-35322
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub
>Affects Versions: 3.1.0
>Reporter: Ahmed Hamdy
>Priority: Major
>  Labels: test-stability
>
> Weekly builds for the GCP PubSub connector are failing for 1.19 due to a 
> compilation error in tests.
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8768752932/job/24063472769
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8863605354
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8954270618



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35322) PubSub Connector Weekly build fails

2024-05-09 Thread Ahmed Hamdy (Jira)
Ahmed Hamdy created FLINK-35322:
---

 Summary: PubSub Connector Weekly build fails 
 Key: FLINK-35322
 URL: https://issues.apache.org/jira/browse/FLINK-35322
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Google Cloud PubSub
Affects Versions: 3.1.0
Reporter: Ahmed Hamdy


Weekly builds for the GCP PubSub connector are failing for 1.19 due to a 
compilation error in tests.

https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8768752932/job/24063472769
https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8863605354
https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8954270618




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35320) RabbitMQ Source fails to consume from quorum queues when prefetch count is set

2024-05-09 Thread Ahmed Hamdy (Jira)
Ahmed Hamdy created FLINK-35320:
---

 Summary: RabbitMQ Source fails to consume from quorum queues when 
prefetch count is set
 Key: FLINK-35320
 URL: https://issues.apache.org/jira/browse/FLINK-35320
 Project: Flink
  Issue Type: Bug
  Components: Connectors/ RabbitMQ
Affects Versions: 1.16.3, 1.16.2, 1.16.0
Reporter: Ahmed Hamdy
 Fix For: 3.1.0


h2. Description

{{RMQSource}} currently sets the prefetch count with [global 
QoS|https://github.com/apache/flink-connector-rabbitmq/blob/66e323a3e79befc08ae03f2789a8aa94b343d504/flink-connector-rabbitmq/src/main/java/org/apache/flink/streaming/connectors/rabbitmq/RMQSource.java#L223], 
which is incompatible with [quorum 
queues|https://www.rabbitmq.com/docs/quorum-queues#global-qos].
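
The incompatibility is the {{global}} flag on {{Channel#basicQos}}; a minimal sketch of the two modes (connection setup elided to the defaults):

{code:java}
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class PrefetchSketch {
    public static void main(String[] args) throws Exception {
        Connection connection = new ConnectionFactory().newConnection();
        Channel channel = connection.createChannel();

        // What RMQSource does today: a channel-wide (global=true) prefetch limit,
        // which quorum queues do not support.
        channel.basicQos(100, true);

        // Per-consumer (global=false) prefetch limit, which quorum queues accept.
        channel.basicQos(100, false);
    }
}
{code}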


h2. Consideration

Currently the {{RMQSource}} implements {{SourceFunction}}, which has been 
deprecated since 1.18, and the current RabbitMQ connector is compatible with 
1.16, which is out of support. Another approach would be migrating the source 
to the new Source API for the next connector release.




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-21373) Port RabbitMQ Sink to FLIP-143 API

2024-05-03 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17843173#comment-17843173
 ] 

Ahmed Hamdy edited comment on FLINK-21373 at 5/3/24 9:50 AM:
-

[~martijnvisser] Thanks for assigning it to me. I have already opened a 
[PR|https://github.com/apache/flink-connector-rabbitmq/pull/29]; your review is 
appreciated.


was (Author: JIRAUSER280246):
[~martijnvisser] Thanks for assigning it to me, I have already opened a 
[PR|[GitHub Pull Request 
#29|https://github.com/apache/flink-connector-rabbitmq/pull/29]], your review 
is appreciated

> Port RabbitMQ Sink to FLIP-143 API
> --
>
> Key: FLINK-21373
> URL: https://issues.apache.org/jira/browse/FLINK-21373
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors/ RabbitMQ
>Reporter: Jan Westphal
>Assignee: Ahmed Hamdy
>Priority: Minor
>  Labels: auto-unassigned, pull-request-available
>
> *Structure*
> The unified Sink API provides a Writer, a Committer and a GlobalCommitter. 
> Right now we don’t see the need to use the Committer and GlobalCommitter as 
> the Writer is sufficient to uphold the consistency guarantees. Since we need 
> asynchronous RabbitMQ callbacks to know whether a message was 
> published successfully and have to store unacknowledged messages in the 
> checkpoint, there would be a large bidirectional communication and state 
> exchange overhead between the Writer and the Committer.
> *At-most-once*
> The Writer receives a message from Flink and simply publishes it to RabbitMQ. 
> The current RabbitMQ Sink only provides this mode.
> *At-least-once*
> The objective here is to receive an acknowledgement from RabbitMQ for 
> published messages. Therefore, before publishing a message, we store the 
> message in a Map with the sequence number as its key. If the message is 
> acknowledged by RabbitMQ we can remove it from the Map. If we don’t receive 
> an acknowledgement for a certain amount of time (or a RabbitMQ-specific 
> so-called negative acknowledgement), we will try to resend the message when 
> doing a checkpoint.
> *Exactly-once*
> On checkpointing, we send all messages received from Flink in transaction 
> mode to RabbitMQ. This way, all the messages get sent or are rolled back on 
> failure. All messages that are not sent successfully are written to the 
> checkpoint and retried with the next checkpoint.
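
A minimal sketch of the at-least-once bookkeeping described above, using publisher confirms from the plain RabbitMQ Java client (the class name and the resend hook are illustrative):

{code:java}
import com.rabbitmq.client.Channel;

import java.io.IOException;
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

/** Sketch: track outstanding messages by publish sequence number until confirmed. */
class ConfirmTrackingPublisher {
    private final Channel channel;
    private final ConcurrentNavigableMap<Long, byte[]> outstanding = new ConcurrentSkipListMap<>();

    ConfirmTrackingPublisher(Channel channel) throws IOException {
        this.channel = channel;
        channel.confirmSelect(); // enable publisher confirms on this channel
        channel.addConfirmListener(
                (seqNo, multiple) -> {
                    // ack: drop the confirmed entry (or everything up to seqNo)
                    if (multiple) {
                        outstanding.headMap(seqNo, true).clear();
                    } else {
                        outstanding.remove(seqNo);
                    }
                },
                (seqNo, multiple) -> {
                    // nack: keep the entries; they get resent on the next checkpoint
                });
    }

    void publish(String exchange, String routingKey, byte[] body) throws IOException {
        long seqNo = channel.getNextPublishSeqNo();
        outstanding.put(seqNo, body);
        channel.basicPublish(exchange, routingKey, null, body);
    }

    /** On checkpoint: resend whatever is still unconfirmed. */
    void resendOutstanding(String exchange, String routingKey) throws IOException {
        for (byte[] body : outstanding.values()) {
            channel.basicPublish(exchange, routingKey, null, body);
        }
    }
}
{code}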



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-21373) Port RabbitMQ Sink to FLIP-143 API

2024-05-03 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17843173#comment-17843173
 ] 

Ahmed Hamdy commented on FLINK-21373:
-

[~martijnvisser] Thanks for assigning it to me, I have already opened a 
[PR|[GitHub Pull Request 
#29|https://github.com/apache/flink-connector-rabbitmq/pull/29]], your review 
is appreciated

> Port RabbitMQ Sink to FLIP-143 API
> --
>
> Key: FLINK-21373
> URL: https://issues.apache.org/jira/browse/FLINK-21373
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors/ RabbitMQ
>Reporter: Jan Westphal
>Assignee: Ahmed Hamdy
>Priority: Minor
>  Labels: auto-unassigned, pull-request-available
>
> *Structure*
> The unified Sink API provides a Writer, a Committer and a GlobalCommitter. 
> Right now we don’t see the need to use the Committer and GlobalCommitter as 
> the Writer is sufficient to uphold the consistency guarantees. Since we need 
> asynchronous RabbitMQ callbacks to know whether a message was 
> published successfully and have to store unacknowledged messages in the 
> checkpoint, there would be a large bidirectional communication and state 
> exchange overhead between the Writer and the Committer.
> *At-most-once*
> The Writer receives a message from Flink and simply publishes it to RabbitMQ. 
> The current RabbitMQ Sink only provides this mode.
> *At-least-once*
> The objective here is to receive an acknowledgement from RabbitMQ for 
> published messages. Therefore, before publishing a message, we store the 
> message in a Map with the sequence number as its key. If the message is 
> acknowledged by RabbitMQ we can remove it from the Map. If we don’t receive 
> an acknowledgement for a certain amount of time (or a RabbitMQ-specific 
> so-called negative acknowledgement), we will try to resend the message when 
> doing a checkpoint.
> *Exactly-once*
> On checkpointing, we send all messages received from Flink in transaction 
> mode to RabbitMQ. This way, all the messages get sent or are rolled back on 
> failure. All messages that are not sent successfully are written to the 
> checkpoint and retried with the next checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-21373) Port RabbitMQ Sink to FLIP-143 API

2024-05-03 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17843160#comment-17843160
 ] 

Ahmed Hamdy commented on FLINK-21373:
-

[~martijnvisser] I would love to work on this to close it out!

> Port RabbitMQ Sink to FLIP-143 API
> --
>
> Key: FLINK-21373
> URL: https://issues.apache.org/jira/browse/FLINK-21373
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors/ RabbitMQ
>Reporter: Jan Westphal
>Priority: Minor
>  Labels: auto-unassigned, pull-request-available
> Fix For: 1.12.0
>
>
> *Structure*
> The unified Sink API provides a Writer, a Committer and a GlobalCommitter. 
> Right now we don’t see the need to use the Committer and GlobalCommitter as 
> the Writer is sufficient to uphold the consistency guarantees. Since we need 
> asynchronous RabbitMQ callbacks to know whether a message was 
> published successfully and have to store unacknowledged messages in the 
> checkpoint, there would be a large bidirectional communication and state 
> exchange overhead between the Writer and the Committer.
> *At-most-once*
> The Writer receives a message from Flink and simply publishes it to RabbitMQ. 
> The current RabbitMQ Sink only provides this mode.
> *At-least-once*
> The objective here is to receive an acknowledgement from RabbitMQ for 
> published messages. Therefore, before publishing a message, we store the 
> message in a Map with the sequence number as its key. If the message is 
> acknowledged by RabbitMQ we can remove it from the Map. If we don’t receive 
> an acknowledgement for a certain amount of time (or a RabbitMQ-specific 
> so-called negative acknowledgement), we will try to resend the message when 
> doing a checkpoint.
> *Exactly-once*
> On checkpointing, we send all messages received from Flink in transaction 
> mode to RabbitMQ. This way, all the messages get sent or are rolled back on 
> failure. All messages that are not sent successfully are written to the 
> checkpoint and retried with the next checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34071) Deadlock in AWS Kinesis Data Streams AsyncSink connector

2024-04-29 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17842153#comment-17842153
 ] 

Ahmed Hamdy commented on FLINK-34071:
-

Additionally, the timeout configuration setup is addressed in 
[FLIP-451|https://cwiki.apache.org/confluence/display/FLINK/FLIP-451%3A+Refactor+Async+Sink+API+and+Introduce+request+timeout+configuration]. 
Let's keep the feedback on this part on the discussion thread.

> Deadlock in AWS Kinesis Data Streams AsyncSink connector
> 
>
> Key: FLINK-34071
> URL: https://issues.apache.org/jira/browse/FLINK-34071
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / AWS, Connectors / Kinesis
>Affects Versions: aws-connector-3.0.0, 1.15.4, aws-connector-4.2.0
>Reporter: Aleksandr Pilipenko
>Priority: Major
>
> Sink operator hangs while flushing records, similarly to FLINK-32230. Error 
> observed even when using AWS SDK version that contains fix for async client 
> error handling [https://github.com/aws/aws-sdk-java-v2/pull/4402]
> Thread dump of stuck thread:
> {code:java}
> "sdk-async-response-1-6236" Id=11213 RUNNABLE
> at 
> app//org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$flush$5(AsyncSinkWriter.java:385)
> at 
> app//org.apache.flink.connector.base.sink.writer.AsyncSinkWriter$$Lambda$1778/0x000801141040.accept(Unknown
>  Source)
> at 
> org.apache.flink.connector.kinesis.sink.KinesisStreamsSinkWriter.handleFullyFailedRequest(KinesisStreamsSinkWriter.java:210)
> at 
> org.apache.flink.connector.kinesis.sink.KinesisStreamsSinkWriter.lambda$submitRequestEntries$1(KinesisStreamsSinkWriter.java:184)
> at 
> org.apache.flink.connector.kinesis.sink.KinesisStreamsSinkWriter$$Lambda$1965/0x0008011a0c40.accept(Unknown
>  Source)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.utils.CompletableFutureUtils.lambda$forwardExceptionTo$0(CompletableFutureUtils.java:79)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.utils.CompletableFutureUtils$$Lambda$1925/0x000801181840.accept(Unknown
>  Source)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallMetricCollectionStage.lambda$execute$0(AsyncApiCallMetricCollectionStage.java:56)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallMetricCollectionStage$$Lambda$1961/0x000801191c40.accept(Unknown
>  Source)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallTimeoutTrackingStage.lambda$execute$2(AsyncApiCallTimeoutTrackingStage.java:67)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallTimeoutTrackingStage$$Lambda$1960/0x000801191840.accept(Unknown
>  Source)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
> at 
> 

[jira] [Commented] (FLINK-34071) Deadlock in AWS Kinesis Data Streams AsyncSink connector

2024-04-29 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17842152#comment-17842152
 ] 

Ahmed Hamdy commented on FLINK-34071:
-

[~a.pilipenko] Could we follow up and update Exception classifiers to propagate 
failures of malicious records?
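
For context, a sketch of what such a classifier update could look like, using {{FatalExceptionClassifier}} from flink-connector-base (the exception type below is an illustrative assumption, not the actual classifier to add):

{code:java}
import org.apache.flink.connector.base.sink.throwable.FatalExceptionClassifier;

public class ClassifierSketch {
    // Surface non-retryable per-record failures as fatal instead of re-queueing
    // them forever; IllegalArgumentException is a stand-in for the real
    // record-level exception we would classify.
    static final FatalExceptionClassifier RECORD_FAILURE_CLASSIFIER =
            FatalExceptionClassifier.withRootCauseOfType(
                    IllegalArgumentException.class,
                    err -> new RuntimeException(
                            "Encountered non-retryable record failure.", err));
}
{code}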

> Deadlock in AWS Kinesis Data Streams AsyncSink connector
> 
>
> Key: FLINK-34071
> URL: https://issues.apache.org/jira/browse/FLINK-34071
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / AWS, Connectors / Kinesis
>Affects Versions: aws-connector-3.0.0, 1.15.4, aws-connector-4.2.0
>Reporter: Aleksandr Pilipenko
>Priority: Major
>
> Sink operator hangs while flushing records, similarly to FLINK-32230. Error 
> observed even when using AWS SDK version that contains fix for async client 
> error handling [https://github.com/aws/aws-sdk-java-v2/pull/4402]
> Thread dump of stuck thread:
> {code:java}
> "sdk-async-response-1-6236" Id=11213 RUNNABLE
> at 
> app//org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$flush$5(AsyncSinkWriter.java:385)
> at 
> app//org.apache.flink.connector.base.sink.writer.AsyncSinkWriter$$Lambda$1778/0x000801141040.accept(Unknown
>  Source)
> at 
> org.apache.flink.connector.kinesis.sink.KinesisStreamsSinkWriter.handleFullyFailedRequest(KinesisStreamsSinkWriter.java:210)
> at 
> org.apache.flink.connector.kinesis.sink.KinesisStreamsSinkWriter.lambda$submitRequestEntries$1(KinesisStreamsSinkWriter.java:184)
> at 
> org.apache.flink.connector.kinesis.sink.KinesisStreamsSinkWriter$$Lambda$1965/0x0008011a0c40.accept(Unknown
>  Source)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.utils.CompletableFutureUtils.lambda$forwardExceptionTo$0(CompletableFutureUtils.java:79)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.utils.CompletableFutureUtils$$Lambda$1925/0x000801181840.accept(Unknown
>  Source)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallMetricCollectionStage.lambda$execute$0(AsyncApiCallMetricCollectionStage.java:56)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallMetricCollectionStage$$Lambda$1961/0x000801191c40.accept(Unknown
>  Source)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallTimeoutTrackingStage.lambda$execute$2(AsyncApiCallTimeoutTrackingStage.java:67)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallTimeoutTrackingStage$$Lambda$1960/0x000801191840.accept(Unknown
>  Source)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at 
> java.base@11.0.18/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.utils.CompletableFutureUtils.lambda$forwardExceptionTo$0(CompletableFutureUtils.java:79)
> at 
> 

[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2024-04-16 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17837570#comment-17837570
 ] 

Ahmed Hamdy commented on FLINK-31472:
-

Is there a reason why the issue is not Resolved?

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.18.0, 1.19.0, 1.20.0
>Reporter: Ran Tao
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.20.0
>
>
> When running mvn clean test, this case fails occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
>     
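
The failure above comes down to threading: {{AsyncSinkWriter.flush()}} calls {{MailboxExecutor.yield()}}, which is only legal on the task's mailbox thread, while {{TestProcessingTimeService}} fires the registered timer callback synchronously on the calling thread. A minimal sketch of the safe pattern (the writer/test wiring is simplified and illustrative):

{code:java}
import org.apache.flink.api.common.operators.MailboxExecutor;

public class MailboxFlushSketch {
    // Calling writer.flush() directly from a timer or test thread trips the
    // "Illegal thread detected" check, because flush() may yield() on the mailbox.
    // The safe pattern is to enqueue the work onto the mailbox thread instead:
    static void scheduleFlush(MailboxExecutor mailboxExecutor, Runnable flush) {
        mailboxExecutor.execute(flush::run, "flush AsyncSinkWriter");
    }
}
{code}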

[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2024-04-15 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17837234#comment-17837234
 ] 

Ahmed Hamdy commented on FLINK-31472:
-

Thanks [~dannycranmer] 
I created follow-ups for 1.18 and 1.19:
[https://github.com/apache/flink/pull/24668]

[https://github.com/apache/flink/pull/24669]

 

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.18.0, 1.19.0, 1.20.0
>Reporter: Ran Tao
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.20.0
>
>
> When running mvn clean test, this case fails occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
>         at 
> 

[jira] [Commented] (FLINK-35022) Add TypeInformed Element Converter for DynamoDbSink

2024-04-15 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17837140#comment-17837140
 ] 

Ahmed Hamdy commented on FLINK-35022:
-

Hi [~dannycranmer] 
Thanks for the reply. I agree most POJOs could just be converted using the bean 
converter. 
To be honest, the main motivation came from PyFlink, where type-informed data 
types are commonly used. The BeanConverter couldn't be used in PyFlink after 
adding DDB PyFlink support in 
https://issues.apache.org/jira/browse/FLINK-32007 (actually, this is a blocker 
for that task), since the defined POJOs are Python classes, not Java ones. This 
is resolved in DataStream functions using type-informed functions, as in the 
[map function 
here|https://github.com/apache/flink/blob/f74dc57561a058696bd2bd42593f862a9b490474/flink-python/pyflink/datastream/data_stream.py#L273].

> Add TypeInformed Element Converter for DynamoDbSink
> ---
>
> Key: FLINK-35022
> URL: https://issues.apache.org/jira/browse/FLINK-35022
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / DynamoDB
>Affects Versions: aws-connector-4.3.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available
>
> h2. Context
> {{DynamoDbSink}}, as an extension of {{AsyncSinkBase}}, depends on 
> {{org.apache.flink.connector.base.sink.writer.ElementConverter}} to convert 
> Flink stream objects to DynamoDb write requests, where an item is represented 
> as {{Map<String, AttributeValue>}}.
> {{AttributeValue}} is the wrapper for the object DynamoDb understands, in a 
> format with type identification properties, as in
> {"M": {"Name": {"S": "Joe"}, "Age": {"N": "35"}}}.
> Since TypeInformation is already natively supported in Flink, many 
> implementations of the DynamoDb ElementConverter are just boilerplate. 
> For example 
> {code:title="Simple POJO Element Conversion"}
>  public class Order {
> String id;
> int quantity;
> double total;
> }
> {code}
> The implementation of the converter must be 
> {code:title="Simple POJO DDB Element Converter"}
> public static class SimplePojoElementConverter implements 
> ElementConverter<Order, DynamoDbWriteRequest> {
> @Override
> public DynamoDbWriteRequest apply(Order order, SinkWriter.Context 
> context) {
> Map<String, AttributeValue> itemMap = new HashMap<>();
> itemMap.put("id", AttributeValue.builder().s(order.id).build());
> itemMap.put("quantity", 
> AttributeValue.builder().n(String.valueOf(order.quantity)).build());
> itemMap.put("total", 
> AttributeValue.builder().n(String.valueOf(order.total)).build());
> return DynamoDbWriteRequest.builder()
> .setType(DynamoDbWriteRequestType.PUT)
> .setItem(itemMap)
> .build();
> }
> @Override
> public void open(Sink.InitContext context) {
> 
> }
> }
> {code}
> While this might not be too much work, it is a fairly common case in Flink, 
> and the implementation requires fair knowledge of the DDB model from new 
> users.
> h2. Proposal 
> Introduce {{DynamoDbTypeInformedElementConverter}} as follows:
> {code:title="TypeInformedElementConverter"} 
> public class DynamoDbTypeInformedElementConverter<T> implements 
> ElementConverter<T, DynamoDbWriteRequest> {
> DynamoDbTypeInformedElementConverter(CompositeType<T> typeInfo);
> public DynamoDbWriteRequest convertElement(T input) {
> switch (this.typeInfo) {
> case BasicTypeInfo.STRING_TYPE_INFO: return input -> 
> AttributeValue.fromS(input.toString())
> case BasicTypeInfo.SHORT_TYPE_INFO: 
> case BasicTypeInfo.INTEGER_TYPE_INFO: return input -> 
> AttributeValue.fromN(input.toString())
> case TupleTypeInfo: return input -> AttributeValue.fromL(convertTuple(input))
> ...
> }
> }
> }
> // User Code
> public static void main(String[] args) {
>   DynamoDbTypeInformedElementConverter elementConverter = new 
> DynamoDbTypeInformedElementConverter(TypeInformation.of(Order.class));
> DdbSink.setElementConverter(elementConverter); 
> }
> {code}
> We will start by supporting all POJO, basic, Tuple, and Array typeInfo, which 
> should be enough to cover all DDB-supported types 
> (s, n, bool, b, ss, ns, bs, bools, m, l).
> 1- 
> https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/dynamodb/model/AttributeValue.html
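
For comparison, the plain-POJO case is already covered by the annotation-based {{DynamoDbBeanElementConverter}} shipped with the connector; a usage sketch (the table name and builder details are illustrative):

{code:java}
import org.apache.flink.connector.dynamodb.sink.DynamoDbBeanElementConverter;
import org.apache.flink.connector.dynamodb.sink.DynamoDbSink;

import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbBean;

public class BeanConverterSketch {

    @DynamoDbBean
    public static class Order {
        private String id;
        private int quantity;
        private double total;

        public String getId() { return id; }
        public void setId(String id) { this.id = id; }
        public int getQuantity() { return quantity; }
        public void setQuantity(int quantity) { this.quantity = quantity; }
        public double getTotal() { return total; }
        public void setTotal(double total) { this.total = total; }
    }

    public static void main(String[] args) {
        DynamoDbSink<Order> sink =
                DynamoDbSink.<Order>builder()
                        .setElementConverter(new DynamoDbBeanElementConverter<>(Order.class))
                        .setTableName("orders") // illustrative table name
                        .build();
        // The sink can then be attached via stream.sinkTo(sink).
    }
}
{code}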



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35064) Flink sql connector pulsar/hive com.fasterxml.jackson.annotation.JsonFormat$Value conflict

2024-04-11 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836041#comment-17836041
 ] 

Ahmed Hamdy commented on FLINK-35064:
-

AFAIK, we don't specifically do them unless there is a reason for it (CVEs, 
deprecations, or part of a new change).

> Flink sql connector pulsar/hive 
> com.fasterxml.jackson.annotation.JsonFormat$Value conflict
> --
>
> Key: FLINK-35064
> URL: https://issues.apache.org/jira/browse/FLINK-35064
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Connectors / Pulsar
>Affects Versions: 1.16.1
>Reporter: elon_X
>Priority: Major
>  Labels: pull-request-available
>
> When I compile and package {{flink-sql-connector-pulsar}} & 
> {{flink-sql-connector-hive}}, and then put these two jar files into the 
> Flink lib directory, I execute the following SQL statement through 
> {{bin/sql-client.sh}}:
>  
> {code:java}
> // code placeholder
> CREATE TABLE
> pulsar_table (
> content string,
> proc_time AS PROCTIME ()
> )
> WITH
> (
> 'connector' = 'pulsar',
> 'topics' = 'persistent://xxx',
> 'service-url' = 'pulsar://xxx',
> 'source.subscription-name' = 'xxx',
> 'source.start.message-id' = 'latest',
> 'format' = 'csv',
> 'pulsar.client.authPluginClassName' = 
> 'org.apache.pulsar.client.impl.auth.AuthenticationToken',
> 'pulsar.client.authParams' = 'token:xxx'
> );
>  
> select * from pulsar_table; {code}
> The task error exception stack is as follows:
>  
> {code:java}
> Caused by: java.lang.NoSuchMethodError: 
> com.fasterxml.jackson.annotation.JsonFormat$Value.empty()Lcom/fasterxml/jackson/annotation/JsonFormat$Value;
> at 
> org.apache.pulsar.shade.com.fasterxml.jackson.databind.cfg.MapperConfig.(MapperConfig.java:56)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.pulsar.shade.com.fasterxml.jackson.databind.ObjectMapper.(ObjectMapper.java:660)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.pulsar.shade.com.fasterxml.jackson.databind.ObjectMapper.(ObjectMapper.java:576)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.pulsar.common.util.ObjectMapperFactory.createObjectMapperInstance(ObjectMapperFactory.java:151)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.pulsar.common.util.ObjectMapperFactory.(ObjectMapperFactory.java:142)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.pulsar.client.impl.conf.ConfigurationDataUtils.create(ConfigurationDataUtils.java:35)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.pulsar.client.impl.conf.ConfigurationDataUtils.(ConfigurationDataUtils.java:43)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.pulsar.client.impl.ClientBuilderImpl.loadConf(ClientBuilderImpl.java:77)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.flink.connector.pulsar.common.config.PulsarClientFactory.createClient(PulsarClientFactory.java:105)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.flink.connector.pulsar.source.enumerator.PulsarSourceEnumerator.(PulsarSourceEnumerator.java:95)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.flink.connector.pulsar.source.enumerator.PulsarSourceEnumerator.(PulsarSourceEnumerator.java:76)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.flink.connector.pulsar.source.PulsarSource.createEnumerator(PulsarSource.java:144)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinator.start(SourceCoordinator.java:213)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
> {code}
>  
> The exception shows a conflict with 
> {{com.fasterxml.jackson.annotation.JsonFormat$Value}}. I investigated and 
> found that {{flink-sql-connector-pulsar}} and {{flink-sql-connector-hive}} 
> depend on different versions, leading to this conflict.
> {code:java}
> // flink-sql-connector-pulsar pom.xml
> <dependency>
>     <groupId>com.fasterxml.jackson</groupId>
>     <artifactId>jackson-bom</artifactId>
>     <type>pom</type>
>     <scope>import</scope>
>     <version>2.13.4.20221013</version>
> </dependency>
> 
> // flink-sql-connector-hive pom.xml
> <dependency>
>     <groupId>com.fasterxml.jackson</groupId>
>     <artifactId>jackson-bom</artifactId>
>     <type>pom</type>
>     <scope>import</scope>
>     <version>2.15.3</version>
> </dependency>
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35064) Flink sql connector pulsar/hive com.fasterxml.jackson.annotation.JsonFormat$Value conflict

2024-04-10 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835711#comment-17835711
 ] 

Ahmed Hamdy commented on FLINK-35064:
-

While we try to maintain dependency upgrades across connectors, this 
divergence is expected since each connector still evolves independently.
I would use [Maven 
relocation|https://maven.apache.org/plugins/maven-shade-plugin/examples/class-relocation.html] 
to resolve the issue.
If you want to open another ticket to upgrade Pulsar, this can be done, but it 
will only take effect in the next minor version release of the connector.
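
For example, relocating Jackson inside one of the shaded jars would look roughly like the following shade-plugin snippet (the {{shadedPattern}} prefix is an arbitrary illustrative choice):

{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.fasterxml.jackson</pattern>
            <!-- arbitrary prefix; it just has to be unique to this jar -->
            <shadedPattern>shaded.hive.com.fasterxml.jackson</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}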

> Flink sql connector pulsar/hive 
> com.fasterxml.jackson.annotation.JsonFormat$Value conflict
> --
>
> Key: FLINK-35064
> URL: https://issues.apache.org/jira/browse/FLINK-35064
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Connectors / Pulsar
>Affects Versions: 1.16.1
>Reporter: elon_X
>Priority: Major
>
> When I compile and package {{flink-sql-connector-pulsar}} & 
> {{flink-sql-connector-hive}}, and then put these two jar files into the 
> Flink lib directory, I execute the following SQL statement through 
> {{bin/sql-client.sh}}:
>  
> {code:java}
> // code placeholder
> CREATE TABLE
> pulsar_table (
> content string,
> proc_time AS PROCTIME ()
> )
> WITH
> (
> 'connector' = 'pulsar',
> 'topics' = 'persistent://xxx',
> 'service-url' = 'pulsar://xxx',
> 'source.subscription-name' = 'xxx',
> 'source.start.message-id' = 'latest',
> 'format' = 'csv',
> 'pulsar.client.authPluginClassName' = 
> 'org.apache.pulsar.client.impl.auth.AuthenticationToken',
> 'pulsar.client.authParams' = 'token:xxx'
> );
>  
> select * from pulsar_table; {code}
> The task error exception stack is as follows:
>  
> {code:java}
> Caused by: java.lang.NoSuchMethodError: 
> com.fasterxml.jackson.annotation.JsonFormat$Value.empty()Lcom/fasterxml/jackson/annotation/JsonFormat$Value;
> at 
> org.apache.pulsar.shade.com.fasterxml.jackson.databind.cfg.MapperConfig.(MapperConfig.java:56)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.pulsar.shade.com.fasterxml.jackson.databind.ObjectMapper.(ObjectMapper.java:660)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.pulsar.shade.com.fasterxml.jackson.databind.ObjectMapper.(ObjectMapper.java:576)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.pulsar.common.util.ObjectMapperFactory.createObjectMapperInstance(ObjectMapperFactory.java:151)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.pulsar.common.util.ObjectMapperFactory.(ObjectMapperFactory.java:142)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.pulsar.client.impl.conf.ConfigurationDataUtils.create(ConfigurationDataUtils.java:35)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.pulsar.client.impl.conf.ConfigurationDataUtils.(ConfigurationDataUtils.java:43)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.pulsar.client.impl.ClientBuilderImpl.loadConf(ClientBuilderImpl.java:77)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.flink.connector.pulsar.common.config.PulsarClientFactory.createClient(PulsarClientFactory.java:105)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.flink.connector.pulsar.source.enumerator.PulsarSourceEnumerator.(PulsarSourceEnumerator.java:95)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.flink.connector.pulsar.source.enumerator.PulsarSourceEnumerator.(PulsarSourceEnumerator.java:76)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.flink.connector.pulsar.source.PulsarSource.createEnumerator(PulsarSource.java:144)
>  ~[flink-sql-connector-pulsar-4.0-SNAPSHOT.jar:4.0-SNAPSHOT]at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinator.start(SourceCoordinator.java:213)
>  ~[flink-dist_2.12-1.16.1.jar:1.16.1]
> {code}
>  
> The exception shows a conflict with 
> {{com.fasterxml.jackson.annotation.JsonFormat$Value}}. I investigated and 
> found that {{flink-sql-connector-pulsar}} and {{flink-sql-connector-hive}} 
> depend on different versions, leading to this conflict.
> {code:java}
> // flink-sql-connector-pulsar pom.xml
> <dependency>
>     <groupId>com.fasterxml.jackson</groupId>
>     <artifactId>jackson-bom</artifactId>
>     <type>pom</type>
>     <scope>import</scope>
>     <version>2.13.4.20221013</version>
> </dependency>
> 
> // flink-sql-connector-hive pom.xml
> <dependency>
>     <groupId>com.fasterxml.jackson</groupId>
>     <artifactId>jackson-bom</artifactId>
>     <type>pom</type>
>     <scope>import</scope>
>     <version>2.15.3</version>
> </dependency>
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-35050) Remove Lazy Initialization of DynamoDbBeanElementConverter

2024-04-08 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy resolved FLINK-35050.
-
Resolution: Duplicate

> Remove Lazy Initialization of DynamoDbBeanElementConverter
> --
>
> Key: FLINK-35050
> URL: https://issues.apache.org/jira/browse/FLINK-35050
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / DynamoDB
>Affects Versions: aws-connector-4.3.0
>Reporter: Ahmed Hamdy
>Priority: Minor
>
> h2. Description
> {{DynamoDbBeanElementConverter}} implements {{ElementConverter}}, which now 
> supports an open method as of FLINK-29938; we need to remove the lazy 
> initialization given that it is now unblocked.
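
Concretely, the change is to build the bean schema eagerly in {{open()}} instead of behind a per-element null check; a rough sketch (field and class names are illustrative):

{code:java}
import org.apache.flink.api.connector.sink2.Sink;
import org.apache.flink.api.connector.sink2.SinkWriter;
import org.apache.flink.connector.base.sink.writer.ElementConverter;
import org.apache.flink.connector.dynamodb.sink.DynamoDbWriteRequest;
import org.apache.flink.connector.dynamodb.sink.DynamoDbWriteRequestType;

import software.amazon.awssdk.enhanced.dynamodb.TableSchema;
import software.amazon.awssdk.enhanced.dynamodb.mapper.BeanTableSchema;

/** Sketch: eager initialization in open() instead of a lazy null-check per element. */
class EagerBeanConverter<T> implements ElementConverter<T, DynamoDbWriteRequest> {
    private final Class<T> beanClass;
    private transient BeanTableSchema<T> tableSchema;

    EagerBeanConverter(Class<T> beanClass) {
        this.beanClass = beanClass;
    }

    @Override
    public void open(Sink.InitContext context) {
        // Previously created lazily on the first apply() call.
        tableSchema = TableSchema.fromBean(beanClass);
    }

    @Override
    public DynamoDbWriteRequest apply(T element, SinkWriter.Context context) {
        return DynamoDbWriteRequest.builder()
                .setType(DynamoDbWriteRequestType.PUT)
                .setItem(tableSchema.itemToMap(element, true))
                .build();
    }
}
{code}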



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35050) Remove Lazy Initialization of DynamoDbBeanElementConverter

2024-04-08 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-35050:

Labels:   (was: pull-request-available)

> Remove Lazy Initialization of DynamoDbBeanElementConverter
> --
>
> Key: FLINK-35050
> URL: https://issues.apache.org/jira/browse/FLINK-35050
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / DynamoDB
>Affects Versions: aws-connector-4.3.0
>Reporter: Ahmed Hamdy
>Priority: Minor
>
> h2. Description
> {{DynamoDbBeanElementConverter}} implements {{ElementConverter}}, which now 
> supports an open method as of FLINK-29938; we need to remove the lazy 
> initialization given that it is now unblocked.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35022) Add TypeInformed Element Converter for DynamoDbSink

2024-04-08 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-35022:

Priority: Major  (was: Minor)

> Add TypeInformed Element Converter for DynamoDbSink
> ---
>
> Key: FLINK-35022
> URL: https://issues.apache.org/jira/browse/FLINK-35022
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / DynamoDB
>Affects Versions: aws-connector-4.3.0
>Reporter: Ahmed Hamdy
>Priority: Major
>
> h2. Context
> {{DynamoDbSink}}, as an extension of {{AsyncSinkBase}}, depends on 
> {{org.apache.flink.connector.base.sink.writer.ElementConverter}} to convert 
> Flink stream objects to DynamoDb write requests, where an item is represented 
> as {{Map<String, AttributeValue>}}.
> {{AttributeValue}} is the wrapper for the object DynamoDb understands, in a 
> format with type identification properties, as in
> {"M": {"Name": {"S": "Joe"}, "Age": {"N": "35"}}}.
> Since TypeInformation is already natively supported in Flink, many 
> implementations of the DynamoDb ElementConverter are just boilerplate. 
> For example 
> {code:title="Simple POJO Element Conversion"}
>  public class Order {
> String id;
> int quantity;
> double total;
> }
> {code}
> The implementation of the converter must be 
> {code:title="Simple POJO DDB Element Converter"}
> public static class SimplePojoElementConverter implements 
> ElementConverter<Order, DynamoDbWriteRequest> {
> @Override
> public DynamoDbWriteRequest apply(Order order, SinkWriter.Context 
> context) {
> Map<String, AttributeValue> itemMap = new HashMap<>();
> itemMap.put("id", AttributeValue.builder().s(order.id).build());
> itemMap.put("quantity", 
> AttributeValue.builder().n(String.valueOf(order.quantity)).build());
> itemMap.put("total", 
> AttributeValue.builder().n(String.valueOf(order.total)).build());
> return DynamoDbWriteRequest.builder()
> .setType(DynamoDbWriteRequestType.PUT)
> .setItem(itemMap)
> .build();
> }
> @Override
> public void open(Sink.InitContext context) {
> 
> }
> }
> {code}
> While this might not be too much work, it is a fairly common case in Flink, 
> and the implementation requires fair knowledge of the DDB model from new 
> users.
> h2. Proposal 
> Introduce {{DynamoDbTypeInformedElementConverter}} as follows:
> {code:title="TypeInformedElementConverter"} 
> public class DynamoDbTypeInformedElementConverter<T> implements 
> ElementConverter<T, DynamoDbWriteRequest> {
> DynamoDbTypeInformedElementConverter(CompositeType<T> typeInfo);
> public DynamoDbWriteRequest convertElement(T input) {
> switch (this.typeInfo) {
> case BasicTypeInfo.STRING_TYPE_INFO: return input -> 
> AttributeValue.fromS(input.toString())
> case BasicTypeInfo.SHORT_TYPE_INFO: 
> case BasicTypeInfo.INTEGER_TYPE_INFO: return input -> 
> AttributeValue.fromN(input.toString())
> case TupleTypeInfo: return input -> AttributeValue.fromL(convertTuple(input))
> ...
> }
> }
> }
> // User Code
> public static void main(String[] args) {
>   DynamoDbTypeInformedElementConverter elementConverter = new 
> DynamoDbTypeInformedElementConverter(TypeInformation.of(Order.class));
> DdbSink.setElementConverter(elementConverter); 
> }
> {code}
> We will start by supporting all Pojo/ basic/ Tuple/ Array typeInfo which 
> should be enough to cover all DDB supported types 
> (s,n,bool,b,ss,ns,bs,bools,m,l)
> 1- 
> https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/dynamodb/model/AttributeValue.html



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30388) Add support for ElementConverted open() method for KDS/KDF/DDB

2024-04-08 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834870#comment-17834870
 ] 

Ahmed Hamdy commented on FLINK-30388:
-

[~danny.cranmer] [~liangtl] it would be great if you could review this tiny 
[PR|https://github.com/apache/flink-connector-aws/pull/135].
 

> Add support for ElementConverted open() method for KDS/KDF/DDB
> --
>
> Key: FLINK-30388
> URL: https://issues.apache.org/jira/browse/FLINK-30388
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / DynamoDB, Connectors / Firehose, Connectors 
> / Kinesis
>Reporter: Danny Cranmer
>Priority: Major
>
> FLINK-29938 added support for an {{open()}} method in Async Sink 
> ElementConverter. Once flink-connector-aws upgrades to Flink 1.17, we should 
> implement this method. It was originally implemented 
> [here|https://github.com/apache/flink/pull/21265] but was yanked during the 
> [sync|FLINK-30384].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (FLINK-35050) Remove Lazy Initialization of DynamoDbBeanElementConverter

2024-04-08 Thread Ahmed Hamdy (Jira)


[ https://issues.apache.org/jira/browse/FLINK-35050 ]


Ahmed Hamdy deleted comment on FLINK-35050:
-

was (Author: JIRAUSER280246):
[~danny.cranmer] [~liangtl] it would be great if you could review this tiny 
[PR|https://github.com/apache/flink-connector-aws/pull/135]


> Remove Lazy Initialization of DynamoDbBeanElementConverter
> --
>
> Key: FLINK-35050
> URL: https://issues.apache.org/jira/browse/FLINK-35050
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / DynamoDB
>Affects Versions: aws-connector-4.3.0
>Reporter: Ahmed Hamdy
>Priority: Minor
>  Labels: pull-request-available
>
> h2. Description
> {{DynamoDbBeanElementConverter}} implements {{ElementConverter}}, which now 
> supports an {{open()}} method as of FLINK-29938; we need to remove the lazy 
> initialization given that it is now unblocked.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35050) Remove Lazy Initialization of DynamoDbBeanElementConverter

2024-04-08 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834862#comment-17834862
 ] 

Ahmed Hamdy commented on FLINK-35050:
-

Apparently this is a duplicate of 
https://issues.apache.org/jira/browse/FLINK-30388.
We can close this one; I will update the PR to address the original issue.

> Remove Lazy Initialization of DynamoDbBeanElementConverter
> --
>
> Key: FLINK-35050
> URL: https://issues.apache.org/jira/browse/FLINK-35050
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / DynamoDB
>Affects Versions: aws-connector-4.3.0
>Reporter: Ahmed Hamdy
>Priority: Minor
>  Labels: pull-request-available
>
> h2. Description
> {{DynamoDbBeanElementConverter}} implements {{ElementConverter}}, which now 
> supports an {{open()}} method as of FLINK-29938; we need to remove the lazy 
> initialization given that it is now unblocked.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-35050) Remove Lazy Initialization of DynamoDbBeanElementConverter

2024-04-08 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-35050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-35050:

Description: 
h2. Description

{{DynamoDbBeanElementConverter}} implements {{ElementConverter}}, which now 
supports an {{open()}} method as of FLINK-29938; we need to remove the lazy 
initialization given that it is now unblocked.

  was:
h2. Description

{{ DynamoDbBeanElementConverter }} implements {{ ElementConverter }}, which now 
supports an {{open()}} method as of FLINK-29938; we need to remove the lazy 
initialization given that it is now unblocked.


> Remove Lazy Initialization of DynamoDbBeanElementConverter
> --
>
> Key: FLINK-35050
> URL: https://issues.apache.org/jira/browse/FLINK-35050
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / DynamoDB
>Affects Versions: aws-connector-4.3.0
>Reporter: Ahmed Hamdy
>Priority: Minor
>  Labels: pull-request-available
>
> h2. Description
> {{DynamoDbBeanElementConverter}} implements {{ElementConverter}}, which now 
> supports an {{open()}} method as of FLINK-29938; we need to remove the lazy 
> initialization given that it is now unblocked.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35050) Remove Lazy Initialization of DynamoDbBeanElementConverter

2024-04-08 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834853#comment-17834853
 ] 

Ahmed Hamdy commented on FLINK-35050:
-

[~danny.cranmer] [~liangtl] it would be great if you could review this tiny 
[PR|https://github.com/apache/flink-connector-aws/pull/135].


> Remove Lazy Initialization of DynamoDbBeanElementConverter
> --
>
> Key: FLINK-35050
> URL: https://issues.apache.org/jira/browse/FLINK-35050
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / DynamoDB
>Affects Versions: aws-connector-4.3.0
>Reporter: Ahmed Hamdy
>Priority: Minor
>  Labels: pull-request-available
>
> h2. Description
> {{DynamoDbBeanElementConverter}} implements {{ElementConverter}}, which now 
> supports an {{open()}} method as of FLINK-29938; we need to remove the lazy 
> initialization given that it is now unblocked.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35050) Remove Lazy Initialization of DynamoDbBeanElementConverter

2024-04-08 Thread Ahmed Hamdy (Jira)
Ahmed Hamdy created FLINK-35050:
---

 Summary: Remove Lazy Initialization of DynamoDbBeanElementConverter
 Key: FLINK-35050
 URL: https://issues.apache.org/jira/browse/FLINK-35050
 Project: Flink
  Issue Type: Technical Debt
  Components: Connectors / DynamoDB
Affects Versions: aws-connector-4.3.0
Reporter: Ahmed Hamdy


h2. Description

{{ DynamoDbBeanElementConverter }} implements {{ ElementConverter }}, which now 
supports an {{open()}} method as of FLINK-29938; we need to remove the lazy 
initialization given that it is now unblocked.
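
For illustration, a minimal sketch of the intended change, assuming the 
converter builds a bean {{TableSchema}} eagerly once {{open()}} is called; the 
field and method names here are illustrative, not the actual connector code:

{code:title="Sketch: eager initialization in open()"}
import org.apache.flink.api.connector.sink2.Sink;
import org.apache.flink.api.connector.sink2.SinkWriter;
import org.apache.flink.connector.base.sink.writer.ElementConverter;
import org.apache.flink.connector.dynamodb.sink.DynamoDbWriteRequest;
import org.apache.flink.connector.dynamodb.sink.DynamoDbWriteRequestType;

import software.amazon.awssdk.enhanced.dynamodb.TableSchema;

public class DynamoDbBeanElementConverter<T>
        implements ElementConverter<T, DynamoDbWriteRequest> {

    private final Class<T> beanClass;
    private transient TableSchema<T> schema; // previously created lazily on first apply()

    public DynamoDbBeanElementConverter(Class<T> beanClass) {
        this.beanClass = beanClass;
    }

    @Override
    public void open(Sink.InitContext context) {
        // Eager initialization, possible since FLINK-29938 added open() to ElementConverter.
        this.schema = TableSchema.fromBean(beanClass);
    }

    @Override
    public DynamoDbWriteRequest apply(T element, SinkWriter.Context context) {
        return DynamoDbWriteRequest.builder()
                .setType(DynamoDbWriteRequestType.PUT)
                .setItem(schema.itemToMap(element, false)) // ignoreNulls = false
                .build();
    }
}
{code}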



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-35022) Add TypeInformed Element Converter for DynamoDbSink

2024-04-06 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-35022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834533#comment-17834533
 ] 

Ahmed Hamdy commented on FLINK-35022:
-

[~danny.cranmer] WDYT about this proposal? 
Additionally, could you please assign it to me?

> Add TypeInformed Element Converter for DynamoDbSink
> ---
>
> Key: FLINK-35022
> URL: https://issues.apache.org/jira/browse/FLINK-35022
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / DynamoDB
>Affects Versions: aws-connector-4.3.0
>Reporter: Ahmed Hamdy
>Priority: Minor
>
> h2. Context
> {{DynamoDbSink}}, as an extension of {{AsyncSinkBase}}, depends on 
> {{org.apache.flink.connector.base.sink.writer.ElementConverter}} to convert 
> Flink stream objects to DynamoDb write requests, where an item is represented 
> as {{Map<String, AttributeValue>}}.
> {{AttributeValue}} is the wrapper for an object in the format DynamoDb 
> understands, with type-identification properties, as in
> {"M": {"Name": {"S": "Joe"}, "Age": {"N": "35"}}}.
> Since TypeInformation is already natively supported in Flink, many 
> implementations of the DynamoDb {{ElementConverter}} are just boilerplate. 
> For example:
> {code:title="Simple POJO Element Conversion"}
> public class Order {
>     String id;
>     int quantity;
>     double total;
> }
> {code}
> The implementation of the converter must then be:
> {code:title="Simple POJO DDB Element Converter"}
> public static class SimplePojoElementConverter
>         implements ElementConverter<Order, DynamoDbWriteRequest> {
>     @Override
>     public DynamoDbWriteRequest apply(Order order, SinkWriter.Context context) {
>         Map<String, AttributeValue> itemMap = new HashMap<>();
>         itemMap.put("id", AttributeValue.builder().s(order.id).build());
>         itemMap.put("quantity",
>                 AttributeValue.builder().n(String.valueOf(order.quantity)).build());
>         itemMap.put("total",
>                 AttributeValue.builder().n(String.valueOf(order.total)).build());
>         return DynamoDbWriteRequest.builder()
>                 .setType(DynamoDbWriteRequestType.PUT)
>                 .setItem(itemMap)
>                 .build();
>     }
>     @Override
>     public void open(Sink.InitContext context) {
>     }
> }
> {code}
> While this is not too much work, it is a fairly common case in Flink, and the 
> implementation requires fair knowledge of the DDB model from new users.
> h2. Proposal 
> Introduce {{DynamoDbTypeInformedElementConverter}} as follows:
> {code:title="TypeInformedElementConverter"}
> // pseudocode sketch: dispatch on the provided TypeInformation
> public class DynamoDbTypeInformedElementConverter<T>
>         implements ElementConverter<T, DynamoDbWriteRequest> {
>     DynamoDbTypeInformedElementConverter(CompositeType<T> typeInfo);
>     public DynamoDbWriteRequest convertElement(T input) {
>         switch (this.typeInfo) {
>             case BasicTypeInfo.STRING_TYPE_INFO:
>                 return input -> AttributeValue.fromS(input.toString());
>             case BasicTypeInfo.SHORT_TYPE_INFO:
>             case BasicTypeInfo.INTEGER_TYPE_INFO:
>                 return input -> AttributeValue.fromN(input.toString());
>             case TupleTypeInfo:
>                 return input -> AttributeValue.fromL(convertTuple(input));
>             ...
>         }
>     }
> }
> // User Code
> public static void main(String[] args) {
>     DynamoDbTypeInformedElementConverter<Order> elementConverter =
>             new DynamoDbTypeInformedElementConverter<>(
>                     (CompositeType<Order>) TypeInformation.of(Order.class));
>     DdbSink.setElementConverter(elementConverter);
> }
> {code}
> We will start by supporting all POJO, basic, Tuple, and Array typeInfo, which 
> should be enough to cover all DDB-supported types 
> (s, n, bool, b, ss, ns, bs, bools, m, l).
> 1- 
> https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/dynamodb/model/AttributeValue.html



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-35022) Add TypeInformed Element Converter for DynamoDbSink

2024-04-06 Thread Ahmed Hamdy (Jira)
Ahmed Hamdy created FLINK-35022:
---

 Summary: Add TypeInformed Element Converter for DynamoDbSink
 Key: FLINK-35022
 URL: https://issues.apache.org/jira/browse/FLINK-35022
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / DynamoDB
Affects Versions: aws-connector-4.3.0
Reporter: Ahmed Hamdy


h2. Context
{{DynamoDbSink}}, as an extension of {{AsyncSinkBase}}, depends on 
{{org.apache.flink.connector.base.sink.writer.ElementConverter}} to convert 
Flink stream objects to DynamoDb write requests, where an item is represented 
as {{Map<String, AttributeValue>}}.

{{AttributeValue}} is the wrapper for an object in the format DynamoDb 
understands, with type-identification properties, as in
{"M": {"Name": {"S": "Joe"}, "Age": {"N": "35"}}}.

Since TypeInformation is already natively supported in Flink, many 
implementations of the DynamoDb {{ElementConverter}} are just boilerplate. 
For example:
{code:title="Simple POJO Element Conversion"}
public class Order {
    String id;
    int quantity;
    double total;
}
{code}

The implementation of the converter must then be:

{code:title="Simple POJO DDB Element Converter"}
public static class SimplePojoElementConverter
        implements ElementConverter<Order, DynamoDbWriteRequest> {

    @Override
    public DynamoDbWriteRequest apply(Order order, SinkWriter.Context context) {
        Map<String, AttributeValue> itemMap = new HashMap<>();
        itemMap.put("id", AttributeValue.builder().s(order.id).build());
        itemMap.put("quantity",
                AttributeValue.builder().n(String.valueOf(order.quantity)).build());
        itemMap.put("total",
                AttributeValue.builder().n(String.valueOf(order.total)).build());
        return DynamoDbWriteRequest.builder()
                .setType(DynamoDbWriteRequestType.PUT)
                .setItem(itemMap)
                .build();
    }

    @Override
    public void open(Sink.InitContext context) {
    }
}
{code}

While this is not too much work, it is a fairly common case in Flink, and the 
implementation requires fair knowledge of the DDB model from new users.

h2. Proposal 

Introduce {{DynamoDbTypeInformedElementConverter}} as follows:

{code:title="TypeInformedElementConverter"}
// pseudocode sketch: dispatch on the provided TypeInformation
public class DynamoDbTypeInformedElementConverter<T>
        implements ElementConverter<T, DynamoDbWriteRequest> {
    DynamoDbTypeInformedElementConverter(CompositeType<T> typeInfo);
    public DynamoDbWriteRequest convertElement(T input) {
        switch (this.typeInfo) {
            case BasicTypeInfo.STRING_TYPE_INFO:
                return input -> AttributeValue.fromS(input.toString());
            case BasicTypeInfo.SHORT_TYPE_INFO:
            case BasicTypeInfo.INTEGER_TYPE_INFO:
                return input -> AttributeValue.fromN(input.toString());
            case TupleTypeInfo:
                return input -> AttributeValue.fromL(convertTuple(input));
            ...
        }
    }
}

// User Code
public static void main(String[] args) {
    DynamoDbTypeInformedElementConverter<Order> elementConverter =
            new DynamoDbTypeInformedElementConverter<>(
                    (CompositeType<Order>) TypeInformation.of(Order.class));
    DdbSink.setElementConverter(elementConverter);
}
{code}

We will start by supporting all POJO, basic, Tuple, and Array typeInfo, which 
should be enough to cover all DDB-supported types (s, n, bool, b, ss, ns, bs, 
bools, m, l).

1- 
https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/dynamodb/model/AttributeValue.html
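
To make the proposal more concrete, below is a minimal sketch for the POJO case 
only (String and numeric fields), dispatching on {{PojoTypeInfo}} via 
reflection; the class shape and names are assumptions, not the final design:

{code:title="Sketch: type-informed conversion for a POJO (String and numeric fields only)"}
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.NumericTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.connector.sink2.SinkWriter;
import org.apache.flink.api.java.typeutils.PojoTypeInfo;
import org.apache.flink.connector.base.sink.writer.ElementConverter;
import org.apache.flink.connector.dynamodb.sink.DynamoDbWriteRequest;
import org.apache.flink.connector.dynamodb.sink.DynamoDbWriteRequestType;

import software.amazon.awssdk.services.dynamodb.model.AttributeValue;

public class DynamoDbTypeInformedElementConverter<T>
        implements ElementConverter<T, DynamoDbWriteRequest> {

    private final PojoTypeInfo<T> typeInfo;

    public DynamoDbTypeInformedElementConverter(PojoTypeInfo<T> typeInfo) {
        this.typeInfo = typeInfo;
    }

    @Override
    public DynamoDbWriteRequest apply(T element, SinkWriter.Context context) {
        Map<String, AttributeValue> itemMap = new HashMap<>();
        for (int i = 0; i < typeInfo.getArity(); i++) {
            Field field = typeInfo.getPojoFieldAt(i).getField();
            field.setAccessible(true);
            try {
                itemMap.put(
                        field.getName(),
                        toAttributeValue(
                                typeInfo.getPojoFieldAt(i).getTypeInformation(),
                                field.get(element)));
            } catch (IllegalAccessException e) {
                throw new IllegalStateException("Cannot read POJO field " + field.getName(), e);
            }
        }
        return DynamoDbWriteRequest.builder()
                .setType(DynamoDbWriteRequestType.PUT)
                .setItem(itemMap)
                .build();
    }

    // Only the s and n DDB types are handled here; the full proposal would also
    // cover bool, b, the set types, m, and l.
    private AttributeValue toAttributeValue(TypeInformation<?> fieldType, Object value) {
        if (BasicTypeInfo.STRING_TYPE_INFO.equals(fieldType)) {
            return AttributeValue.fromS(value.toString());
        }
        if (fieldType instanceof NumericTypeInfo) {
            return AttributeValue.fromN(value.toString());
        }
        throw new UnsupportedOperationException("Unsupported field type: " + fieldType);
    }
}
{code}

Usage would then reduce to passing 
{{(PojoTypeInfo<Order>) TypeInformation.of(Order.class)}} to the constructor.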



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34044) Kinesis Sink Cannot be Created via TableDescriptor

2024-03-13 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825972#comment-17825972
 ] 

Ahmed Hamdy commented on FLINK-34044:
-

[~tilman151] Thanks for raising this and for the thorough bug report; I was 
able to reproduce it easily.

[~dannycranmer] I have submitted a PR for it; could you please take a look?
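
For context, the fix direction can be sketched in a couple of lines 
(hypothetical names; the actual PR may differ): copy the resolved options into 
a mutable map before removing the deprecated keys.

{code:java}
// Sketch only. ResolvedCatalogTable.getOptions() can return an unmodifiable
// map, so copy defensively before mutating:
Map<String, String> options = new HashMap<>(catalogTable.getOptions());
options.remove(DEPRECATED_PRODUCER_OPTION); // hypothetical key constant; now safe to remove
{code}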

> Kinesis Sink Cannot be Created via TableDescriptor
> --
>
> Key: FLINK-34044
> URL: https://issues.apache.org/jira/browse/FLINK-34044
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / AWS
>Affects Versions: aws-connector-4.2.0
>Reporter: Tilman Krokotsch
>Priority: Major
>  Labels: pull-request-available
>
> When trying to create a Kinesis Stream Sink in Table API via a 
> TableDescriptor I get an error:
> {code:java}
> Caused by: java.lang.UnsupportedOperationException
>     at 
> java.base/java.util.Collections$UnmodifiableMap.remove(Collections.java:1460)
>     at 
> org.apache.flink.connector.kinesis.table.util.KinesisStreamsConnectorOptionsUtils$KinesisProducerOptionsMapper.removeMappedOptions(KinesisStreamsConnectorOptionsUtils.java:249)
>     at 
> org.apache.flink.connector.kinesis.table.util.KinesisStreamsConnectorOptionsUtils$KinesisProducerOptionsMapper.mapDeprecatedClientOptions(KinesisStreamsConnectorOptionsUtils.java:158)
>     at 
> org.apache.flink.connector.kinesis.table.util.KinesisStreamsConnectorOptionsUtils.(KinesisStreamsConnectorOptionsUtils.java:90)
>     at 
> org.apache.flink.connector.kinesis.table.KinesisDynamicTableSinkFactory.createDynamicTableSink(KinesisDynamicTableSinkFactory.java:61)
>     at 
> org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:267)
>     ... 20 more
> {code}
> Here is a minimum reproducing example with Flink-1.17.2 and 
> flink-connector-kinesis-4.2.0:
> {code:java}
> public class Job {
>   public static void main(String[] args) throws Exception {
> // create data stream environment
> StreamExecutionEnvironment sEnv = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> sEnv.setRuntimeMode(RuntimeExecutionMode.STREAMING);
> StreamTableEnvironment tEnv = StreamTableEnvironment.create(sEnv);
> Schema a = Schema.newBuilder().column("a", DataTypes.STRING()).build();
> tEnv.createTemporaryTable(
> "exampleTable", 
> TableDescriptor.forConnector("datagen").schema(a).build());
> TableDescriptor descriptor =
> TableDescriptor.forConnector("kinesis")
> .schema(a)
> .format("json")
> .option("stream", "abc")
> .option("aws.region", "eu-central-1")
> .build();
> tEnv.createTemporaryTable("sinkTable", descriptor);
> tEnv.from("exampleTable").executeInsert("sinkTable"); // error occurs here
>   }
> } {code}
> From my investigation, the error is triggered by the `ResolvedCatalogTable` 
> used when re-mapping the deprecated Kinesis options in 
> `KinesisProducerOptionsMapper`. The `getOptions` method of the table returns 
> an `UnmodifiableMap` which is not mutable.
> If the sink table is created via SQL, the error does not occur:
> {code:java}
> tEnv.executeSql("CREATE TABLE sinkTable " + descriptor.toString());
> {code}
> because `ResolvedCatalogTable.getOptions` returns a regular `HashMap`.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2024-03-12 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825567#comment-17825567
 ] 

Ahmed Hamdy commented on FLINK-31472:
-

[~lincoln.86xy] [~mapohl] 
Hello, could you please review the 
[PR|https://github.com/apache/flink/pull/24481]?
Let me add some context.

h2. Why is the test failing?

The flakiness arises from setting processing time within the test to trigger 
the timer flush of the writer. This caused concurrent thread access to the 
mailbox, which caused the failure. The problem was within the test, not the 
AsyncSinkWriter.

h2. Why is it intermittent?

Because we are also writing batches of records, there was a race condition 
between the batch-size trigger and the timer trigger. In other words, we used 
to add a new batch and then set the time to trigger the flush; if the 
batch-size trigger had already flushed the buffer, the timer callback would be 
discarded safely.

h2. Why do I believe this refactor should fix the test?

Because I have removed the time setting from the test itself; the size of the 
batches sent should be enough to trigger the flush needed for the test.

h2. What could go wrong?

There should be no newly introduced issues here. Since the batch size is 
unchanged, we expect enough flushes triggered by batch size alone to stabilize 
the rate-limiting value as expected.

h2. How did I verify the fix?

I have run a run-till-failure sampler for some time and it has not reported any 
failures. I am aware the local setup is different from CI, but the test should 
be less sensitive to delays now, so I expect we are green to go.
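
To make the refactor concrete, a hedged sketch of the idea with hypothetical 
{{writer}}, {{record}}, and {{writerContext}} helpers (not the actual test 
code):

{code:java}
// Instead of advancing processing time from the test thread (which races
// with the mailbox thread), write enough records that the deterministic
// batch-size trigger performs every flush:
for (int i = 0; i < batchSize * expectedFlushes; i++) {
    writer.write(record(i), writerContext);
}
// Each full batch flushes on the mailbox thread itself, so no timer
// callback ever fires from a foreign thread.
{code}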

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.18.0, 1.19.0, 1.20.0
>Reporter: Ran Tao
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.20.0
>
>
> when run mvn clean test, this case failed occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> 

[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2024-03-11 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825309#comment-17825309
 ] 

Ahmed Hamdy commented on FLINK-31472:
-

[~lincoln.86xy] Yes, I will publish fixes for 1.19 and 1.18 after we merge it 
for master (1.20).

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.18.0, 1.19.0, 1.20.0
>Reporter: Ran Tao
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.20.0
>
>
> when run mvn clean test, this case failed occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
>         at 
> 

[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2024-03-11 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825202#comment-17825202
 ] 

Ahmed Hamdy commented on FLINK-31472:
-

[~mapohl] Sorry, I was out of office. Now that I am back, I am happy to take a 
look rather than disable the test. I will post updates on the ticket.

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.18.0, 1.19.0, 1.20.0
>Reporter: Ran Tao
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> when run mvn clean test, this case failed occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
>         at 
> 

[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2024-01-30 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1781#comment-1781
 ] 

Ahmed Hamdy commented on FLINK-31472:
-

> Sure, if we are certain that this is a test issue and not an issue that was 
> introduced with 1.19?!

The stack trace shows that the timer is triggered by the test itself, so it is 
unlikely to be an issue in the sink writer. 
I will make sure to double-check the impact as well. 

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.18.0, 1.19.0
>Reporter: Ran Tao
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> when run mvn clean test, this case failed occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
>     

[jira] [Comment Edited] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2024-01-23 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17809902#comment-17809902
 ] 

Ahmed Hamdy edited comment on FLINK-31472 at 1/23/24 11:23 AM:
---

[~mapohl]

Could we merge this [PR|https://github.com/apache/flink/pull/24175] while the 
investigation is ongoing? I don't want to block the pipeline further.


was (Author: JIRAUSER280246):
[~mapohl]

Could we merge this [PR](https://github.com/apache/flink/pull/24175) while the 
investigation is ongoing? I don't want to block the pipeline further.

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.18.0, 1.19.0
>Reporter: Ran Tao
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> when run mvn clean test, this case failed occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> 

[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2024-01-23 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17809902#comment-17809902
 ] 

Ahmed Hamdy commented on FLINK-31472:
-

[~mapohl]

Could we merge this [PR](https://github.com/apache/flink/pull/24175) while the 
investigation is ongoing? I don't want to block the pipeline further.

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.18.0, 1.19.0
>Reporter: Ran Tao
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> when run mvn clean test, this case failed occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
>         at 
> 

[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2024-01-16 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807293#comment-17807293
 ] 

Ahmed Hamdy commented on FLINK-31472:
-

[~mapohl] Unfortunately not so far; I have taken a quick look but will try to 
spend more time on it this week. 
I will update the ticket by the end of the week if I haven't reached a fix, so 
we can disable the test temporarily given the current failure frequency. 


> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.18.0, 1.19.0
>Reporter: Ran Tao
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> when run mvn clean test, this case failed occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
>         at 
> 

[jira] [Updated] (FLINK-27756) Fix intermittently failing test in AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds

2024-01-09 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-27756:

Affects Version/s: (was: 1.15.0)

> Fix intermittently failing test in 
> AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds
> --
>
> Key: FLINK-27756
> URL: https://issues.apache.org/jira/browse/FLINK-27756
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.19.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> h2. Motivation
>  - One of the integration tests ({{checkLoggedSendTimesAreWithinBounds}}) of 
> {{AsyncSinkWriterTest}} has been reported to fail intermittently on build 
> pipeline causing blocking of new changes.
>  - Reporting build is [linked 
> |https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36009=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-27756) Fix intermittently failing test in AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds

2024-01-09 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804776#comment-17804776
 ] 

Ahmed Hamdy commented on FLINK-27756:
-

Hi [~Sergey Nuyanzin]
The test originally assumed an upper bound for send times as configured by the 
delay; however, it discarded the delay induced by the mailbox, which is 
apparently higher on CI than locally. I rewrote the test to fix this omission.
It is hard to reproduce locally since the mailbox delay seems much lighter, but 
I resampled the test and was able to detect a rare instance of the failure 
prior to the fix. 
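
For illustration, a sketch of the corrected assertion under hypothetical names; 
the upper bound now includes an allowance for mailbox scheduling delay rather 
than only the configured send delay:

{code:java}
// Sketch only -- not the actual test code.
long sendTime = completionTimestamp - enqueueTimestamp;
assertThat(sendTime)
        .isGreaterThanOrEqualTo(configuredDelay)                       // lower bound unchanged
        .isLessThanOrEqualTo(configuredDelay + mailboxDelayAllowance); // previously configuredDelay only
{code}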

> Fix intermittently failing test in 
> AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds
> --
>
> Key: FLINK-27756
> URL: https://issues.apache.org/jira/browse/FLINK-27756
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.15.0, 1.17.0, 1.19.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> h2. Motivation
>  - One of the integration tests ({{checkLoggedSendTimesAreWithinBounds}}) of 
> {{AsyncSinkWriterTest}} has been reported to fail intermittently on build 
> pipeline causing blocking of new changes.
>  - Reporting build is [linked 
> |https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36009=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2024-01-08 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804232#comment-17804232
 ] 

Ahmed Hamdy commented on FLINK-31472:
-

That's weird, [~Sergey Nuyanzin]; I will have a look today.

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.19.0
>Reporter: Ran Tao
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> when run mvn clean test, this case failed occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
>         at 
> 

[jira] [Commented] (FLINK-27756) Fix intermittently failing test in AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds

2024-01-01 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17801560#comment-17801560
 ] 

Ahmed Hamdy commented on FLINK-27756:
-

[~Sergey Nuyanzin] I have refactored the test; please take a look at the 
[PR|https://github.com/apache/flink/pull/24014].
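
For illustration, a minimal sketch of the general technique for de-flaking such 
time-based assertions (not necessarily what the PR does): inject a controllable 
{{java.time.Clock}} so the asserted send times are deterministic instead of 
being sampled from the wall clock on a possibly slow build machine.

{code:java}
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

// Illustrative only: a writer that records send times from an injected
// clock, so a test can assert exact values instead of wall-clock bounds.
final class RecordingWriter {
    private final Clock clock;
    private long lastSendTimeMillis;

    RecordingWriter(Clock clock) {
        this.clock = clock;
    }

    void send(String payload) {
        // Take the timestamp from the injected clock rather than
        // System.currentTimeMillis(), which races with slow CI machines.
        lastSendTimeMillis = clock.millis();
        // ... hand the payload to the async client here ...
    }

    long lastSendTimeMillis() {
        return lastSendTimeMillis;
    }
}

public class DeterministicSendTimeExample {
    public static void main(String[] args) {
        // The test fully controls time, so the asserted value is exact.
        Clock fixed = Clock.fixed(Instant.ofEpochMilli(1_000L), ZoneOffset.UTC);
        RecordingWriter writer = new RecordingWriter(fixed);
        writer.send("payload");
        if (writer.lastSendTimeMillis() != 1_000L) {
            throw new AssertionError("send time should be deterministic");
        }
        System.out.println("send time = " + writer.lastSendTimeMillis());
    }
}
{code}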

> Fix intermittently failing test in 
> AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds
> --
>
> Key: FLINK-27756
> URL: https://issues.apache.org/jira/browse/FLINK-27756
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.15.0, 1.17.0, 1.19.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.16.0
>
>
> h2. Motivation
>  - One of the integration tests ({{checkLoggedSendTimesAreWithinBounds}}) of 
> {{AsyncSinkWriterTest}} has been reported to fail intermittently on the build 
> pipeline, blocking new changes.
>  - Reporting build is [linked 
> |https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36009=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-27756) Fix intermittently failing test in AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds

2024-01-01 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-27756:

Fix Version/s: 1.19.0
   (was: 1.16.0)

> Fix intermittently failing test in 
> AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds
> --
>
> Key: FLINK-27756
> URL: https://issues.apache.org/jira/browse/FLINK-27756
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.15.0, 1.17.0, 1.19.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> h2. Motivation
>  - One of the integration tests ({{checkLoggedSendTimesAreWithinBounds}}) of 
> {{AsyncSinkWriterTest}} has been reported to fail intermittently on the build 
> pipeline, blocking new changes.
>  - Reporting build is [linked 
> |https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36009=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-27756) Fix intermittently failing test in AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds

2023-12-19 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-27756:

Labels: test-stability  (was: pull-request-available stale-assigned 
test-stability)

> Fix intermittently failing test in 
> AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds
> --
>
> Key: FLINK-27756
> URL: https://issues.apache.org/jira/browse/FLINK-27756
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.15.0, 1.17.0, 1.19.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.16.0
>
>
> h2. Motivation
>  - One of the integration tests ({{checkLoggedSendTimesAreWithinBounds}}) of 
> {{AsyncSinkWriterTest}} has been reported to fail intermittently on the build 
> pipeline, blocking new changes.
>  - Reporting build is [linked 
> |https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36009=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2023-12-18 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17798256#comment-17798256
 ] 

Ahmed Hamdy commented on FLINK-31472:
-

[~Sergey Nuyanzin] I have pushed a PR with the fix; it would be great if you 
could review it.
I am going to rebase the fix onto 1.17 and 1.18 once this is merged.
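
For context, a self-contained toy that reproduces the failure mode and the 
general shape of the fix (names here are illustrative, not the actual patch): 
work triggered on a timer thread has to be handed over to the mailbox thread 
instead of invoking the flush inline.

{code:java}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative only: a toy "mailbox" that, like TaskMailboxImpl, rejects
// actions executed from a foreign thread, plus a timer callback that
// avoids the IllegalStateException by enqueueing the flush.
public class MailboxHandoffExample {
    private final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<>();
    private final Thread mailboxThread;

    MailboxHandoffExample() {
        mailboxThread = new Thread(() -> {
            try {
                while (true) {
                    mailbox.take().run(); // all actions run on this thread
                }
            } catch (InterruptedException ignored) {
                // shutdown
            }
        }, "mailbox-thread");
        mailboxThread.start();
    }

    void flush() {
        // Mirrors TaskMailboxImpl#checkIsMailboxThread: flushing may yield
        // back into the mailbox loop, so it is only legal on this thread.
        if (Thread.currentThread() != mailboxThread) {
            throw new IllegalStateException("Illegal thread detected.");
        }
        System.out.println("flushed on " + Thread.currentThread().getName());
    }

    void onProcessingTime() {
        // Calling flush() directly here (from the timer thread) reproduces
        // the test failure; handing it to the mailbox does not.
        mailbox.add(this::flush);
    }

    public static void main(String[] args) throws Exception {
        MailboxHandoffExample example = new MailboxHandoffExample();
        example.onProcessingTime(); // simulated timer firing on the main thread
        Thread.sleep(200);
        example.mailboxThread.interrupt();
    }
}
{code}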

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.19.0
>Reporter: Ran Tao
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> When running mvn clean test, this case fails occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
>         at 
> 

[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2023-12-18 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17798101#comment-17798101
 ] 

Ahmed Hamdy commented on FLINK-31472:
-

Hi [~Sergey Nuyanzin], could you assign me the ticket? I will take a look ASAP.

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.19.0
>Reporter: Ran Tao
>Priority: Major
>  Labels: test-stability
>
> When running mvn clean test, this case fails occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
>         at 
> 

[jira] [Commented] (FLINK-33181) Table using `kinesis` connector can not be used for both read & write operations if it's defined with unsupported sink property

2023-10-16 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17775570#comment-17775570
 ] 

Ahmed Hamdy commented on FLINK-33181:
-

Hi [~khanhvu], could you please describe the use case in more detail? It feels 
like an anti-pattern to use the same stream as both source and sink. The Kinesis 
Table API source and sink implementations were intentionally separated post 1.15.
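
If both directions are genuinely needed, one possible workaround (a sketch only, 
with placeholder stream/region values, assuming the Table API and the Kinesis 
connector are on the classpath) is to declare two tables over the same stream 
and keep the {{scan.*}} options on the table that is used as a source:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Illustrative only: stream/region values are placeholders.
public class SeparateSourceAndSinkTables {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Source table: scan.* options are valid here.
        tEnv.executeSql(
                "CREATE TABLE kds_source (some_string STRING, some_int BIGINT) WITH ("
                        + " 'connector' = 'kinesis',"
                        + " 'stream' = 'ExampleInputStream',"
                        + " 'aws.region' = 'us-east-1',"
                        + " 'scan.stream.initpos' = 'LATEST',"
                        + " 'format' = 'csv')");

        // Sink table over the same stream: no scan.* options, so the sink
        // factory's validation passes on INSERT.
        tEnv.executeSql(
                "CREATE TABLE kds_sink (some_string STRING, some_int BIGINT) WITH ("
                        + " 'connector' = 'kinesis',"
                        + " 'stream' = 'ExampleInputStream',"
                        + " 'aws.region' = 'us-east-1',"
                        + " 'format' = 'csv')");

        // Reads go through kds_source, writes through kds_sink.
    }
}
{code}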

> Table using `kinesis` connector can not be used for both read & write 
> operations if it's defined with unsupported sink property
> ---
>
> Key: FLINK-33181
> URL: https://issues.apache.org/jira/browse/FLINK-33181
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis, Table SQL / Runtime
>Affects Versions: 1.15.4, aws-connector-4.1.0
>Reporter: Khanh Vu
>Assignee: Khanh Vu
>Priority: Major
>  Labels: pull-request-available
> Fix For: aws-connector-4.2.0
>
>
> First, I define a table which uses `kinesis` connector with an unsupported 
> property for sink, e.g. `scan.stream.initpos`:
> {code:sql}
> %flink.ssql(type=update)
> -- Create input
> DROP TABLE IF EXISTS `kds_input`;
> CREATE TABLE `kds_input` (
> `some_string` STRING,
> `some_int` BIGINT,
> `time` AS PROCTIME()
> ) WITH (
> 'connector' = 'kinesis',
> 'stream' = 'ExampleInputStream',
> 'aws.region' = 'us-east-1',
> 'scan.stream.initpos' = 'LATEST',
> 'format' = 'csv'
> );
> {code}
> I can read from my table (kds_input) without any issue, but it will throw 
> exception if I try to write to the table:
> {code:sql}
> %flink.ssql(type=update)
> -- Use to generate data in the input table
> DROP TABLE IF EXISTS connector_cve_datagen;
> CREATE TABLE connector_cve_datagen(
> `some_string` STRING,
> `some_int` BIGINT
> ) WITH (
> 'connector' = 'datagen',
> 'rows-per-second' = '1',
> 'fields.some_string.length' = '2');
> INSERT INTO kds_input SELECT some_string, some_int from connector_cve_datagen
> {code}
> Exception observed:
> {code:java}
> Caused by: org.apache.flink.table.api.ValidationException: Unsupported 
> options found for 'kinesis'.
> Unsupported options:
> scan.stream.initpos
> Supported options:
> aws.region
> connector
> csv.allow-comments
> csv.array-element-delimiter
> csv.disable-quote-character
> csv.escape-character
> csv.field-delimiter
> csv.ignore-parse-errors
> csv.null-literal
> csv.quote-character
> format
> property-version
> sink.batch.max-size
> sink.fail-on-error
> sink.flush-buffer.size
> sink.flush-buffer.timeout
> sink.partitioner
> sink.partitioner-field-delimiter
> sink.producer.collection-max-count (deprecated)
> sink.producer.collection-max-size (deprecated)
> sink.producer.fail-on-error (deprecated)
> sink.producer.record-max-buffered-time (deprecated)
> sink.requests.max-buffered
> sink.requests.max-inflight
> stream
> at 
> org.apache.flink.table.factories.FactoryUtil.validateUnconsumedKeys(FactoryUtil.java:624)
> at 
> org.apache.flink.table.factories.FactoryUtil$FactoryHelper.validate(FactoryUtil.java:914)
> at 
> org.apache.flink.table.factories.FactoryUtil$TableFactoryHelper.validate(FactoryUtil.java:978)
> at 
> org.apache.flink.table.factories.FactoryUtil$FactoryHelper.validateExcept(FactoryUtil.java:938)
> at 
> org.apache.flink.table.factories.FactoryUtil$TableFactoryHelper.validateExcept(FactoryUtil.java:978)
> at 
> org.apache.flink.connector.kinesis.table.KinesisDynamicTableSinkFactory.createDynamicTableSink(KinesisDynamicTableSinkFactory.java:65)
> at 
> org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:259)
> ... 36 more
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29991) KinesisFirehoseSinkTest#firehoseSinkFailsWhenUnableToConnectToRemoteService failed

2023-08-15 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17754502#comment-17754502
 ] 

Ahmed Hamdy commented on FLINK-29991:
-

I wasn't able to reproduce the issue; do we have any other occurrences?

> KinesisFirehoseSinkTest#firehoseSinkFailsWhenUnableToConnectToRemoteService 
> failed 
> ---
>
> Key: FLINK-29991
> URL: https://issues.apache.org/jira/browse/FLINK-29991
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.15.2
>Reporter: Martijn Visser
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: stale-assigned, test-stability
>
> {code:java}
> Nov 10 10:22:53 [ERROR] 
> org.apache.flink.connector.firehose.sink.KinesisFirehoseSinkTest.firehoseSinkFailsWhenUnableToConnectToRemoteService
>   Time elapsed: 7.394 s  <<< FAILURE!
> Nov 10 10:22:53 java.lang.AssertionError: 
> Nov 10 10:22:53 
> Nov 10 10:22:53 Expecting throwable message:
> Nov 10 10:22:53   "An OperatorEvent from an OperatorCoordinator to a task was 
> lost. Triggering task failover to ensure consistency. Event: 
> '[NoMoreSplitEvent]', targetTask: Source: Sequence Source -> Map -> Map -> 
> Sink: Writer (15/32) - execution #0"
> Nov 10 10:22:53 to contain:
> Nov 10 10:22:53   "Received an UnknownHostException when attempting to 
> interact with a service."
> Nov 10 10:22:53 but did not.
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43017=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d=44513



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-27756) Fix intermittently failing test in AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds

2023-07-11 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17741989#comment-17741989
 ] 

Ahmed Hamdy commented on FLINK-27756:
-

[~Sergey Nuyanzin] thanks for reopening; I will try to take a look at it this 
week.

> Fix intermittently failing test in 
> AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds
> --
>
> Key: FLINK-27756
> URL: https://issues.apache.org/jira/browse/FLINK-27756
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.15.0, 1.17.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.16.0
>
>
> h2. Motivation
>  - One of the integration tests ({{checkLoggedSendTimesAreWithinBounds}}) of 
> {{AsyncSinkWriterTest}} has been reported to fail intermittently on the build 
> pipeline, blocking new changes.
>  - Reporting build is [linked 
> |https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36009=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32229) Implement metrics and logging for Initial implementation

2023-07-03 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17739599#comment-17739599
 ] 

Ahmed Hamdy commented on FLINK-32229:
-

I would love to work on this JIRA

> Implement metrics and logging for Initial implementation
> 
>
> Key: FLINK-32229
> URL: https://issues.apache.org/jira/browse/FLINK-32229
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Hong Liang Teoh
>Priority: Major
>
> Add/Ensure that the Kinesis-specific metrics MillisBehindLatest/numRecordsIn 
> are published.
> More metrics here: 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-33%3A+Standardize+Connector+Metrics]
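
For reference, a minimal sketch of publishing such a metric (assuming 
flink-metrics-core on the classpath; the tracker class and its update path are 
hypothetical, and the exact metric scope/name in the final implementation may 
differ):

{code:java}
import org.apache.flink.metrics.Gauge;
import org.apache.flink.metrics.MetricGroup;

// Illustrative only: exposes millisBehindLatest as a Gauge on a reader's
// metric group; the surrounding reader plumbing is hypothetical.
public class MillisBehindLatestMetric {
    private volatile long millisBehindLatest = -1L; // -1 until first fetch

    public void register(MetricGroup readerMetricGroup) {
        readerMetricGroup.gauge(
                "millisBehindLatest", (Gauge<Long>) () -> millisBehindLatest);
    }

    // Called from the fetch loop with the value reported by the service.
    public void update(long reportedMillisBehindLatest) {
        this.millisBehindLatest = reportedMillisBehindLatest;
    }
}
{code}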



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32230) Deadlock in AWS Kinesis Data Streams AsyncSink connector

2023-06-01 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17728452#comment-17728452
 ] 

Ahmed Hamdy commented on FLINK-32230:
-

Hi [~antoniovespoli] thanks for reporting the issue.
I will be looking into it as early as next week. As you mentioned, this form of 
deadlock could be caused by silent failures of the SDK client when submitting 
the records.
While I understand this is a hard case to reproduce, it would be great if you 
could update the ticket with any information observed in operation.
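
As a general defensive pattern for this class of hangs (a sketch only, not the 
actual fix), a request future that never completes can be surfaced as a 
retriable failure with a timeout guard:

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Illustrative only: putRecords() stands in for an SDK call whose future
// is silently lost; the timeout turns the hang into a visible error.
public class TimeoutGuardExample {
    static CompletableFuture<String> putRecords() {
        return new CompletableFuture<>(); // never completes: simulated hang
    }

    public static void main(String[] args) {
        putRecords()
                .orTimeout(2, TimeUnit.SECONDS) // fail instead of hanging forever
                .whenComplete((response, error) -> {
                    if (error != null) {
                        // A real sink would re-queue the batch for retry here.
                        System.out.println("request failed, retrying: " + error);
                    } else {
                        System.out.println("response: " + response);
                    }
                })
                .exceptionally(e -> null) // demo: swallow after logging
                .join();
    }
}
{code}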

> Deadlock in AWS Kinesis Data Streams AsyncSink connector
> 
>
> Key: FLINK-32230
> URL: https://issues.apache.org/jira/browse/FLINK-32230
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / AWS
>Affects Versions: 1.15.4, 1.16.2, 1.17.1
>Reporter: Antonio Vespoli
>Priority: Major
> Fix For: aws-connector-3.1.0, aws-connector-4.2.0
>
>
> Connector calls to AWS Kinesis Data Streams can hang indefinitely without 
> making any progress.
> We suspect the root cause to be related to the SDK's handling of exceptions, 
> similar to what was observed in FLINK-31675.
> We identified this deadlock on applications running on AWS Kinesis Data 
> Analytics using the AWS Kinesis Data Streams AsyncSink (with AWS SDK version 
> 2.20.32 as per FLINK-31675). The deadlock scenario is still the same as 
> described in FLINK-31675. However, the Netty content-length exception does 
> not appear when using the updated SDK version.
> This issue only occurs for applications and streams in the AWS regions 
> _ap-northeast-3_ and {_}us-gov-east-1{_}. We did not observe this issue in 
> any other AWS region.
> The issue happens sporadically and unpredictably. Given its nature, we do 
> not have instructions for reproducing it.
> Example of flame-graphs observed when the issue occurs:
> {code:java}
> root
> java.lang.Thread.run:829
> org.apache.flink.runtime.taskmanager.Task.run:568
> org.apache.flink.runtime.taskmanager.Task.doRun:746
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke:932
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring:953
> org.apache.flink.runtime.taskmanager.Task$$Lambda$1253/0x000800ecbc40.run:-1
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke:753
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop:804
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop:203
> org.apache.flink.streaming.runtime.tasks.StreamTask$$Lambda$907/0x000800bf7840.runDefaultAction:-1
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput:519
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput:65
> org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext:110
> org.apache.flink.streaming.runtime.io.checkpointing.CheckpointedInputGate.pollNext:159
> org.apache.flink.streaming.runtime.io.checkpointing.CheckpointedInputGate.handleEvent:181
> org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler.processBarrier:231
> org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler.markCheckpointAlignedAndTransformState:262
> org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler$$Lambda$1586/0x0008012c5c40.apply:-1
> org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler.lambda$processBarrier$2:234
> org.apache.flink.streaming.runtime.io.checkpointing.AbstractAlignedBarrierHandlerState.barrierReceived:66
> org.apache.flink.streaming.runtime.io.checkpointing.AbstractAlignedBarrierHandlerState.triggerGlobalCheckpoint:74
> org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler$ControllerImpl.triggerGlobalCheckpoint:493
> org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler.access$100:64
> org.apache.flink.streaming.runtime.io.checkpointing.SingleCheckpointBarrierHandler.triggerCheckpoint:287
> org.apache.flink.streaming.runtime.io.checkpointing.CheckpointBarrierHandler.notifyCheckpoint:147
> org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpointOnBarrier:1198
> org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint:1241
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing:50
> org.apache.flink.streaming.runtime.tasks.StreamTask$$Lambda$1453/0x00080128e840.run:-1
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$performCheckpoint$12:1253
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl.checkpointState:300
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.prepareSnapshotPreBarrier:89
> 

[jira] [Commented] (FLINK-32063) AWS CI mvn compile fails to cast objects to parent type.

2023-05-12 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17722072#comment-17722072
 ] 

Ahmed Hamdy commented on FLINK-32063:
-

So apparently the CI patches the changes onto main before running tests.
The issue was due to an un-rebased change from main.
Even though this is an unconventional way of running CI, this is not an issue.
Please close as "not an issue".

> AWS CI mvn compile fails to cast objects to parent type.
> 
>
> Key: FLINK-32063
> URL: https://issues.apache.org/jira/browse/FLINK-32063
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / AWS, Tests
>Reporter: Ahmed Hamdy
>Priority: Minor
>  Labels: test-stability
>
> h2. Description
> AWS Connectors CI fails to cast {{TestSinkInitContext}} into base type 
> {{InitContext}}.
> - Failure
> https://github.com/apache/flink-connector-aws/actions/runs/4924790308/jobs/8841458606?pr=70
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32063) AWS CI mvn compile fails to cast objects to parent type.

2023-05-11 Thread Ahmed Hamdy (Jira)
Ahmed Hamdy created FLINK-32063:
---

 Summary: AWS CI mvn compile fails to cast objects to parent type.
 Key: FLINK-32063
 URL: https://issues.apache.org/jira/browse/FLINK-32063
 Project: Flink
  Issue Type: Bug
  Components: Connectors / AWS, Tests
Reporter: Ahmed Hamdy


h2. Description

AWS Connectors CI fails to cast {{TestSinkInitContext}} into base type 
{{InitContext}}.

- Failure
https://github.com/apache/flink-connector-aws/actions/runs/4924790308/jobs/8841458606?pr=70
 





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32007) Implement Python Wrappers for DynamoDB Connector

2023-05-04 Thread Ahmed Hamdy (Jira)
Ahmed Hamdy created FLINK-32007:
---

 Summary: Implement Python Wrappers for DynamoDB Connector
 Key: FLINK-32007
 URL: https://issues.apache.org/jira/browse/FLINK-32007
 Project: Flink
  Issue Type: New Feature
  Components: API / Python, Connectors / DynamoDB
Reporter: Ahmed Hamdy
 Fix For: 1.18.0


Implement Python API Wrappers for DynamoDB Sink



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31998) Flink Operator Deadlock on run job Failure

2023-05-04 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-31998:

Description: 
h2. Description

The FlinkOperator Reconciler goes into a deadlock situation where it never 
updates the Session job to DEPLOYED/ROLLED_BACK if {{deploy}} fails.
Attached is a sequence diagram of the issue, where the FlinkSessionJob is stuck 
in UPGRADING indefinitely.
h2. proposed fix

The Reconciler should roll back the CR changes if {{deploy()}} fails while 
{{reconciliationStatus.isBeforeFirstDeployment()}} is true.
[diagram|https://issues.apache.org/7239bb39-60d8-48a0-9052-f3231947edbe]

  was:
h2. Description
The FlinkOperator Reconciler goes into a deadlock situation where it never 
updates the Session job to DEPLOYED if {{deploy}} fails.
Attached is a sequence diagram of the issue, where the FlinkSessionJob is stuck 
in UPGRADING indefinitely.
h2. proposed fix
The Reconciler should roll back the CR changes if {{deploy()}} fails while 
{{reconciliationStatus.isBeforeFirstDeployment()}} is true.
[diagram|https://issues.apache.org/7239bb39-60d8-48a0-9052-f3231947edbe]



> Flink Operator Deadlock on run job Failure
> --
>
> Key: FLINK-31998
> URL: https://issues.apache.org/jira/browse/FLINK-31998
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.2.0, kubernetes-operator-1.3.0, 
> kubernetes-operator-1.4.0
>Reporter: Ahmed Hamdy
>Priority: Major
> Fix For: kubernetes-operator-1.5.0
>
> Attachments: gleek-m6pLe3Wy--IpCKQavAQwBQ.png
>
>
> h2. Description
> The FlinkOperator Reconciler goes into a deadlock situation where it never 
> updates the Session job to DEPLOYED/ROLLED_BACK if {{deploy}} fails.
> Attached is a sequence diagram of the issue, where the FlinkSessionJob is 
> stuck in UPGRADING indefinitely.
> h2. proposed fix
> The Reconciler should roll back the CR changes if {{deploy()}} fails while 
> {{reconciliationStatus.isBeforeFirstDeployment()}} is true.
> [diagram|https://issues.apache.org/7239bb39-60d8-48a0-9052-f3231947edbe]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31998) Flink Operator Deadlock on run job Failure

2023-05-04 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719258#comment-17719258
 ] 

Ahmed Hamdy commented on FLINK-31998:
-

CC [~gyfora]

> Flink Operator Deadlock on run job Failure
> --
>
> Key: FLINK-31998
> URL: https://issues.apache.org/jira/browse/FLINK-31998
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.2.0, kubernetes-operator-1.3.0, 
> kubernetes-operator-1.4.0
>Reporter: Ahmed Hamdy
>Priority: Major
> Fix For: kubernetes-operator-1.5.0
>
> Attachments: gleek-m6pLe3Wy--IpCKQavAQwBQ.png
>
>
> h2. Description
> The FlinkOperator Reconciler goes into a deadlock situation where it never 
> updates the Session job to DEPLOYED/ROLLED_BACK if {{deploy}} fails.
> Attached is a sequence diagram of the issue, where the FlinkSessionJob is 
> stuck in UPGRADING indefinitely.
> h2. proposed fix
> The Reconciler should roll back the CR changes if {{deploy()}} fails while 
> {{reconciliationStatus.isBeforeFirstDeployment()}} is true.
> [diagram|https://issues.apache.org/7239bb39-60d8-48a0-9052-f3231947edbe]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31998) Flink Operator Deadlock on run job Failure

2023-05-04 Thread Ahmed Hamdy (Jira)
Ahmed Hamdy created FLINK-31998:
---

 Summary: Flink Operator Deadlock on run job Failure
 Key: FLINK-31998
 URL: https://issues.apache.org/jira/browse/FLINK-31998
 Project: Flink
  Issue Type: Bug
  Components: Kubernetes Operator
Affects Versions: kubernetes-operator-1.4.0, kubernetes-operator-1.3.0, 
kubernetes-operator-1.2.0
Reporter: Ahmed Hamdy
 Fix For: kubernetes-operator-1.5.0
 Attachments: gleek-m6pLe3Wy--IpCKQavAQwBQ.png

h2. Description
The FlinkOperator Reconciler goes into a deadlock situation where it never 
updates the Session job to DEPLOYED if {{deploy}} fails.
Attached is a sequence diagram of the issue, where the FlinkSessionJob is stuck 
in UPGRADING indefinitely.
h2. proposed fix
The Reconciler should roll back the CR changes if {{deploy()}} fails while 
{{reconciliationStatus.isBeforeFirstDeployment()}} is true.
[diagram|https://issues.apache.org/7239bb39-60d8-48a0-9052-f3231947edbe]
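
A sketch of the proposed reconciler logic (pseudologic with hypothetical names, 
not the operator's actual classes): if the very first {{deploy()}} fails, 
surface the failure instead of leaving the CR in UPGRADING forever.

{code:java}
// Illustrative only: hypothetical names, not the operator's actual classes.
enum LifecycleState { UPGRADING, DEPLOYED, ROLLED_BACK, FAILED }

final class ReconcilerSketch {
    LifecycleState state = LifecycleState.UPGRADING;
    boolean beforeFirstDeployment = true;

    void reconcile(Runnable deploy) {
        try {
            deploy.run();
            state = LifecycleState.DEPLOYED;
            beforeFirstDeployment = false;
        } catch (RuntimeException e) {
            // Without this branch the job stays UPGRADING indefinitely; with
            // it, the failure is recorded and the CR can be reconciled again.
            state = beforeFirstDeployment
                    ? LifecycleState.FAILED
                    : LifecycleState.ROLLED_BACK;
        }
    }
}

public class ReconcilerDeadlockExample {
    public static void main(String[] args) {
        ReconcilerSketch reconciler = new ReconcilerSketch();
        reconciler.reconcile(() -> { throw new RuntimeException("deploy failed"); });
        System.out.println("state after failed first deploy: " + reconciler.state);
    }
}
{code}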




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31872) Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink users for KinesisStreamsSink

2023-04-23 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17715483#comment-17715483
 ] 

Ahmed Hamdy commented on FLINK-31872:
-

CC [~danny.cranmer] 

> Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink 
> users for KinesisStreamsSink
> --
>
> Key: FLINK-31872
> URL: https://issues.apache.org/jira/browse/FLINK-31872
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
> Fix For: aws-connector-4.2.0
>
>
> h1. Issue
> As part of FLINK-31772
> I performed a complete benchmark for {{KinesisStreamsSink}} after configuring 
> the rate limiting strategy.
> It appears that the optimum values for the rate limiting strategy parameters 
> depend on the use case (shard number / parallelism / record throughput).
> We initially implemented the {{AIMDRateLimitingStrategy}} in accordance with 
> the one used for TCP congestion control, but since the parameters are use-case 
> dependent we would like to allow sink users to adjust them as suitable.
> h2. Requirements
>  - we *must* allow users to configure increment rate and decrease factor of 
> AIMDRateLimitingStrategy for {{KinesisStreamsSink}}
>  - we *must* provide backward compatible default values identical to current 
> values to introduce no further regressions.
> h2. Appendix
> h3. Performance Benchmark Results
> |Parallelism/Shards/Payload|parallelism|shards|payload|records/sec|Async Sink 
> Throughput (MB/s)|Async Sink With Configured Ratelimiting Strategy Throughput 
> (MB/s)|Async Sink / Maximum Throughput|% of Improvement|
> |Low/Low/Low|1|1|1024|1|0.991|1|1|0.9|
> |Low/Low/High|1|1|102400|100|0.9943|1|1|0.57|
> |Low/Med/Low|1|8|1024|8|4.12|4.57|0.57125|5.625|
> |Low/Med/High|1|8|102400|800|4.35|7.65|0.95625|41.25|
> |Med/Low/Low|8|1|1024|2|0.852|0.846|0.846|-0.6|
> |Med/Low/High|8|1|102400|200|0.921|0.867|0.867|-5.4|
> |Med/Med/Low|8|8|1024|8|5.37|4.76|0.595|-7.625|
> |Med/Med/High|8|8|102400|800|7.53|7.69|0.96125|2|
> |Med/High/Low|8|64|1024|8|32.5|37.4|0.58438|7.65625|
> |Med/High/High|8|64|102400|800|47.27|60.4|0.94375|20.51562|
> |High/High/Low|256|256|1024|30|127|127|0.49609|0|
> |High/High/High|256|256|102400|3000|225|246|0.96094|8.20313|
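
For readers unfamiliar with the scheme, a self-contained sketch of AIMD with the 
two knobs this ticket proposes exposing (names and defaults here are 
illustrative, not the connector's actual option keys):

{code:java}
// Illustrative only: additive-increase / multiplicative-decrease (AIMD),
// with the increment rate and decrease factor as configurable parameters.
public class AimdRateLimiterSketch {
    private final int incrementRate;      // additive step on success
    private final double decreaseFactor;  // multiplicative cut on rejection
    private final int maxRate;
    private int currentRate;

    public AimdRateLimiterSketch(
            int incrementRate, double decreaseFactor, int initialRate, int maxRate) {
        this.incrementRate = incrementRate;
        this.decreaseFactor = decreaseFactor;
        this.currentRate = initialRate;
        this.maxRate = maxRate;
    }

    public void onSuccess() {
        currentRate = Math.min(maxRate, currentRate + incrementRate);
    }

    public void onThrottle() {
        currentRate = Math.max(1, (int) (currentRate * decreaseFactor));
    }

    public int currentRatePerSecond() {
        return currentRate;
    }

    public static void main(String[] args) {
        AimdRateLimiterSketch limiter = new AimdRateLimiterSketch(10, 0.5, 100, 1_000);
        limiter.onSuccess();   // 100 -> 110
        limiter.onThrottle();  // 110 -> 55
        System.out.println("rate after one success and one throttle: "
                + limiter.currentRatePerSecond());
    }
}
{code}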



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31872) Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink users for KinesisStreamsSink

2023-04-20 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-31872:

Description: 
h1. Issue

As part of FLINK-31772

I performed a complete benchmark for {{KinesisStreamsSink}} after configuring 
the rate limiting strategy.
It appears that the optimum values for the rate limiting strategy parameters 
depend on the use case (shard number / parallelism / record throughput).
We initially implemented the {{AIMDRateLimitingStrategy}} in accordance with 
the one used for TCP congestion control, but since the parameters are use-case 
dependent we would like to allow sink users to adjust them as suitable.
h2. Requirements
 - we *must* allow users to configure increment rate and decrease factor of 
AIMDRateLimitingStrategy for {{KinesisStreamsSink}}
 - we *must* provide backward compatible default values identical to current 
values to introduce no further regressions.

h2. Appendix
h3. Performance Benchmark Results
|Parallelism/Shards/Payload|parallelism|shards|payload|records/sec|Async Sink 
Throughput (MB/s)|Async Sink With Configured Ratelimiting Strategy Throughput 
(MB/s)|Async Sink / Maximum Throughput|% of Improvement|
|Low/Low/Low|1|1|1024|1|0.991|1|1|0.9|
|Low/Low/High|1|1|102400|100|0.9943|1|1|0.57|
|Low/Med/Low|1|8|1024|8|4.12|4.57|0.57125|5.625|
|Low/Med/High|1|8|102400|800|4.35|7.65|0.95625|41.25|
|Med/Low/Low|8|1|1024|2|0.852|0.846|0.846|-0.6|
|Med/Low/High|8|1|102400|200|0.921|0.867|0.867|-5.4|
|Med/Med/Low|8|8|1024|8|5.37|4.76|0.595|-7.625|
|Med/Med/High|8|8|102400|800|7.53|7.69|0.96125|2|
|Med/High/Low|8|64|1024|8|32.5|37.4|0.58438|7.65625|
|Med/High/High|8|64|102400|800|47.27|60.4|0.94375|20.51562|
|High/High/Low|256|256|1024|30|127|127|0.49609|0|
|High/High/High|256|256|102400|3000|225|246|0.96094|8.20313|

  was:
h1. Issue

As part of FLINK-31772

I performed a complete benchmark for {{KinesisStreamsSink}} after configuring 
the rate limiting strategy.
It appears that the optimum values for the rate limiting strategy parameters 
depend on the use case (shard number / parallelism / record throughput).
We initially implemented the {{AIMDRateLimitingStrategy}} in accordance with 
the one used for TCP congestion control, but since the parameters are use-case 
dependent we would like to allow sink users to adjust them as suitable.

h2. Requirements

- we *must* allow users to configure increment rate and decrease factor of 
AIMDRateLimitingStrategy for {{KinesisStreamsSink}}
- we *must* provide backward compatible default values identical to current 
values to introduce no further regressions.


h2. Appendix

h3. Performance Benchmark Results
|Parallelism/Shards/Payload|parallelism|shards|payload|records/sec|Async Sink 
Throughput (MB/s)|Async Sink With Configured Ratelimiting Strategy Throughput 
(MB/s)|KPL Throughput (MB/s)|Async Sink / Maximum Throughput|Async Sink / 
KPL|% of Improvement|
|Low/Low/Low|1|1|1024|1|0.991|1|0.958|1|4.2|0.9|
|Low/Low/High|1|1|102400|100|0.9943|1|0.975|1|2.5|0.57|
|Low/Med/Low|1|8|1024|8|4.12|4.57|6.45|0.57125|-23.5|5.625|
|Low/Med/High|1|8|102400|800|4.35|7.65|7.45|0.95625|2.5|41.25|
|Med/Low/Low|8|1|1024|2|0.852|0.846|0.545|0.846|30.1|-0.6|
|Med/Low/High|8|1|102400|200|0.921|0.867|0.975|0.867|-10.8|-5.4|
|Med/Med/Low|8|8|1024|8|5.37|4.76|5.87|0.595|-13.875|-7.625|
|Med/Med/High|8|8|102400|800|7.53|7.69|5.95|0.96125|21.75|2|
|Med/High/Low|8|64|1024|8|32.5|37.4|40.7|0.58438|-5.15625|7.65625|
|Med/High/High|8|64|102400|800|47.27|60.4|56.27|0.94375|6.45312|20.51562|
|High/High/Low|256|256|1024|30|127|127|131|0.49609|-1.5625|0|
|High/High/High|256|256|102400|3000|225|246|215|0.96094|12.10938|8.20313|


> Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink 
> users for KinesisStreamsSink
> --
>
> Key: FLINK-31872
> URL: https://issues.apache.org/jira/browse/FLINK-31872
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
> Fix For: aws-connector-4.2.0
>
>
> h1. Issue
> As part of FLINK-31772
> I performed a complete benchmark for {{KinesisStreamsSink}} after configuring 
> the rate limiting strategy.
> It appears that the optimum values for the rate limiting strategy parameters 
> depend on the use case (shard number / parallelism / record throughput).
> We initially implemented the {{AIMDRateLimitingStrategy}} in accordance with 
> the one used for TCP congestion control, but since the parameters are use-case 
> dependent we would like to allow sink users to adjust them as suitable.
> h2. Requirements
>  - we *must* allow users to configure increment rate and decrease factor of 
> AIMDRateLimitingStrategy for {{KinesisStreamsSink}}
>  - we *must* provide 

[jira] [Updated] (FLINK-31872) Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink users for KinesisStreamsSink

2023-04-20 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-31872:

Description: 
h1. Issue

As part of FLINK-31772

I performed a complete benchmark for {{KinesisStreamsSink}} after configuring 
the rate limiting strategy.
It appears that the optimum values for the rate limiting strategy parameters 
depend on the use case (shard number / parallelism / record throughput).
We initially implemented the {{AIMDRateLimitingStrategy}} in accordance with 
the one used for TCP congestion control, but since the parameters are use-case 
dependent we would like to allow sink users to adjust them as suitable.

h2. Requirements

- we *must* allow users to configure increment rate and decrease factor of 
AIMDRateLimitingStrategy for {{KinesisStreamsSink}}
- we *must* provide backward compatible default values identical to current 
values to introduce no further regressions.


h2. Appendix

h3. Performance Benchmark Results
|Parallelism/Shards/Payload|parallelism|shards|payload|records/sec|Async Sink 
Throughput (MB/s)|Async Sink With Configured Ratelimiting Strategy Throughput 
(MB/s)|KPL Throughput (MB/s)|Async Sink / Maximum Throughput|Async Sink / 
KPL|% of Improvement|
|Low/Low/Low|1|1|1024|1|0.991|1|0.958|1|4.2|0.9|
|Low/Low/High|1|1|102400|100|0.9943|1|0.975|1|2.5|0.57|
|Low/Med/Low|1|8|1024|8|4.12|4.57|6.45|0.57125|-23.5|5.625|
|Low/Med/High|1|8|102400|800|4.35|7.65|7.45|0.95625|2.5|41.25|
|Med/Low/Low|8|1|1024|2|0.852|0.846|0.545|0.846|30.1|-0.6|
|Med/Low/High|8|1|102400|200|0.921|0.867|0.975|0.867|-10.8|-5.4|
|Med/Med/Low|8|8|1024|8|5.37|4.76|5.87|0.595|-13.875|-7.625|
|Med/Med/High|8|8|102400|800|7.53|7.69|5.95|0.96125|21.75|2|
|Med/High/Low|8|64|1024|8|32.5|37.4|40.7|0.58438|-5.15625|7.65625|
|Med/High/High|8|64|102400|800|47.27|60.4|56.27|0.94375|6.45312|20.51562|
|High/High/Low|256|256|1024|30|127|127|131|0.49609|-1.5625|0|
|High/High/High|256|256|102400|3000|225|246|215|0.96094|12.10938|8.20313|

  was:
h1. Issue

As part of FLINK-31772

I performed a complete benchmark for


> Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink 
> users for KinesisStreamsSink
> --
>
> Key: FLINK-31872
> URL: https://issues.apache.org/jira/browse/FLINK-31872
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
> Fix For: aws-connector-4.2.0
>
>
> h1. Issue
> As part of FLINK-31772
> I performed a complete benchmark for {{KinesisStreamsSink}} after configuring 
> the rate limiting strategy.
> It appears that the optimum values for the rate limiting strategy parameters 
> depend on the use case (shard number / parallelism / record throughput).
> We initially implemented the {{AIMDRateLimitingStrategy}} in accordance with 
> the one used for TCP congestion control, but since the parameters are use-case 
> dependent we would like to allow sink users to adjust them as suitable.
> h2. Requirements
> - we *must* allow users to configure increment rate and decrease factor of 
> AIMDRateLimitingStrategy for {{KinesisStreamsSink}}
> - we *must* provide backward compatible default values identical to current 
> values to introduce no further regressions.
> h2. Appendix
> h3. Performance Benchmark Results
> |Parallelism/Shards/Payload|parallelism|shards|payload|records/sec|Async Sink 
> Throughput (MB/s)|Async Sink With Configured Ratelimiting Strategy Throughput 
> (MB/s)|KPL Throughput (MB/s)|Async Sink / Maximum Throughput|Async Sink / 
> KPL|% of Improvement|
> |Low/Low/Low|1|1|1024|1|0.991|1|0.958|1|4.2|0.9|
> |Low/Low/High|1|1|102400|100|0.9943|1|0.975|1|2.5|0.57|
> |Low/Med/Low|1|8|1024|8|4.12|4.57|6.45|0.57125|-23.5|5.625|
> |Low/Med/High|1|8|102400|800|4.35|7.65|7.45|0.95625|2.5|41.25|
> |Med/Low/Low|8|1|1024|2|0.852|0.846|0.545|0.846|30.1|-0.6|
> |Med/Low/High|8|1|102400|200|0.921|0.867|0.975|0.867|-10.8|-5.4|
> |Med/Med/Low|8|8|1024|8|5.37|4.76|5.87|0.595|-13.875|-7.625|
> |Med/Med/High|8|8|102400|800|7.53|7.69|5.95|0.96125|21.75|2|
> |Med/High/Low|8|64|1024|8|32.5|37.4|40.7|0.58438|-5.15625|7.65625|
> |Med/High/High|8|64|102400|800|47.27|60.4|56.27|0.94375|6.45312|20.51562|
> |High/High/Low|256|256|1024|30|127|127|131|0.49609|-1.5625|0|
> |High/High/High|256|256|102400|3000|225|246|215|0.96094|12.10938|8.20313|



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31872) Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink users for KinesisStreamsSink

2023-04-20 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-31872:

Description: 
h1. Issue

As part of FLINK-31772

I performed a complete benchmark for

  was:
h1. Issue

While benchmarking the {{KinesisStreamSink}} for 1.15 against the legacy 
{{FlinkKinesisProducer}}, it was observed that the new sink has a performance 
regression against the deprecated sink for the same environment settings.

Further investigation identified that the AIMD rate limiting strategy is the 
bottleneck for the regression.

Attached are results for {{KinesisStreamSink}} against {{FlinkKinesisProducer}} 
and {{KinesisStreamSink}} after disabling {{AIMDRatelimitingStrategy}}.



h2. Environment Settings
- Benchmarking was performed on AWS KDA.
- Application logic just sends records downstream.
- Application parallelism was set to 1.
- The Kinesis stream was tested with 8 and 12 shards.
- Payload size was 1 KB and 100 KB.

 


> Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink 
> users for KinesisStreamsSink
> --
>
> Key: FLINK-31872
> URL: https://issues.apache.org/jira/browse/FLINK-31872
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
> Fix For: aws-connector-4.2.0
>
>
> h1. Issue
> As part of FLINK-31772
> I performed a complete benchmark for



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31872) Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink users for KinesisStreamsSink

2023-04-20 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-31872:

Affects Version/s: (was: 1.16.0)
   (was: aws-connector-3.0.0)
   (was: 1.16.1)
   (was: aws-connector-4.0.0)
   (was: aws-connector-4.1.0)

> Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink 
> users for KinesisStreamsSink
> --
>
> Key: FLINK-31872
> URL: https://issues.apache.org/jira/browse/FLINK-31872
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common, Connectors / Firehose, Connectors / 
> Kinesis
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available
> Fix For: aws-connector-4.2.0
>
>
> h1. Issue
> While benchmarking the {{KinesisStreamSink}} for 1.15 against the legacy 
> {{FlinkKinesisProducer}}, it was observed that the new sink has a performance 
> regression against the deprecated sink for the same environment settings.
> Further investigation identified that the AIMD rate limiting strategy is the 
> bottleneck for the regression.
> Attached are results for {{KinesisStreamSink}} against {{FlinkKinesisProducer}} 
> and {{KinesisStreamSink}} after disabling {{AIMDRatelimitingStrategy}}.
> h2. Environment Settings
> - Benchmarking was performed on AWS KDA.
> - Application logic just sends records downstream.
> - Application parallelism was set to 1.
> - The Kinesis stream was tested with 8 and 12 shards.
> - Payload size was 1 KB and 100 KB.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31872) Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink users for KinesisStreamsSink

2023-04-20 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-31872:

Labels:   (was: pull-request-available)

> Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink 
> users for KinesisStreamsSink
> --
>
> Key: FLINK-31872
> URL: https://issues.apache.org/jira/browse/FLINK-31872
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
> Fix For: aws-connector-4.2.0
>
>
> h1. Issue
> While benchmarking the {{KinesisStreamSink}} for 1.15 against the legacy 
> {{FlinkKinesisProducer}}, it was observed that the new sink has a performance 
> regression against the deprecated sink for the same environment settings.
> Further investigation identified that the AIMD rate limiting strategy is the 
> bottleneck for the regression.
> Attached are results for {{KinesisStreamSink}} against {{FlinkKinesisProducer}} 
> and {{KinesisStreamSink}} after disabling {{AIMDRatelimitingStrategy}}.
> h2. Environment Settings
> - Benchmarking was performed on AWS KDA.
> - Application logic just sends records downstream.
> - Application parallelism was set to 1.
> - The Kinesis stream was tested with 8 and 12 shards.
> - Payload size was 1 KB and 100 KB.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31872) Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink users for KinesisStreamsSink

2023-04-20 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-31872:

Component/s: (was: Connectors / Common)
 (was: Connectors / Firehose)

> Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink 
> users for KinesisStreamsSink
> --
>
> Key: FLINK-31872
> URL: https://issues.apache.org/jira/browse/FLINK-31872
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available
> Fix For: aws-connector-4.2.0
>
>
> h1. Issue
> While benchmarking the {{KinesisStreamSink}} for 1.15 against the legacy 
> {{FlinkKinesisProducer}}, it was observed that the new sink has a performance 
> regression against the deprecated sink for the same environment settings.
> Further investigation identified that the AIMD rate limiting strategy is the 
> bottleneck for the regression.
> Attached are results for {{KinesisStreamSink}} against {{FlinkKinesisProducer}} 
> and {{KinesisStreamSink}} after disabling {{AIMDRatelimitingStrategy}}.
> h2. Environment Settings
> - Benchmarking was performed on AWS KDA.
> - Application logic just sends records downstream.
> - Application parallelism was set to 1.
> - The Kinesis stream was tested with 8 and 12 shards.
> - Payload size was 1 KB and 100 KB.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31872) Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink users for KinesisStreamsSink

2023-04-20 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-31872:

Fix Version/s: (was: 1.16.2)
   (was: 1.18.0)
   (was: 1.17.1)

> Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink 
> users for KinesisStreamsSink
> --
>
> Key: FLINK-31872
> URL: https://issues.apache.org/jira/browse/FLINK-31872
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common, Connectors / Firehose, Connectors / 
> Kinesis
>Affects Versions: 1.16.0, aws-connector-3.0.0, 1.16.1, 
> aws-connector-4.0.0, aws-connector-4.1.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available
> Fix For: aws-connector-4.2.0
>
>
> h1. Issue
> While benchmarking the {{KinesisStreamSink}} for 1.15 against the legacy 
> {{FlinkKinesisProducer}}, it was observed that the new sink has a performance 
> regression against the deprecated sink for the same environment settings.
> Further investigation identified that the AIMD rate limiting strategy is the 
> bottleneck for the regression.
> Attached are results for {{KinesisStreamSink}} against {{FlinkKinesisProducer}} 
> and {{KinesisStreamSink}} after disabling {{AIMDRatelimitingStrategy}}.
> h2. Environment Settings
> - Benchmarking was performed on AWS KDA.
> - Application logic just sends records downstream.
> - Application parallelism was set to 1.
> - The Kinesis stream was tested with 8 and 12 shards.
> - Payload size was 1 KB and 100 KB.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31872) Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink users for KinesisStreamsSink

2023-04-20 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-31872:

Issue Type: Improvement  (was: Bug)

> Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink 
> users for KinesisStreamsSink
> --
>
> Key: FLINK-31872
> URL: https://issues.apache.org/jira/browse/FLINK-31872
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common, Connectors / Firehose, Connectors / 
> Kinesis
>Affects Versions: 1.16.0, aws-connector-3.0.0, 1.16.1, 
> aws-connector-4.0.0, aws-connector-4.1.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.2, 1.18.0, 1.17.1, aws-connector-4.2.0
>
>
> h1. Issue
> While benchmarking the {{KinesisStreamSink}} for 1.15 against the legacy 
> {{FlinkKinesisProducer}}, it was observed that the new sink has a performance 
> regression against the deprecated sink for the same environment settings.
> Further investigation identified that the AIMD rate limiting strategy is the 
> bottleneck for the regression.
> Attached are results for {{KinesisStreamSink}} against {{FlinkKinesisProducer}} 
> and {{KinesisStreamSink}} after disabling {{AIMDRatelimitingStrategy}}.
> h2. Environment Settings
> - Benchmarking was performed on AWS KDA.
> - Application logic just sends records downstream.
> - Application parallelism was set to 1.
> - The Kinesis stream was tested with 8 and 12 shards.
> - Payload size was 1 KB and 100 KB.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31872) Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink users for KinesisStreamsSink

2023-04-20 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-31872:

Summary: Add Support for Configuring AIMD Ratelimiting strategy parameters 
by Sink users for KinesisStreamsSink  (was: CLONE - AsyncSinkWriter Performance 
regression due to AIMD rate limiting strategy)

> Add Support for Configuring AIMD Ratelimiting strategy parameters by Sink 
> users for KinesisStreamsSink
> --
>
> Key: FLINK-31872
> URL: https://issues.apache.org/jira/browse/FLINK-31872
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common, Connectors / Firehose, Connectors / 
> Kinesis
>Affects Versions: 1.16.0, aws-connector-3.0.0, 1.16.1, 
> aws-connector-4.0.0, aws-connector-4.1.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.2, 1.18.0, 1.17.1, aws-connector-4.2.0
>
>
> h1. Issue
> While benchmarking the {{KinesisStreamSink}} for 1.15 against the legacy
> {{FlinkKinesisProducer}}, it is observed that the new sink has a performance
> regression against the deprecated sink for the same environment settings.
> Further investigation identified the AIMD rate limiting strategy as the
> bottleneck behind the regression.
> Attached are results for {{KinesisStreamSink}} against {{FlinkKinesisProducer}},
> and for {{KinesisStreamSink}} after disabling {{AIMDRatelimitingStrategy}}.
> h2. Environment Settings
> - Benchmarking was performed on AWS KDA.
> - The application logic simply forwards records downstream.
> - Application parallelism was tested at 1.
> - The Kinesis stream was tested with 8 and 12 shards.
> - Payload sizes were 1 KB and 100 KB.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31872) CLONE - AsyncSinkWriter Performance regression due to AIMD rate limiting strategy

2023-04-20 Thread Ahmed Hamdy (Jira)
Ahmed Hamdy created FLINK-31872:
---

 Summary: CLONE - AsyncSinkWriter Performance regression due to 
AIMD rate limiting strategy
 Key: FLINK-31872
 URL: https://issues.apache.org/jira/browse/FLINK-31872
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Common, Connectors / Firehose, Connectors / 
Kinesis
Affects Versions: 1.16.0, aws-connector-3.0.0, 1.16.1, aws-connector-4.0.0, 
aws-connector-4.1.0
Reporter: Ahmed Hamdy
Assignee: Ahmed Hamdy
 Fix For: 1.16.2, 1.18.0, 1.17.1, aws-connector-4.2.0


h1. Issue

While benchmarking the {{KinesisStreamSink}} for 1.15 against the legacy
{{FlinkKinesisProducer}}, it is observed that the new sink has a performance
regression against the deprecated sink for the same environment settings.

Further investigation identified the AIMD rate limiting strategy as the
bottleneck behind the regression.

Attached are results for {{KinesisStreamSink}} against {{FlinkKinesisProducer}},
and for {{KinesisStreamSink}} after disabling {{AIMDRatelimitingStrategy}}.



h2. Environment Settings
- Benchmarking was performed on AWS KDA.
- The application logic simply forwards records downstream.
- Application parallelism was tested at 1.
- The Kinesis stream was tested with 8 and 12 shards.
- Payload sizes were 1 KB and 100 KB.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31854) Flink connector redshift

2023-04-20 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17714682#comment-17714682
 ] 

Ahmed Hamdy commented on FLINK-31854:
-

Thanks [~samrat007] for the proposal. We should raise a discussion on the
mailing list; happy to assist with the JIRA.

> Flink connector redshift 
> -
>
> Key: FLINK-31854
> URL: https://issues.apache.org/jira/browse/FLINK-31854
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / AWS
>Reporter: Samrat Deb
>Priority: Major
>
> Proposal:
> Add a new feature (submodule), flink-connector-redshift, to flink-connector-aws.
> This connector should be based on FLIP-27 (a simplified interface sketch follows
> below).
> Next steps:
> - Create a FLIP for the Flink Redshift connector.
>  
>  
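For reference, a simplified sketch of the FLIP-27 {{Source}} interface such a connector would implement; this abbreviates Flink's {{org.apache.flink.api.connector.source.Source}}, which should be treated as the authoritative definition.

{code:java}
// Simplified FLIP-27 Source shape (abbreviated from Flink's actual interface).
public interface Source<T, SplitT extends SourceSplit, EnumChkT> extends Serializable {
    Boundedness getBoundedness();
    SourceReader<T, SplitT> createReader(SourceReaderContext readerContext) throws Exception;
    SplitEnumerator<SplitT, EnumChkT> createEnumerator(
            SplitEnumeratorContext<SplitT> enumContext) throws Exception;
    SplitEnumerator<SplitT, EnumChkT> restoreEnumerator(
            SplitEnumeratorContext<SplitT> enumContext, EnumChkT checkpoint) throws Exception;
    SimpleVersionedSerializer<SplitT> getSplitSerializer();
    SimpleVersionedSerializer<EnumChkT> getEnumeratorCheckpointSerializer();
}
{code}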



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31772) AsyncSinkWriter Performance regression due to AIMD rate limiting strategy

2023-04-17 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713067#comment-17713067
 ] 

Ahmed Hamdy commented on FLINK-31772:
-

Thanks [~dannycranmer]

I have published a fix for the issue 
[https://github.com/apache/flink-connector-aws/pull/70]

Attaching results of the performance benchmark and the KDS sink performance
after applying the fix in the regression cases.

 

 
h2. Performance Benchmark Results

 
|Parallelism/Shards/Payload|parallelism|shards|payload|records/sec|Async Sink Throughput (MB/s)|Async Sink With Configured Rate Limiting Strategy Throughput (MB/s)|KPL Throughput (MB/s)|Percentage of Maximum Throughput|Improvement Percentage against KPL|Improvement Percentage against Async Sink|
|Low/Low/Low|1|1|1024|1|0.991|1|0.958|1|4.2|0.9|
|Low/Low/High|1|1|102400|100|0.9943|1|0.975|1|2.5|0.57|
|Low/Med/Low|1|8|1024|8|4.12|4.57|6.45|0.57125|-23.5|5.625|
|Low/Med/High|1|8|102400|800|4.35|7.65|7.45|0.95625|2.5|41.25|
|Med/Low/Low|8|1|1024|2|0.852|0.846|0.545|0.846|30.1|-0.6|
|Med/Low/High|8|1|102400|200|0.921|0.867|0.975|0.867|-10.8|-5.4|
|Med/Med/Low|8|8|1024|8|5.37|4.76|5.87|0.595|-13.875|-7.625|
|Med/Med/High|8|8|102400|800|7.53|7.69|5.95|0.96125|21.75|2|
|Med/High/Low|8|64|1024|8|32.5|37.4|40.7|0.58438|-5.15625|7.65625|
|Med/High/High|8|64|102400|800|47.27|60.4|56.27|0.94375|6.45312|20.51562|
|High/High/Low|256|256|1024|30|127|127|131|0.49609|-1.5625|0|
|High/High/High|256|256|102400|3000|225|246|215|0.96094|12.10938|8.20313|

 

 
h2. KDS Sink Performance After Applying the Fix


!Screenshot 2023-04-17 at 13.03.34.png!




!Screenshot 2023-04-17 at 13.02.31.png!

> AsyncSinkWriter Performance regression due to AIMD rate limiting strategy
> -
>
> Key: FLINK-31772
> URL: https://issues.apache.org/jira/browse/FLINK-31772
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common, Connectors / Firehose, Connectors / 
> Kinesis
>Affects Versions: 1.16.0, aws-connector-3.0.0, 1.16.1, 
> aws-connector-4.0.0, aws-connector-4.1.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.2, 1.18.0, 1.17.1, aws-connector-4.2.0
>
> Attachments: Screenshot 2023-04-11 at 12.56.10.png, Screenshot 
> 2023-04-11 at 12.58.09.png, Screenshot 2023-04-11 at 13.01.47.png, Screenshot 
> 2023-04-17 at 13.02.31.png, Screenshot 2023-04-17 at 13.03.24.png, Screenshot 
> 2023-04-17 at 13.03.34-1.png, Screenshot 2023-04-17 at 13.03.34.png
>
>
> h1. Issue
> While benchmarking the {{KinesisStreamSink}} for 1.15 against the legacy
> {{FlinkKinesisProducer}}, it is observed that the new sink has a performance
> regression against the deprecated sink for the same environment settings.
> Further investigation identified the AIMD rate limiting strategy as the
> bottleneck behind the regression.
> Attached are results for {{KinesisStreamSink}} against {{FlinkKinesisProducer}},
> and for {{KinesisStreamSink}} after disabling {{AIMDRatelimitingStrategy}}.
> h2. Environment Settings
> - Benchmarking was performed on AWS KDA.
> - The application logic simply forwards records downstream.
> - Application parallelism was tested at 1.
> - The Kinesis stream was tested with 8 and 12 shards.
> - Payload sizes were 1 KB and 100 KB.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31772) AsyncSinkWriter Performance regression due to AIMD rate limiting strategy

2023-04-17 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-31772:

Attachment: Screenshot 2023-04-17 at 13.03.34-1.png

> AsyncSinkWriter Performance regression due to AIMD rate limiting strategy
> -
>
> Key: FLINK-31772
> URL: https://issues.apache.org/jira/browse/FLINK-31772
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common, Connectors / Firehose, Connectors / 
> Kinesis
>Affects Versions: 1.16.0, aws-connector-3.0.0, 1.16.1, 
> aws-connector-4.0.0, aws-connector-4.1.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.2, 1.18.0, 1.17.1, aws-connector-4.2.0
>
> Attachments: Screenshot 2023-04-11 at 12.56.10.png, Screenshot 
> 2023-04-11 at 12.58.09.png, Screenshot 2023-04-11 at 13.01.47.png, Screenshot 
> 2023-04-17 at 13.02.31.png, Screenshot 2023-04-17 at 13.03.24.png, Screenshot 
> 2023-04-17 at 13.03.34-1.png, Screenshot 2023-04-17 at 13.03.34.png
>
>
> h1. Issue
> While benchmarking the {{KinesisStreamSink}} for 1.15 against the legacy
> {{FlinkKinesisProducer}}, it is observed that the new sink has a performance
> regression against the deprecated sink for the same environment settings.
> Further investigation identified the AIMD rate limiting strategy as the
> bottleneck behind the regression.
> Attached are results for {{KinesisStreamSink}} against {{FlinkKinesisProducer}},
> and for {{KinesisStreamSink}} after disabling {{AIMDRatelimitingStrategy}}.
> h2. Environment Settings
> - Benchmarking was performed on AWS KDA.
> - The application logic simply forwards records downstream.
> - Application parallelism was tested at 1.
> - The Kinesis stream was tested with 8 and 12 shards.
> - Payload sizes were 1 KB and 100 KB.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31772) AsyncSinkWriter Performance regression due to AIMD rate limiting strategy

2023-04-17 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-31772:

Attachment: Screenshot 2023-04-17 at 13.02.31.png
Screenshot 2023-04-17 at 13.03.24.png
Screenshot 2023-04-17 at 13.03.34.png

> AsyncSinkWriter Performance regression due to AIMD rate limiting strategy
> -
>
> Key: FLINK-31772
> URL: https://issues.apache.org/jira/browse/FLINK-31772
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common, Connectors / Firehose, Connectors / 
> Kinesis
>Affects Versions: 1.16.0, aws-connector-3.0.0, 1.16.1, 
> aws-connector-4.0.0, aws-connector-4.1.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.2, 1.18.0, 1.17.1, aws-connector-4.2.0
>
> Attachments: Screenshot 2023-04-11 at 12.56.10.png, Screenshot 
> 2023-04-11 at 12.58.09.png, Screenshot 2023-04-11 at 13.01.47.png, Screenshot 
> 2023-04-17 at 13.02.31.png, Screenshot 2023-04-17 at 13.03.24.png, Screenshot 
> 2023-04-17 at 13.03.34.png
>
>
> h1. Issue
> While benchmarking the {{KinesisStreamSink}} for 1.15 against the legacy
> {{FlinkKinesisProducer}}, it is observed that the new sink has a performance
> regression against the deprecated sink for the same environment settings.
> Further investigation identified the AIMD rate limiting strategy as the
> bottleneck behind the regression.
> Attached are results for {{KinesisStreamSink}} against {{FlinkKinesisProducer}},
> and for {{KinesisStreamSink}} after disabling {{AIMDRatelimitingStrategy}}.
> h2. Environment Settings
> - Benchmarking was performed on AWS KDA.
> - The application logic simply forwards records downstream.
> - Application parallelism was tested at 1.
> - The Kinesis stream was tested with 8 and 12 shards.
> - Payload sizes were 1 KB and 100 KB.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31772) AsyncSinkWriter Performance regression due to AIMD rate limiting strategy

2023-04-11 Thread Ahmed Hamdy (Jira)
Ahmed Hamdy created FLINK-31772:
---

 Summary: AsyncSinkWriter Performance regression due to AIMD rate 
limiting strategy
 Key: FLINK-31772
 URL: https://issues.apache.org/jira/browse/FLINK-31772
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Common, Connectors / Firehose, Connectors / 
Kinesis
Affects Versions: aws-connector-4.1.0, aws-connector-4.0.0, 1.16.1, 
aws-connector-3.0.0, 1.16.0
Reporter: Ahmed Hamdy
 Fix For: 1.16.2, aws-connector-4.2.0
 Attachments: Screenshot 2023-04-11 at 12.56.10.png, Screenshot 
2023-04-11 at 12.58.09.png, Screenshot 2023-04-11 at 13.01.47.png

h1. Issue

While benchmarking the {{KinesisStreamSink}} for 1.15 against the legacy
{{FlinkKinesisProducer}}, it is observed that the new sink has a performance
regression against the deprecated sink for the same environment settings.

Further investigation identified the AIMD rate limiting strategy as the
bottleneck behind the regression.

Attached are results for {{KinesisStreamSink}} against {{FlinkKinesisProducer}},
and for {{KinesisStreamSink}} after disabling {{AIMDRatelimitingStrategy}}.

h2. Environment Settings
- Benchmarking was performed on AWS KDA.
- The application logic simply forwards records downstream.
- Application parallelism was tested at 1.
- The Kinesis stream was tested with 8 and 12 shards.
- Payload sizes were 1 KB and 100 KB.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31772) AsyncSinkWriter Performance regression due to AIMD rate limiting strategy

2023-04-11 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-31772:

Description: 
h1. Issue

While benchmarking the {{KinesisStreamSink}} for 1.15 against the legacy
{{FlinkKinesisProducer}}, it is observed that the new sink has a performance
regression against the deprecated sink for the same environment settings.

Further investigation identified the AIMD rate limiting strategy as the
bottleneck behind the regression.

Attached are results for {{KinesisStreamSink}} against {{FlinkKinesisProducer}},
and for {{KinesisStreamSink}} after disabling {{AIMDRatelimitingStrategy}}.



h2. Environment Settings
- Benchmarking was performed on AWS KDA.
- The application logic simply forwards records downstream.
- Application parallelism was tested at 1.
- The Kinesis stream was tested with 8 and 12 shards.
- Payload sizes were 1 KB and 100 KB.

 

  was:
h1. Issue

While benchmarking the {{KinesisStreamSink}} for 1.15 against the legacy
{{FlinkKinesisProducer}}, it is observed that the new sink has a performance
regression against the deprecated sink for the same environment settings.

Further investigation identified the AIMD rate limiting strategy as the
bottleneck behind the regression.

Attached are results for {{KinesisStreamSink}} against {{FlinkKinesisProducer}},
and for {{KinesisStreamSink}} after disabling {{AIMDRatelimitingStrategy}}.

h2. Environment Settings
- Benchmarking was performed on AWS KDA.
- The application logic simply forwards records downstream.
- Application parallelism was tested at 1.
- The Kinesis stream was tested with 8 and 12 shards.
- Payload sizes were 1 KB and 100 KB.

 


> AsyncSinkWriter Performance regression due to AIMD rate limiting strategy
> -
>
> Key: FLINK-31772
> URL: https://issues.apache.org/jira/browse/FLINK-31772
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common, Connectors / Firehose, Connectors / 
> Kinesis
>Affects Versions: 1.16.0, aws-connector-3.0.0, 1.16.1, 
> aws-connector-4.0.0, aws-connector-4.1.0
>Reporter: Ahmed Hamdy
>Priority: Major
> Fix For: 1.16.2, aws-connector-4.2.0
>
> Attachments: Screenshot 2023-04-11 at 12.56.10.png, Screenshot 
> 2023-04-11 at 12.58.09.png, Screenshot 2023-04-11 at 13.01.47.png
>
>
> h1. Issue
> While benchmarking the {{KinesisStreamSink}} for 1.15 against the legacy
> {{FlinkKinesisProducer}}, it is observed that the new sink has a performance
> regression against the deprecated sink for the same environment settings.
> Further investigation identified the AIMD rate limiting strategy as the
> bottleneck behind the regression.
> Attached are results for {{KinesisStreamSink}} against {{FlinkKinesisProducer}},
> and for {{KinesisStreamSink}} after disabling {{AIMDRatelimitingStrategy}}.
> h2. Environment Settings
> - Benchmarking was performed on AWS KDA.
> - The application logic simply forwards records downstream.
> - Application parallelism was tested at 1.
> - The Kinesis stream was tested with 8 and 12 shards.
> - Payload sizes were 1 KB and 100 KB.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2023-03-15 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17700734#comment-17700734
 ] 

Ahmed Hamdy commented on FLINK-31472:
-

I agree with [~martijnvisser]'s suggestion!

I haven't encountered a similar issue. I will double-check, though, and update
the ticket if I find anything.
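For context, the invariant this test trips over is that {{AsyncSinkWriter#flush}} may call {{MailboxExecutor#yield}}, so it must itself run on the task's mailbox thread rather than on a timer or test thread. A minimal sketch of the legal pattern, assuming a {{mailboxExecutor}} and a {{writer}} are in scope (the flush call here is illustrative):

{code:java}
// Work that may call mailboxExecutor.yield() must be enqueued on the mailbox,
// not invoked directly from a timer or test thread.
mailboxExecutor.execute(
        () -> writer.flush(true),        // illustrative flush of the async sink writer
        "flush AsyncSinkWriter");
{code}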

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1
>Reporter: Ran Tao
>Priority: Major
>
> when run mvn clean test, this case failed occasionally.
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
>         at 
> 

[jira] [Commented] (FLINK-30643) docs_check fails

2023-01-12 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17675878#comment-17675878
 ] 

Ahmed Hamdy commented on FLINK-30643:
-

Hello, this is due to the migration of the AWS connectors.
It is tracked under FLINK-30641.

> docs_check fails
> 
>
> Key: FLINK-30643
> URL: https://issues.apache.org/jira/browse/FLINK-30643
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
>
> We experience failures in the documentation checks:
> {code:java}
> ERROR 2023/01/12 00:16:38 [en] REF_NOT_FOUND: Ref 
> "docs/connectors/table/kinesis": 
> "/home/vsts/work/1/s/docs/content/docs/connectors/table/formats/overview.md:45:20":
>  page not found
> ERROR 2023/01/12 00:16:38 [en] REF_NOT_FOUND: Ref 
> "docs/connectors/table/firehose": 
> "/home/vsts/work/1/s/docs/content/docs/connectors/table/formats/overview.md:46:20":
>  page not found
> ERROR 2023/01/12 00:16:38 [en] REF_NOT_FOUND: Ref 
> "docs/connectors/table/kinesis": 
> "/home/vsts/work/1/s/docs/content/docs/connectors/table/formats/overview.md:53:20":
>  page not found
> ERROR 2023/01/12 00:16:38 [en] REF_NOT_FOUND: Ref 
> "docs/connectors/table/firehose": 
> "/home/vsts/work/1/s/docs/content/docs/connectors/table/formats/overview.md:54:20":
>  page not found
> ERROR 2023/01/12 00:16:38 [en] REF_NOT_FOUND: Ref 
> "docs/connectors/table/kinesis": 
> "/home/vsts/work/1/s/docs/content/docs/connectors/table/formats/overview.md:62:21":
>  page not found
> ERROR 2023/01/12 00:16:38 [en] REF_NOT_FOUND: Ref 
> "docs/connectors/table/firehose": 
> "/home/vsts/work/1/s/docs/content/docs/connectors/table/formats/overview.md:63:21":
>  page not found
> ERROR 2023/01/12 00:16:38 [en] REF_NOT_FOUND: Ref 
> "docs/connectors/table/kinesis": 
> "/home/vsts/work/1/s/docs/content/docs/connectors/table/formats/overview.md:103:20":
>  page not found
> ERROR 2023/01/12 00:16:38 [en] REF_NOT_FOUND: Ref 
> "docs/connectors/table/firehose": 
> "/home/vsts/work/1/s/docs/content/docs/connectors/table/formats/overview.md:104:20":
>  page not found
> ERROR 2023/01/12 00:16:38 [en] REF_NOT_FOUND: Ref 
> "docs/connectors/datastream/dynamodb": 
> "/home/vsts/work/1/s/docs/content/docs/connectors/datastream/overview.md:43:22":
>  page not found
> ERROR 2023/01/12 00:16:38 [en] REF_NOT_FOUND: Ref 
> "docs/connectors/datastream/kinesis": 
> "/home/vsts/work/1/s/docs/content/docs/connectors/datastream/overview.md:44:34":
>  page not found
> ERROR 2023/01/12 00:16:38 [en] REF_NOT_FOUND: Ref 
> "docs/connectors/datastream/firehose": 
> "/home/vsts/work/1/s/docs/content/docs/connectors/datastream/overview.md:45:35":
>  page not found
> ERROR 2023/01/12 00:16:38 [en] REF_NOT_FOUND: Ref 
> "docs/connectors/table/dynamodb": 
> "/home/vsts/work/1/s/docs/content/docs/connectors/table/overview.md:70:20": 
> page not found
> ERROR 2023/01/12 00:16:38 [en] REF_NOT_FOUND: Ref 
> "docs/connectors/table/kinesis": 
> "/home/vsts/work/1/s/docs/content/docs/connectors/table/overview.md:76:20": 
> page not found
> ERROR 2023/01/12 00:16:38 [en] REF_NOT_FOUND: Ref 
> "docs/connectors/table/firehose": 
> "/home/vsts/work/1/s/docs/content/docs/connectors/table/overview.md:82:20": 
> page not found
> ERROR 2023/01/12 00:16:45 [zh] REF_NOT_FOUND: Ref 
> "docs/connectors/table/kinesis": 
> "/home/vsts/work/1/s/docs/content.zh/docs/connectors/table/formats/overview.md:45:20":
>  page not found
> ERROR 2023/01/12 00:16:45 [zh] REF_NOT_FOUND: Ref 
> "docs/connectors/table/firehose": 
> "/home/vsts/work/1/s/docs/content.zh/docs/connectors/table/formats/overview.md:46:20":
>  page not found
> ERROR 2023/01/12 00:16:45 [zh] REF_NOT_FOUND: Ref 
> "docs/connectors/table/kinesis": 
> "/home/vsts/work/1/s/docs/content.zh/docs/connectors/table/formats/overview.md:53:20":
>  page not found
> ERROR 2023/01/12 00:16:45 [zh] REF_NOT_FOUND: Ref 
> "docs/connectors/table/firehose": 
> "/home/vsts/work/1/s/docs/content.zh/docs/connectors/table/formats/overview.md:54:20":
>  page not found
> ERROR 2023/01/12 00:16:45 [zh] REF_NOT_FOUND: Ref 
> "docs/connectors/table/kinesis": 
> "/home/vsts/work/1/s/docs/content.zh/docs/connectors/table/formats/overview.md:62:21":
>  page not found
> ERROR 2023/01/12 00:16:45 [zh] REF_NOT_FOUND: Ref 
> "docs/connectors/table/firehose": 
> "/home/vsts/work/1/s/docs/content.zh/docs/connectors/table/formats/overview.md:63:21":
>  page not found
> ERROR 2023/01/12 00:16:45 [zh] REF_NOT_FOUND: Ref 
> "docs/connectors/table/kinesis": 
> "/home/vsts/work/1/s/docs/content.zh/docs/connectors/table/formats/overview.md:103:20":
>  page not found
> ERROR 2023/01/12 00:16:45 [zh] REF_NOT_FOUND: Ref 
> "docs/connectors/table/firehose": 
> 

[jira] [Commented] (FLINK-27941) [JUnit5 Migration] Module: flink-connector-kinesis

2023-01-09 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17655992#comment-17655992
 ] 

Ahmed Hamdy commented on FLINK-27941:
-

[~mapohl] 

Given that the connectors have now been moved to
[https://github.com/apache/flink-connector-aws] and that the PR seems to have
stalled for a long time, would it make sense to reassign the ticket?

Please let me know if we can move this forward; happy to take it then.

> [JUnit5 Migration] Module: flink-connector-kinesis
> --
>
> Key: FLINK-27941
> URL: https://issues.apache.org/jira/browse/FLINK-27941
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kinesis
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Assignee: Jun He
>Priority: Minor
>  Labels: pull-request-available, stale-assigned, starter
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29854) Make Record Size Flush Strategy Optional for Async Sink

2023-01-03 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17653981#comment-17653981
 ] 

Ahmed Hamdy commented on FLINK-29854:
-

Proposed improvement under
https://cwiki.apache.org/confluence/display/FLINK/FLIP-284+%3A+Making+AsyncSinkWriter+Flush+triggers+adjustable
I will update the issue accordingly.

> Make Record Size Flush Strategy Optional for Async Sink
> ---
>
> Key: FLINK-29854
> URL: https://issues.apache.org/jira/browse/FLINK-29854
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Reporter: Danny Cranmer
>Assignee: Ahmed Hamdy
>Priority: Major
>
> h3. Background
> Currently, AsyncSinkWriter supports three mechanisms that trigger a flush to
> the destination:
>  * Time-based
>  * Batch size in bytes
>  * Number of records in the batch
> For "batch size in bytes" one must implement 
> [getSizeInBytes|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriter.java#L202]
>  in order for the base to calculate the total batch size. In some cases 
> computing the batch size within the AsyncSinkWriter is an expensive 
> operation, or not possible. For example, the DynamoDB connector needs to 
> determine the serialized size of {{DynamoDbWriteRequest}}. 
> (https://github.com/apache/flink-connector-dynamodb/pull/1/files#r1012223894)
> h3. Scope
> Add a feature to make "size in bytes" support optional, this includes:
> - Connectors will not be required to implement {{getSizeInBytes}}
> - Batches will not be validated for max size
> - Records will not be validated for size
> - Batches are not flushed when max size is exceeded
> The sink implementer can decide if it is appropriate to enable this feature.
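To make the cost concrete, this is the hook every implementation currently has to provide; the entry type matches the DynamoDB example above, while the {{serializedSize(...)}} helper is hypothetical:

{code:java}
// Today's mandatory size hook in an AsyncSinkWriter subclass. For DynamoDB-style
// entries, measuring a request means serializing the attribute map just to count
// bytes -- work the SDK repeats later. serializedSize(...) is a hypothetical helper.
@Override
protected long getSizeInBytes(DynamoDbWriteRequest requestEntry) {
    return serializedSize(requestEntry.getItem());
}
{code}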



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-26088) Add Elasticsearch 8.0 support

2022-12-15 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17647930#comment-17647930
 ] 

Ahmed Hamdy commented on FLINK-26088:
-

Hello [~mtfelisb], 
Thanks for the contribution; I am happy to review any PRs once they are ready.


> Add Elasticsearch 8.0 support
> -
>
> Key: FLINK-26088
> URL: https://issues.apache.org/jira/browse/FLINK-26088
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Reporter: Yuhao Bi
>Assignee: zhenzhenhua
>Priority: Major
>
> Since Elasticsearch 8.0 is officially released, I think it's time to consider 
> adding es8 connector support.
> The High Level REST Client we used for connection [is marked deprecated in es 
> 7.15.0|https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high.html].
>  Maybe we can migrate to use the new [Java API 
> Client|https://www.elastic.co/guide/en/elasticsearch/client/java-api-client/8.0/index.html]
>  at this time.
> Elasticsearch8.0 release note: 
> [https://www.elastic.co/guide/en/elasticsearch/reference/8.0/release-notes-8.0.0.html]
> release highlights: 
> [https://www.elastic.co/guide/en/elasticsearch/reference/8.0/release-highlights.html]
> REST API compatibility: 
> https://www.elastic.co/guide/en/elasticsearch/reference/8.0/rest-api-compatibility.html
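For reference, a minimal sketch of indexing a document with the new Java API Client that this migration targets; the host, index name, and document payload are illustrative, and the fragment omits a surrounding method and its checked {{IOException}}:

{code:java}
// Minimal ES 8 Java API Client setup and index call; values are illustrative.
RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200)).build();
ElasticsearchTransport transport =
        new RestClientTransport(restClient, new JacksonJsonpMapper());
ElasticsearchClient client = new ElasticsearchClient(transport);

client.index(i -> i
        .index("my-index")
        .id("1")
        .document(Map.of("field", "value")));
{code}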



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30304) Possible Deadlock in Kinesis/Firehose/DynamoDB Connector

2022-12-06 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17643966#comment-17643966
 ] 

Ahmed Hamdy commented on FLINK-30304:
-

Thanks [~dannycranmer] for reporting the issue. The root-cause analysis seems
correct to me.

{{- I do not think we can do anything about this besides wait for the fix in 
the AWS SDK.}}

I agree, I can't find a quick workaround on the concrete writer side.

- {{One way around this is to keep track of inflight request time, and fail the 
job (or retry) upon some timeout.}}

I think this might need more investigation, as it might break some existing
implementations, unless we let the sink implementer override it; but that would
not be backward compatible. WDYT?
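A minimal sketch of the suggested in-flight timeout guard, assuming the AWS SDK v2 async client (which returns a {{CompletableFuture}}); the 60-second value and the {{fatalExceptionConsumer}} handler are placeholders:

{code:java}
// Guard an in-flight async request with a timeout instead of waiting forever
// on a future the SDK may never complete. Values and handler are placeholders.
CompletableFuture<PutRecordsResponse> inFlight = kinesisClient.putRecords(request);
inFlight
        .orTimeout(60, TimeUnit.SECONDS)
        .whenComplete((response, error) -> {
            if (error instanceof TimeoutException) {
                // Fail the job (or schedule a retry) rather than deadlock.
                fatalExceptionConsumer.accept(error);
            }
        });
{code}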


> Possible Deadlock in Kinesis/Firehose/DynamoDB Connector
> 
>
> Key: FLINK-30304
> URL: https://issues.apache.org/jira/browse/FLINK-30304
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / DynamoDB, Connectors / Firehose, Connectors 
> / Kinesis
>Affects Versions: 1.16.0, 1.15.3, aws-connector-3.0.0
>Reporter: Danny Cranmer
>Assignee: Danny Cranmer
>Priority: Critical
> Fix For: 1.17.0, 1.16.1, 1.15.4, aws-connector-4.0.0, 
> aws-connector-3.1.0
>
> Attachments: sink-deadlock.png
>
>
> AWS Sinks based on Async Sink can enter a deadlock situation if the AWS async 
> client throws error outside of the future. We observed this with a local 
> application:
> {code:java}
> java.lang.NullPointerException
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.utils.NettyUtils.closedChannelMessage(NettyUtils.java:135)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.utils.NettyUtils.decorateException(NettyUtils.java:71)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.NettyRequestExecutor.handleFailure(NettyRequestExecutor.java:310)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.NettyRequestExecutor.makeRequestListener(NettyRequestExecutor.java:189)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.CancellableAcquireChannelPool.lambda$acquire$1(CancellableAcquireChannelPool.java:58)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.HealthCheckedChannelPool.ensureAcquiredChannelIsHealthy(HealthCheckedChannelPool.java:114)
> at 
> org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.HealthCheckedChannelPool.lambda$tryAcquire$1(HealthCheckedChannelPool.java:97)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.access$200(DefaultPromise.java:35)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:502)
> at 
> org.apache.flink.kinesis.shaded.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
> at 
> 

[jira] [Commented] (FLINK-29991) KinesisFirehoseSinkTest#firehoseSinkFailsWhenUnableToConnectToRemoteService failed

2022-11-11 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17632192#comment-17632192
 ] 

Ahmed Hamdy commented on FLINK-29991:
-

Hi [~martijnvisser],
Can you assign this to me? I will have a look at it ASAP!

> KinesisFirehoseSinkTest#firehoseSinkFailsWhenUnableToConnectToRemoteService 
> failed 
> ---
>
> Key: FLINK-29991
> URL: https://issues.apache.org/jira/browse/FLINK-29991
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.15.2
>Reporter: Martijn Visser
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> Nov 10 10:22:53 [ERROR] 
> org.apache.flink.connector.firehose.sink.KinesisFirehoseSinkTest.firehoseSinkFailsWhenUnableToConnectToRemoteService
>   Time elapsed: 7.394 s  <<< FAILURE!
> Nov 10 10:22:53 java.lang.AssertionError: 
> Nov 10 10:22:53 
> Nov 10 10:22:53 Expecting throwable message:
> Nov 10 10:22:53   "An OperatorEvent from an OperatorCoordinator to a task was 
> lost. Triggering task failover to ensure consistency. Event: 
> '[NoMoreSplitEvent]', targetTask: Source: Sequence Source -> Map -> Map -> 
> Sink: Writer (15/32) - execution #0"
> Nov 10 10:22:53 to contain:
> Nov 10 10:22:53   "Received an UnknownHostException when attempting to 
> interact with a service."
> Nov 10 10:22:53 but did not.
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43017=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d=44513



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29854) Make Record Size Flush Strategy Optional for Async Sink

2022-11-02 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17627981#comment-17627981
 ] 

Ahmed Hamdy commented on FLINK-29854:
-

Hi [~dannycranmer], could you please assign me this issue?

> Make Record Size Flush Strategy Optional for Async Sink
> ---
>
> Key: FLINK-29854
> URL: https://issues.apache.org/jira/browse/FLINK-29854
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Reporter: Danny Cranmer
>Priority: Major
>
> h3. Background
> Currently, AsyncSinkWriter supports three mechanisms that trigger a flush to
> the destination:
>  * Time-based
>  * Batch size in bytes
>  * Number of records in the batch
> For "batch size in bytes" one must implement 
> [getSizeInBytes|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriter.java#L202]
>  in order for the base to calculate the total batch size. In some cases 
> computing the batch size within the AsyncSinkWriter is an expensive 
> operation, or not possible. For example, the DynamoDB connector needs to 
> determine the serialized size of {{DynamoDbWriteRequest}}. 
> (https://github.com/apache/flink-connector-dynamodb/pull/1/files#r1012223894)
> h3. Scope
> Add a feature to make "size in bytes" support optional, this includes:
> - Connectors will not be required to implement {{getSizeInBytes}}
> - Batches will not be validated for max size
> - Records will not be validated for size
> The sink implementer can decide if it is appropriate to enable this feature.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28332) GlueSchemaRegistryJsonKinesisITCase is being Ignored due to `Access key not configured`

2022-06-30 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-28332:

Component/s: Formats (JSON, Avro, Parquet, ORC, SequenceFile)

> GlueSchemaRegistryJsonKinesisITCase is being Ignored due to `Access key not 
> configured`
> ---
>
> Key: FLINK-28332
> URL: https://issues.apache.org/jira/browse/FLINK-28332
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Tests
>Affects Versions: 1.15.0
>Reporter: Ahmed Hamdy
>Priority: Major
>
> h1. Description
> The {{GlueSchemaRegistryJsonKinesisITCase}} test is not being run on CI and is
> skipped due to {{Access key not configured}}.
> The access key and secret key should be added to the test environment variables
> to enable the test.
> Currently, on adding these keys to the environment variables, the test fails with
> {quote}AWSSchemaRegistryException: Exception occurred while fetching or 
> registering schema definition = 
> {"$id":"https://example.com/address.schema.json","$schema":"http://json-schema.org/draft-07/schema#","type":"object","properties":{"f1":{"type":"string"},"f2":{"type":"integer","maximum":1}}},
>  schema name = gsr_json_input_stream 
>   at 
> com.amazonaws.services.schemaregistry.common.AWSSchemaRegistryClient.getORRegisterSchemaVersionId(AWSSchemaRegistryClient.java:202)
> {quote}
> These tests should be enabled and fixed.
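A hedged sketch of the guard pattern behind the skip, using JUnit assumptions; the environment variable names here are assumptions, not necessarily the ones the IT case actually reads:

{code:java}
// Illustrative skip guard: the IT case is ignored when credentials are absent.
// The variable names are assumptions for illustration only.
Assumptions.assumeTrue(
        System.getenv("IT_CASE_ACCESS_KEY") != null
                && System.getenv("IT_CASE_SECRET_KEY") != null,
        "Access key not configured, skipping test");
{code}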



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28333) GlueSchemaRegistryAvroKinesisITCase is being Ignored due to `Access key not configured`

2022-06-30 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-28333:

Component/s: Formats (JSON, Avro, Parquet, ORC, SequenceFile)

> GlueSchemaRegistryAvroKinesisITCase is being Ignored due to `Access key not 
> configured`
> ---
>
> Key: FLINK-28333
> URL: https://issues.apache.org/jira/browse/FLINK-28333
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Tests
>Affects Versions: 1.15.0
>Reporter: Ahmed Hamdy
>Priority: Major
>
> h1. Description
> The {{GlueSchemaRegistryAvroKinesisITCase}} test is not being run on CI and is
> skipped due to {{Access key not configured}}.
> The access key and secret key should be added to the test environment variables
> to enable the test.
> Currently, on adding these keys to the environment variables, the test fails with
> {quote}AWSSchemaRegistryException: Exception occurred while fetching or 
> registering schema definition = 
> {"type":"record","name":"User","namespace":"org.apache.flink.glue.schema.registry.test","fields":[{"name":"name","type":"string"},{"name":"favorite_number","type":["int","null"]},{"name":"favorite_color","type":["string","null"]}]},
>  schema name = gsr_avro_input_stream 
>   at 
> com.amazonaws.services.schemaregistry.common.AWSSchemaRegistryClient.getORRegisterSchemaVersionId(AWSSchemaRegistryClient.java:202){quote}
> These tests should be enabled and fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28333) GlueSchemaRegistryAvroKinesisITCase is being Ignored due to `Access key not configured`

2022-06-30 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-28333:

Description: 
h1. Description
{{GlueSchemaRegistryAvroKinesisITCase}} test is not being run on CI and is 
skipped due to {{Access key not configured}}. 
Access Key and Secret Key should be added to test environment variables to 
enable test.

Currently on adding these keys to environment variables the test fails with 
{quote}AWSSchemaRegistryException: Exception occurred while fetching or 
registering schema definition = 
{"type":"record","name":"User","namespace":"org.apache.flink.glue.schema.registry.test","fields":[{"name":"name","type":"string"},{"name":"favorite_number","type":["int","null"]},{"name":"favorite_color","type":["string","null"]}]},
 schema name = gsr_avro_input_stream 
at 
com.amazonaws.services.schemaregistry.common.AWSSchemaRegistryClient.getORRegisterSchemaVersionId(AWSSchemaRegistryClient.java:202){quote}

These tests should be enabled and fixed.

  was:
h1. Description
{{GlueSchemaRegistryAvroKinesisITCase}} test is not being run on CI and is 
skipped due to {{Access key not configured}}. 
Access Key and Secret Key should be added to test environment variables to 
enable test.

Currently on adding these test to environment variables the test fails with 
{quote}AWSSchemaRegistryException: Exception occurred while fetching or 
registering schema definition = 
{"type":"record","name":"User","namespace":"org.apache.flink.glue.schema.registry.test","fields":[{"name":"name","type":"string"},{"name":"favorite_number","type":["int","null"]},{"name":"favorite_color","type":["string","null"]}]},
 schema name = gsr_avro_input_stream 
at 
com.amazonaws.services.schemaregistry.common.AWSSchemaRegistryClient.getORRegisterSchemaVersionId(AWSSchemaRegistryClient.java:202){quote}

These tests should be enabled and fixed.


> GlueSchemaRegistryAvroKinesisITCase is being Ignored due to `Access key not 
> configured`
> ---
>
> Key: FLINK-28333
> URL: https://issues.apache.org/jira/browse/FLINK-28333
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Tests
>Affects Versions: 1.15.0
>Reporter: Ahmed Hamdy
>Priority: Major
>
> h1. Description
> {{GlueSchemaRegistryAvroKinesisITCase}} test is not being run on CI and is 
> skipped due to {{Access key not configured}}. 
> Access Key and Secret Key should be added to test environment variables to 
> enable test.
> Currently on adding these keys to environment variables the test fails with 
> {quote}AWSSchemaRegistryException: Exception occurred while fetching or 
> registering schema definition = 
> {"type":"record","name":"User","namespace":"org.apache.flink.glue.schema.registry.test","fields":[{"name":"name","type":"string"},{"name":"favorite_number","type":["int","null"]},{"name":"favorite_color","type":["string","null"]}]},
>  schema name = gsr_avro_input_stream 
>   at 
> com.amazonaws.services.schemaregistry.common.AWSSchemaRegistryClient.getORRegisterSchemaVersionId(AWSSchemaRegistryClient.java:202){quote}
> These tests should be enabled and fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28332) GlueSchemaRegistryJsonKinesisITCase is being Ignored due to `Access key not configured`

2022-06-30 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-28332:

Description: 
h1. Description
{{GlueSchemaRegistryJsonKinesisITCase}} test is not being run on CI and is 
skipped due to {{Access key not configured}}. 
Access Key and Secret Key should be added to test environment variables to 
enable test.

Currently on adding these keys to environment variables the test fails with 
{quote}AWSSchemaRegistryException: Exception occurred while fetching or 
registering schema definition = 
{"$id":"https://example.com/address.schema.json","$schema":"http://json-schema.org/draft-07/schema#","type":"object","properties":{"f1":{"type":"string"},"f2":{"type":"integer","maximum":1}}},
 schema name = gsr_json_input_stream 
at 
com.amazonaws.services.schemaregistry.common.AWSSchemaRegistryClient.getORRegisterSchemaVersionId(AWSSchemaRegistryClient.java:202)
{quote}

These tests should be enabled and fixed.

  was:
h1. Description
{{GlueSchemaRegistryJsonKinesisITCase}} test is not being run on CI and is 
skipped due to {{Access key not configured}}. 
Access Key and Secret Key should be added to test environment variables to 
enable test.

Currently on adding these test to environment variables the test fails with 
{quote}AWSSchemaRegistryException: Exception occurred while fetching or 
registering schema definition = 
{"$id":"https://example.com/address.schema.json","$schema":"http://json-schema.org/draft-07/schema#","type":"object","properties":{"f1":{"type":"string"},"f2":{"type":"integer","maximum":1}}},
 schema name = gsr_json_input_stream 
at 
com.amazonaws.services.schemaregistry.common.AWSSchemaRegistryClient.getORRegisterSchemaVersionId(AWSSchemaRegistryClient.java:202)
{quote}

These tests should be enabled and fixed.


> GlueSchemaRegistryJsonKinesisITCase is being Ignored due to `Access key not 
> configured`
> ---
>
> Key: FLINK-28332
> URL: https://issues.apache.org/jira/browse/FLINK-28332
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.15.0
>Reporter: Ahmed Hamdy
>Priority: Major
>
> h1. Description
> {{GlueSchemaRegistryJsonKinesisITCase}} test is not being run on CI and is 
> skipped due to {{Access key not configured}}. 
> Access Key and Secret Key should be added to test environment variables to 
> enable test.
> Currently on adding these keys to environment variables the test fails with 
> {quote}AWSSchemaRegistryException: Exception occurred while fetching or 
> registering schema definition = 
> {"$id":"https://example.com/address.schema.json","$schema":"http://json-schema.org/draft-07/schema#","type":"object","properties":{"f1":{"type":"string"},"f2":{"type":"integer","maximum":1}}},
>  schema name = gsr_json_input_stream 
>   at 
> com.amazonaws.services.schemaregistry.common.AWSSchemaRegistryClient.getORRegisterSchemaVersionId(AWSSchemaRegistryClient.java:202)
> {quote}
> These tests should be enabled and fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28332) GlueSchemaRegistryJsonKinesisITCase is being Ignored due to `Access key not configured`

2022-06-30 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-28332:

Affects Version/s: 1.15.0
   (was: 1.15.1)

> GlueSchemaRegistryJsonKinesisITCase is being Ignored due to `Access key not 
> configured`
> ---
>
> Key: FLINK-28332
> URL: https://issues.apache.org/jira/browse/FLINK-28332
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.15.0
>Reporter: Ahmed Hamdy
>Priority: Major
>
> h1. Description
> The {{GlueSchemaRegistryJsonKinesisITCase}} test is not being run on CI and is
> skipped due to {{Access key not configured}}.
> The access key and secret key should be added to the test environment variables
> to enable the test.
> Currently, on adding these keys to the environment variables, the test fails with
> {quote}AWSSchemaRegistryException: Exception occurred while fetching or 
> registering schema definition = 
> {"$id":"https://example.com/address.schema.json","$schema":"http://json-schema.org/draft-07/schema#","type":"object","properties":{"f1":{"type":"string"},"f2":{"type":"integer","maximum":1}}},
>  schema name = gsr_json_input_stream 
>   at 
> com.amazonaws.services.schemaregistry.common.AWSSchemaRegistryClient.getORRegisterSchemaVersionId(AWSSchemaRegistryClient.java:202)
> {quote}
> These tests should be enabled and fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28333) GlueSchemaRegistryAvroKinesisITCase is being Ignored due to `Access key not configured`

2022-06-30 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-28333:

Affects Version/s: 1.15.0
   (was: 1.15.1)

> GlueSchemaRegistryAvroKinesisITCase is being Ignored due to `Access key not 
> configured`
> ---
>
> Key: FLINK-28333
> URL: https://issues.apache.org/jira/browse/FLINK-28333
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.15.0
>Reporter: Ahmed Hamdy
>Priority: Major
>
> h1. Description
> The {{GlueSchemaRegistryAvroKinesisITCase}} test is not being run on CI and is
> skipped due to {{Access key not configured}}.
> The access key and secret key should be added to the test environment variables
> to enable the test.
> Currently, on adding these keys to the environment variables, the test fails with
> {quote}AWSSchemaRegistryException: Exception occurred while fetching or 
> registering schema definition = 
> {"type":"record","name":"User","namespace":"org.apache.flink.glue.schema.registry.test","fields":[{"name":"name","type":"string"},{"name":"favorite_number","type":["int","null"]},{"name":"favorite_color","type":["string","null"]}]},
>  schema name = gsr_avro_input_stream 
>   at 
> com.amazonaws.services.schemaregistry.common.AWSSchemaRegistryClient.getORRegisterSchemaVersionId(AWSSchemaRegistryClient.java:202){quote}
> These tests should be enabled and fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-28333) GlueSchemaRegistryAvroKinesisITCase is being Ignored due to `Access key not configured`

2022-06-30 Thread Ahmed Hamdy (Jira)
Ahmed Hamdy created FLINK-28333:
---

 Summary: GlueSchemaRegistryAvroKinesisITCase is being Ignored due 
to `Access key not configured`
 Key: FLINK-28333
 URL: https://issues.apache.org/jira/browse/FLINK-28333
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.15.1
Reporter: Ahmed Hamdy


h1. Description
The {{GlueSchemaRegistryJsonKinesisITCase}} test is not being run on CI and is
skipped due to {{Access key not configured}}.
The access key and secret key should be added to the test environment variables
to enable the test.

Currently, on adding these keys to the environment variables, the test fails with
{quote}AWSSchemaRegistryException: Exception occurred while fetching or 
registering schema definition = 
{"$id":"https://example.com/address.schema.json","$schema":"http://json-schema.org/draft-07/schema#","type":"object","properties":{"f1":{"type":"string"},"f2":{"type":"integer","maximum":1}}},
 schema name = gsr_json_input_stream 
at 
com.amazonaws.services.schemaregistry.common.AWSSchemaRegistryClient.getORRegisterSchemaVersionId(AWSSchemaRegistryClient.java:202)
{quote}

These tests should be enabled and fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28333) GlueSchemaRegistryAvroKinesisITCase is being Ignored due to `Access key not configured`

2022-06-30 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-28333:

Description: 
h1. Description
{{GlueSchemaRegistryAvroKinesisITCase}} test is not being run on CI and is 
skipped due to {{Access key not configured}}. 
Access Key and Secret Key should be added to test environment variables to 
enable test.

Currently on adding these test to environment variables the test fails with 
{quote}AWSSchemaRegistryException: Exception occurred while fetching or 
registering schema definition = 
{"type":"record","name":"User","namespace":"org.apache.flink.glue.schema.registry.test","fields":[{"name":"name","type":"string"},{"name":"favorite_number","type":["int","null"]},{"name":"favorite_color","type":["string","null"]}]},
 schema name = gsr_avro_input_stream 
at 
com.amazonaws.services.schemaregistry.common.AWSSchemaRegistryClient.getORRegisterSchemaVersionId(AWSSchemaRegistryClient.java:202){quote}

These tests should be enabled and fixed.

  was:
h1. Description
{{GlueSchemaRegistryJsonKinesisITCase}} test is not being run on CI and is 
skipped due to {{Access key not configured}}. 
Access Key and Secret Key should be added to test environment variables to 
enable test.

Currently on adding these test to environment variables the test fails with 
{quote}AWSSchemaRegistryException: Exception occurred while fetching or 
registering schema definition = 
{"$id":"https://example.com/address.schema.json","$schema":"http://json-schema.org/draft-07/schema#","type":"object","properties":{"f1":{"type":"string"},"f2":{"type":"integer","maximum":1}}},
 schema name = gsr_json_input_stream 
at 
com.amazonaws.services.schemaregistry.common.AWSSchemaRegistryClient.getORRegisterSchemaVersionId(AWSSchemaRegistryClient.java:202)
{quote}

These tests should be enabled and fixed.


> GlueSchemaRegistryAvroKinesisITCase is being Ignored due to `Access key not 
> configured`
> ---
>
> Key: FLINK-28333
> URL: https://issues.apache.org/jira/browse/FLINK-28333
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.15.1
>Reporter: Ahmed Hamdy
>Priority: Major
>
> h1. Description
> The {{GlueSchemaRegistryAvroKinesisITCase}} test is not being run on CI; it is 
> skipped due to {{Access key not configured}}. 
> An Access Key and Secret Key should be added to the test environment variables 
> to enable the test.
> Currently, after adding these keys to the environment variables, the test 
> fails with:
> {quote}AWSSchemaRegistryException: Exception occurred while fetching or 
> registering schema definition = 
> {"type":"record","name":"User","namespace":"org.apache.flink.glue.schema.registry.test","fields":[{"name":"name","type":"string"},{"name":"favorite_number","type":["int","null"]},{"name":"favorite_color","type":["string","null"]}]},
>  schema name = gsr_avro_input_stream 
>   at 
> com.amazonaws.services.schemaregistry.common.AWSSchemaRegistryClient.getORRegisterSchemaVersionId(AWSSchemaRegistryClient.java:202){quote}
> These tests should be enabled and fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-28332) GlueSchemaRegistryJsonKinesisITCase is being Ignored due to `Access key not configured`

2022-06-30 Thread Ahmed Hamdy (Jira)
Ahmed Hamdy created FLINK-28332:
---

 Summary: GlueSchemaRegistryJsonKinesisITCase is being Ignored due 
to `Access key not configured`
 Key: FLINK-28332
 URL: https://issues.apache.org/jira/browse/FLINK-28332
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.15.1
Reporter: Ahmed Hamdy


h1. Description
The {{GlueSchemaRegistryJsonKinesisITCase}} test is not being run on CI; it is 
skipped due to {{Access key not configured}}. 
An Access Key and Secret Key should be added to the test environment variables 
to enable the test.

Currently, after adding these keys to the environment variables, the test fails 
with:
{quote}AWSSchemaRegistryException: Exception occurred while fetching or 
registering schema definition = 
{"$id":"https://example.com/address.schema.json","$schema":"http://json-schema.org/draft-07/schema#","type":"object","properties":{"f1":{"type":"string"},"f2":{"type":"integer","maximum":1}}},
 schema name = gsr_json_input_stream 
at 
com.amazonaws.services.schemaregistry.common.AWSSchemaRegistryClient.getORRegisterSchemaVersionId(AWSSchemaRegistryClient.java:202)
{quote}

These tests should be enabled and fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28169) GlueSchemaRegistryJsonKinesisITCase fails on JDK11 due to NoSuchMethodError

2022-06-24 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17558389#comment-17558389
 ] 

Ahmed Hamdy commented on FLINK-28169:
-

[~martijnvisser] 
A PR to disable the test is available, to unblock CI.
I will follow up with a fix PR.

> GlueSchemaRegistryJsonKinesisITCase fails on JDK11 due to NoSuchMethodError
> ---
>
> Key: FLINK-28169
> URL: https://issues.apache.org/jira/browse/FLINK-28169
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0
>Reporter: Martijn Visser
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> {code:java}
> Jun 21 03:06:27 Caused by: 
> org.testcontainers.containers.ContainerLaunchException: Could not 
> create/start container
> Jun 21 03:06:27   at 
> org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:537)
> Jun 21 03:06:27   at 
> org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:340)
> Jun 21 03:06:27   at 
> org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
> Jun 21 03:06:27   ... 8 more
> Jun 21 03:06:27 Caused by: java.lang.RuntimeException: 
> java.lang.NoSuchMethodError: 
> 'org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.SdkHttpClient 
> org.apache.flink.connector.aws.testutils.AWSServicesTestUtils.createHttpClient()'
> Jun 21 03:06:27   at 
> org.rnorth.ducttape.timeouts.Timeouts.callFuture(Timeouts.java:68)
> Jun 21 03:06:27   at 
> org.rnorth.ducttape.timeouts.Timeouts.getWithTimeout(Timeouts.java:43)
> Jun 21 03:06:27   at 
> org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:40)
> Jun 21 03:06:27   at 
> org.apache.flink.connectors.kinesis.testutils.KinesaliteContainer$ListStreamsWaitStrategy.retryUntilSuccessRunner(KinesaliteContainer.java:150)
> Jun 21 03:06:27   at 
> org.apache.flink.connectors.kinesis.testutils.KinesaliteContainer$ListStreamsWaitStrategy.waitUntilReady(KinesaliteContainer.java:146)
> Jun 21 03:06:27   at 
> org.testcontainers.containers.wait.strategy.AbstractWaitStrategy.waitUntilReady(AbstractWaitStrategy.java:51)
> Jun 21 03:06:27   at 
> org.testcontainers.containers.GenericContainer.waitUntilContainerStarted(GenericContainer.java:926)
> Jun 21 03:06:27   at 
> org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:480)
> Jun 21 03:06:27   ... 10 more
> Jun 21 03:06:27 Caused by: java.lang.NoSuchMethodError: 
> 'org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.SdkHttpClient 
> org.apache.flink.connector.aws.testutils.AWSServicesTestUtils.createHttpClient()'
> Jun 21 03:06:27   at 
> org.apache.flink.connectors.kinesis.testutils.KinesaliteContainer$ListStreamsWaitStrategy.list(KinesaliteContainer.java:157)
> Jun 21 03:06:27   at 
> org.rnorth.ducttape.ratelimits.RateLimiter.getWhenReady(RateLimiter.java:51)
> Jun 21 03:06:27   at 
> org.apache.flink.connectors.kinesis.testutils.KinesaliteContainer$ListStreamsWaitStrategy.lambda$retryUntilSuccessRunner$0(KinesaliteContainer.java:153)
> Jun 21 03:06:27   at 
> org.rnorth.ducttape.unreliables.Unreliables.lambda$retryUntilSuccess$0(Unreliables.java:43)
> Jun 21 03:06:27   at 
> java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> Jun 21 03:06:27   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> Jun 21 03:06:27   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> Jun 21 03:06:27   ... 1 more
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36979=logs=6e8542d7-de38-5a33-4aca-458d6c87066d=5846934b-7a4f-545b-e5b0-eb4d8bda32e1=16659



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-28169) GlueSchemaRegistryJsonKinesisITCase fails on JDK11 due to NoSuchMethodError

2022-06-23 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17558203#comment-17558203
 ] 

Ahmed Hamdy commented on FLINK-28169:
-

Hello [~martijnvisser],
Could you assign the issue to me?
As an FYI, I will raise a PR to disable the test to unblock the build pipeline, 
then proceed with a fix.
Since this is a packaging/shading issue and none of the dependencies have 
ongoing work, we shouldn't need to worry about the ignored test while it is 
being fixed.
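
As a diagnostic aid, a hedged sketch of how one could check which variant of 
{{createHttpClient()}} is actually on the test classpath; the relocated 
(shaded) return type in the stack trace suggests the caller was compiled 
against a different copy of {{AWSServicesTestUtils}} than the one resolved at 
runtime. This is an illustrative probe, not part of the fix itself:

{code:java}
import java.lang.reflect.Method;

public final class ShadingProbe {
    public static void main(String[] args) throws Exception {
        Class<?> utils = Class.forName(
                "org.apache.flink.connector.aws.testutils.AWSServicesTestUtils");
        for (Method m : utils.getDeclaredMethods()) {
            if ("createHttpClient".equals(m.getName())) {
                // Prints either the plain software.amazon.awssdk type or the
                // org.apache.flink.kinesis.shaded... relocation, revealing which
                // copy of the class the test JVM actually loaded.
                System.out.println(m.getReturnType().getName());
            }
        }
    }
}
{code}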

> GlueSchemaRegistryJsonKinesisITCase fails on JDK11 due to NoSuchMethodError
> ---
>
> Key: FLINK-28169
> URL: https://issues.apache.org/jira/browse/FLINK-28169
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.16.0
>Reporter: Martijn Visser
>Priority: Critical
>  Labels: test-stability
>
> {code:java}
> Jun 21 03:06:27 Caused by: 
> org.testcontainers.containers.ContainerLaunchException: Could not 
> create/start container
> Jun 21 03:06:27   at 
> org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:537)
> Jun 21 03:06:27   at 
> org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:340)
> Jun 21 03:06:27   at 
> org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
> Jun 21 03:06:27   ... 8 more
> Jun 21 03:06:27 Caused by: java.lang.RuntimeException: 
> java.lang.NoSuchMethodError: 
> 'org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.SdkHttpClient 
> org.apache.flink.connector.aws.testutils.AWSServicesTestUtils.createHttpClient()'
> Jun 21 03:06:27   at 
> org.rnorth.ducttape.timeouts.Timeouts.callFuture(Timeouts.java:68)
> Jun 21 03:06:27   at 
> org.rnorth.ducttape.timeouts.Timeouts.getWithTimeout(Timeouts.java:43)
> Jun 21 03:06:27   at 
> org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:40)
> Jun 21 03:06:27   at 
> org.apache.flink.connectors.kinesis.testutils.KinesaliteContainer$ListStreamsWaitStrategy.retryUntilSuccessRunner(KinesaliteContainer.java:150)
> Jun 21 03:06:27   at 
> org.apache.flink.connectors.kinesis.testutils.KinesaliteContainer$ListStreamsWaitStrategy.waitUntilReady(KinesaliteContainer.java:146)
> Jun 21 03:06:27   at 
> org.testcontainers.containers.wait.strategy.AbstractWaitStrategy.waitUntilReady(AbstractWaitStrategy.java:51)
> Jun 21 03:06:27   at 
> org.testcontainers.containers.GenericContainer.waitUntilContainerStarted(GenericContainer.java:926)
> Jun 21 03:06:27   at 
> org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:480)
> Jun 21 03:06:27   ... 10 more
> Jun 21 03:06:27 Caused by: java.lang.NoSuchMethodError: 
> 'org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.SdkHttpClient 
> org.apache.flink.connector.aws.testutils.AWSServicesTestUtils.createHttpClient()'
> Jun 21 03:06:27   at 
> org.apache.flink.connectors.kinesis.testutils.KinesaliteContainer$ListStreamsWaitStrategy.list(KinesaliteContainer.java:157)
> Jun 21 03:06:27   at 
> org.rnorth.ducttape.ratelimits.RateLimiter.getWhenReady(RateLimiter.java:51)
> Jun 21 03:06:27   at 
> org.apache.flink.connectors.kinesis.testutils.KinesaliteContainer$ListStreamsWaitStrategy.lambda$retryUntilSuccessRunner$0(KinesaliteContainer.java:153)
> Jun 21 03:06:27   at 
> org.rnorth.ducttape.unreliables.Unreliables.lambda$retryUntilSuccess$0(Unreliables.java:43)
> Jun 21 03:06:27   at 
> java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> Jun 21 03:06:27   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> Jun 21 03:06:27   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> Jun 21 03:06:27   ... 1 more
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36979=logs=6e8542d7-de38-5a33-4aca-458d6c87066d=5846934b-7a4f-545b-e5b0-eb4d8bda32e1=16659



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-26721) PulsarSourceITCase.testSavepoint failed on azure pipeline

2022-06-08 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551481#comment-17551481
 ] 

Ahmed Hamdy commented on FLINK-26721:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36391=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203

> PulsarSourceITCase.testSavepoint failed on azure pipeline
> -
>
> Key: FLINK-26721
> URL: https://issues.apache.org/jira/browse/FLINK-26721
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.16.0
>Reporter: Yun Gao
>Priority: Critical
>  Labels: build-stability
>
> {code:java}
> Mar 18 05:49:52 [ERROR] Tests run: 12, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 315.581 s <<< FAILURE! - in 
> org.apache.flink.connector.pulsar.source.PulsarSourceITCase
> Mar 18 05:49:52 [ERROR] 
> org.apache.flink.connector.pulsar.source.PulsarSourceITCase.testSavepoint(TestEnvironment,
>  DataStreamSourceExternalContext, CheckpointingMode)[1]  Time elapsed: 
> 140.803 s  <<< FAILURE!
> Mar 18 05:49:52 java.lang.AssertionError: 
> Mar 18 05:49:52 
> Mar 18 05:49:52 Expecting
> Mar 18 05:49:52   
> Mar 18 05:49:52 to be completed within 2M.
> Mar 18 05:49:52 
> Mar 18 05:49:52 exception caught while trying to get the future result: 
> java.util.concurrent.TimeoutException
> Mar 18 05:49:52   at 
> java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
> Mar 18 05:49:52   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
> Mar 18 05:49:52   at 
> org.assertj.core.internal.Futures.assertSucceededWithin(Futures.java:109)
> Mar 18 05:49:52   at 
> org.assertj.core.api.AbstractCompletableFutureAssert.internalSucceedsWithin(AbstractCompletableFutureAssert.java:400)
> Mar 18 05:49:52   at 
> org.assertj.core.api.AbstractCompletableFutureAssert.succeedsWithin(AbstractCompletableFutureAssert.java:396)
> Mar 18 05:49:52   at 
> org.apache.flink.connector.testframe.testsuites.SourceTestSuiteBase.checkResultWithSemantic(SourceTestSuiteBase.java:766)
> Mar 18 05:49:52   at 
> org.apache.flink.connector.testframe.testsuites.SourceTestSuiteBase.restartFromSavepoint(SourceTestSuiteBase.java:399)
> Mar 18 05:49:52   at 
> org.apache.flink.connector.testframe.testsuites.SourceTestSuiteBase.testSavepoint(SourceTestSuiteBase.java:241)
> Mar 18 05:49:52   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Mar 18 05:49:52   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Mar 18 05:49:52   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Mar 18 05:49:52   at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 18 05:49:52   at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestTemplateMethod(TimeoutExtension.java:92)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
> Mar 18 05:49:52   at 
> org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
> Mar 18 05:49:52   at 
> 

[jira] [Commented] (FLINK-26177) PulsarSourceITCase.testScaleDown fails with timeout

2022-06-07 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550897#comment-17550897
 ] 

Ahmed Hamdy commented on FLINK-26177:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36335=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203

> PulsarSourceITCase.testScaleDown fails with timeout
> ---
>
> Key: FLINK-26177
> URL: https://issues.apache.org/jira/browse/FLINK-26177
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: auto-deprioritized-critical, pull-request-available, 
> stale-blocker, test-stability
>
> We observed a [build 
> failure|https://dev.azure.com/mapohl/flink/_build/results?buildId=742=logs=f3dc9b18-b77a-55c1-591e-264c46fe44d1=2d3cd81e-1c37-5c31-0ee4-f5d5cdb9324d=26553]
>  caused by {{PulsarSourceITCase.testScaleDown}}:
> {code}
> Feb 15 20:56:02 [ERROR] Tests run: 16, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 431.023 s <<< FAILURE! - in 
> org.apache.flink.connector.pulsar.source.PulsarSourceITCase
> Feb 15 20:56:02 [ERROR] 
> org.apache.flink.connector.pulsar.source.PulsarSourceITCase.testScaleDown(TestEnvironment,
>  DataStreamSourceExternalContext, CheckpointingMode)[2]  Time elapsed: 
> 138.444 s  <<< FAILURE!
> Feb 15 20:56:02 java.lang.AssertionError: 
> Feb 15 20:56:02 
> Feb 15 20:56:02 Expecting
> Feb 15 20:56:02   
> Feb 15 20:56:02 to be completed within 2M.
> Feb 15 20:56:02 
> Feb 15 20:56:02 exception caught while trying to get the future result: 
> java.util.concurrent.TimeoutException
> Feb 15 20:56:02   at 
> java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
> [...]
> {code}
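
Both Pulsar timeouts reported against this test suite come down to the same 
assertion pattern in the connector testing framework; below is a minimal, 
self-contained sketch of that pattern (the future body is a stand-in for the 
real result check in {{SourceTestSuiteBase.checkResultWithSemantic}}, and the 
class name is hypothetical):

{code:java}
import static org.assertj.core.api.Assertions.assertThat;

import java.time.Duration;
import java.util.concurrent.CompletableFuture;

public final class SucceedsWithinSketch {
    public static void main(String[] args) {
        CompletableFuture<Void> resultCheck = CompletableFuture.runAsync(() -> {
            // Stand-in for polling the job output until all expected records arrive.
        });
        // AssertJ raises the "Expecting ... to be completed within 2M" error seen
        // above when the future is still pending after the two-minute timeout.
        assertThat(resultCheck).succeedsWithin(Duration.ofMinutes(2));
    }
}
{code}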



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (FLINK-27756) Fix intermittently failing test in AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds

2022-06-05 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550245#comment-17550245
 ] 

Ahmed Hamdy edited comment on FLINK-27756 at 6/5/22 8:24 PM:
-

h2. Findings
- Test is failing 2/100 times.
- Test has a tight margin: 100 ms delay + 10 ms for execution.
- Extending the margin to 20 ms passes tests with a failure rate of 1/1000 [max 
margin needed is 21 ms].
- Sending 1 complete batch passes tests with a failure rate of 1/1000 [max 
margin needed is 14 ms].
- Sending 1 complete batch with the extended margin passes the test at a rate 
of 100/100.


was (Author: JIRAUSER280246):
h2. Findings
- Test is failing 2/100 times.
- Test has a tight margin: 100 ms delay + 10 ms for execution.
- Extending the margin to 20 ms passes tests with a failure rate of 1/1000 [max 
margin needed is 21 ms].
- Sending 1 batch only passes tests with a failure rate of 1/1000 [max margin 
needed is 14 ms].
- Sending 1 batch only with the extended margin passes the test at a rate of 
100/100.

> Fix intermittently failing test in 
> AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds
> --
>
> Key: FLINK-27756
> URL: https://issues.apache.org/jira/browse/FLINK-27756
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.15.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.16.0
>
>
> h2. Motivation
>  - One of the integration tests ({{checkLoggedSendTimesAreWithinBounds}}) of 
> {{AsyncSinkWriterTest}} has been reported to fail intermittently on build 
> pipeline causing blocking of new changes.
>  - Reporting build is [linked 
> |https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36009=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203]



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (FLINK-27756) Fix intermittently failing test in AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds

2022-06-05 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550245#comment-17550245
 ] 

Ahmed Hamdy edited comment on FLINK-27756 at 6/5/22 8:24 PM:
-

h2. Findings
- Test is failing 2/100 times.
- Test has a tight margin: 100 ms delay + 10 ms for execution.
- Extending the margin to 20 ms passes tests with a failure rate of 1/1000 [max 
margin needed is 21 ms].
- Sending 1 batch only passes tests with a failure rate of 1/1000 [max margin 
needed is 14 ms].
- Sending 1 batch only with the extended margin passes the test at a rate of 
100/100.


was (Author: JIRAUSER280246):
h2. Findings
- Test is failing 2/100 times.
- Test has a tight margin: 100 ms delay + 10 ms for execution.
- Execution includes write executions for sending 2 batches.
- Extending the margin to 20 ms passes tests with a failure rate of 1/1000 [max 
margin needed is 21 ms].
- Sending 1 batch only passes tests with a failure rate of 1/1000 [max margin 
needed is 14 ms].
- Sending 1 batch only with the extended margin passes the test at a rate of 
100/100.

> Fix intermittently failing test in 
> AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds
> --
>
> Key: FLINK-27756
> URL: https://issues.apache.org/jira/browse/FLINK-27756
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.15.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.16.0
>
>
> h2. Motivation
>  - One of the integration tests ({{checkLoggedSendTimesAreWithinBounds}}) of 
> {{AsyncSinkWriterTest}} has been reported to fail intermittently on build 
> pipeline causing blocking of new changes.
>  - Reporting build is [linked 
> |https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36009=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203]



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (FLINK-27756) Fix intermittently failing test in AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds

2022-06-05 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550245#comment-17550245
 ] 

Ahmed Hamdy edited comment on FLINK-27756 at 6/5/22 8:23 PM:
-

h2. Findings
- Test is failing 2/100 times.
- Test has a tight margin: 100 ms delay + 10 ms for execution.
- Execution includes write executions for sending 2 batches.
- Extending the margin to 20 ms passes tests with a failure rate of 1/1000 [max 
margin needed is 21 ms].
- Sending 1 batch only passes tests with a failure rate of 1/1000 [max margin 
needed is 14 ms].
- Sending 1 batch only with the extended margin passes the test at a rate of 
100/100.


was (Author: JIRAUSER280246):
h2. Findings
- Test is failing 2/100 times.
- Test has a tight margin: 100 ms delay + 10 ms for execution.
- Execution includes write executions for sending batches.
- Extending the margin to 20 ms passes tests with a failure rate of 1/1000 [max 
margin needed is 21 ms].
- Sending 1 batch only passes tests with a failure rate of 1/1000 [max margin 
needed is 14 ms].
- Sending 1 batch only with the extended margin passes the test at a rate of 
100/100.

> Fix intermittently failing test in 
> AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds
> --
>
> Key: FLINK-27756
> URL: https://issues.apache.org/jira/browse/FLINK-27756
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.15.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.16.0
>
>
> h2. Motivation
>  - One of the integration tests ({{checkLoggedSendTimesAreWithinBounds}}) of 
> {{AsyncSinkWriterTest}} has been reported to fail intermittently on build 
> pipeline causing blocking of new changes.
>  - Reporting build is [linked 
> |https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36009=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203]



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27756) Fix intermittently failing test in AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds

2022-06-05 Thread Ahmed Hamdy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550245#comment-17550245
 ] 

Ahmed Hamdy commented on FLINK-27756:
-

h2. Findings
- Test is failing 2/100 times.
- Test has a tight margin: 100 ms delay + 10 ms for execution.
- Execution includes write executions for sending batches.
- Extending the margin to 20 ms passes tests with a failure rate of 1/1000 [max 
margin needed is 21 ms].
- Sending 1 batch only passes tests with a failure rate of 1/1000 [max margin 
needed is 14 ms].
- Sending 1 batch only with the extended margin passes the test at a rate of 
100/100.
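
A minimal sketch of the timing check behind these numbers, with hypothetical 
names and the constants taken from the findings above; this is not the actual 
test code from {{AsyncSinkWriterTest}}:

{code:java}
public final class SendTimeMarginSketch {
    public static void main(String[] args) throws InterruptedException {
        final long expectedDelayMs = 100; // the configured batch send delay
        final long marginMs = 20;         // extended from the original 10 ms

        long start = System.currentTimeMillis();
        Thread.sleep(expectedDelayMs);    // stand-in for flushing one complete batch
        long elapsed = System.currentTimeMillis() - start;

        // The flaky bound: with only a 10 ms margin, the observed worst case of
        // 21 ms of execution overhead overshoots it; a 20 ms margin plus sending
        // a single batch keeps the assertion green.
        if (elapsed < expectedDelayMs || elapsed > expectedDelayMs + marginMs) {
            throw new AssertionError("send time " + elapsed + " ms out of bounds");
        }
    }
}
{code}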

> Fix intermittently failing test in 
> AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds
> --
>
> Key: FLINK-27756
> URL: https://issues.apache.org/jira/browse/FLINK-27756
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.15.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.16.0
>
>
> h2. Motivation
>  - One of the integration tests ({{checkLoggedSendTimesAreWithinBounds}}) of 
> {{AsyncSinkWriterTest}} has been reported to fail intermittently on build 
> pipeline causing blocking of new changes.
>  - Reporting build is [linked 
> |https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36009=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203]



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27756) Fix intermittently failing test in `AsyncSinkWriterTest`

2022-05-24 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-27756:

Affects Version/s: (was: 1.15.0)

> Fix intermittently failing test in `AsyncSinkWriterTest`
> 
>
> Key: FLINK-27756
> URL: https://issues.apache.org/jira/browse/FLINK-27756
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kinesis
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> h2. Motivation
>  - One of the integration tests of `AsyncSinkWriterTest` has been reported to 
> fail intermittently on build pipeline causing blocking of new changes.
>  - Reporting build is [linked 
> |https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36009=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203]



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27756) Fix intermittently failing test in `AsyncSinkWriterTest`

2022-05-24 Thread Ahmed Hamdy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hamdy updated FLINK-27756:

Component/s: Connectors / Common
 (was: Connectors / Kinesis)

> Fix intermittently failing test in `AsyncSinkWriterTest`
> 
>
> Key: FLINK-27756
> URL: https://issues.apache.org/jira/browse/FLINK-27756
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available
>
> h2. Motivation
>  - One of the integration tests of `AsyncSinkWriterTest` has been reported to 
> fail intermittently on build pipeline causing blocking of new changes.
>  - Reporting build is [linked 
> |https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36009=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203]



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

