Re: [DISCUSS] FLIP-438: Amazon SQS Sink Connector

2024-04-15 Thread Muhammet Orazov

Hey Priya,

Having local dockerized integration tests would help devs run the
integration tests when they don't have AWS service credentials.

You could have a look at the DynamoDB [1] or Kinesis [2] connector
integration tests; DynamoDbContainer [3] builds on Testcontainers.


Best,
Muhammet

[1]: 
https://github.com/apache/flink-connector-aws/blob/main/flink-connector-aws/flink-connector-dynamodb/src/test/java/org/apache/flink/connector/dynamodb/sink/DynamoDbSinkITCase.java#L74-L78
[2]: 
https://github.com/apache/flink-connector-aws/blob/main/flink-connector-aws/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/FlinkKinesisITCase.java#L87-L89
[3]: 
https://github.com/apache/flink-connector-aws/blob/main/flink-connector-aws/flink-connector-dynamodb/src/test/java/org/apache/flink/connector/dynamodb/testutils/DynamoDbContainer.java




Re: [DISCUSS] FLIP-438: Amazon SQS Sink Connector

2024-04-15 Thread Dhingra, Priya
Thanks Danny/ Muhammet for the feedback. Updated the FLIP accordingly.

I am not familiar with Docker-based integration testing, but if that is what
we have used for other AWS services, I can follow the same here. If you can
share a link to any existing sample code, that would be very helpful.

Thanks
Priya

On 4/15/24, 5:22 AM, "Danny Cranmer" mailto:dannycran...@apache.org>> wrote:


CAUTION: This email originated from outside of the organization. Do not click 
links or open attachments unless you can confirm the sender and know the 
content is safe.






Thanks for the FLIP Priya. I have seen many custom implementations of SQS
sinks, so having an Apache supported connector will be great.


I agree on the points about implementation detail in the FLIP. There is
still too much code really, we will review that in the PR. The FLIP is
mainly to agree we should add this connector, we can leave the
implementation details for later.


nit: Wrt to versioning, it is actually Flink 1.18 and 1.19 now, since 1.17
is out of support. But since we are using the flink-connector-aws repo we
will use the supported version there.


> Would it make sense to use here docker based integration testing using
Testcontainer


+1, we typically use localstack for the Integration tests and hit AWS in
the e2e test packages. Having coverage of both will be great.


Thanks,
Danny


On Mon, Apr 15, 2024 at 3:23 AM Muhammet Orazov
mailto:mor+fl...@morazow.com.inva>lid> wrote:


> Hey Priya,
>
> Thanks for the FLIP and driving it!
>
> One question from my side on test plan:
>
> > End to end integration tests that hit the real Amazon SQS service.
> > These tests will be enabled when credentials are defined.
>
> Would it make sense to use here docker based integration testing using
> Testcontainer[1], for example, with Localstack[2]? Seems like the SQS
> service is provided on the free tier.
>
> [1]: https://java.testcontainers.org/modules/localstack/ 
> 
> [2]: https://docs.localstack.cloud/user-guide/aws/sqs/ 
> 
>
> On 2024-04-05 18:16, Dhingra, Priya wrote:
> > Hi Dev,
> >
> > I would like to start a discussion about FLIP-438: Amazon SQS Sink
> > Connector<
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-438%3A+Amazon+SQS+Sink+Connector>
>  
> .
>
> > This FLIP is proposing to add support for AWS SQS sink in
> > flink-connector-aws repo.
> >
> > For more details, see FLIP-438. Looking forward to your feedback.
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-438%3A+Amazon+SQS+Sink+Connector
>  
> 
> >
> > Thanks,
> > Priya
>





Re: [DISCUSS] FLIP-438: Amazon SQS Sink Connector

2024-04-15 Thread Danny Cranmer
Thanks for the FLIP Priya. I have seen many custom implementations of SQS
sinks, so having an Apache supported connector will be great.

I agree on the points about implementation detail in the FLIP. There is
still too much code, really; we will review that in the PR. The FLIP is
mainly to agree that we should add this connector; we can leave the
implementation details for later.

nit: Regarding versioning, it is actually Flink 1.18 and 1.19 now, since 1.17
is out of support. But since we are using the flink-connector-aws repo, we
will use the supported versions there.

> Would it make sense to use here docker based integration testing using
> Testcontainer

+1, we typically use localstack for the Integration tests and hit AWS in
the e2e test packages. Having coverage of both will be great.

Thanks,
Danny



Re: [DISCUSS] FLIP-438: Amazon SQS Sink Connector

2024-04-14 Thread Muhammet Orazov

Hey Priya,

Thanks for the FLIP and driving it!

One question from my side on test plan:

> End to end integration tests that hit the real Amazon SQS service.
> These tests will be enabled when credentials are defined.

Would it make sense to use Docker-based integration testing here with
Testcontainers [1], for example with Localstack [2]? It seems the SQS
service is provided on the free tier.


[1]: https://java.testcontainers.org/modules/localstack/
[2]: https://docs.localstack.cloud/user-guide/aws/sqs/
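A minimal sketch of such a LocalStack-backed test setup, assuming the
Testcontainers LocalStack module, the AWS SDK v2, and a local Docker daemon;
the image tag and queue name are illustrative, not from the FLIP:

```java
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.containers.localstack.LocalStackContainer.Service;
import org.testcontainers.utility.DockerImageName;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sqs.SqsClient;

public class SqsLocalstackSketch {
    public static void main(String[] args) {
        try (LocalStackContainer localstack =
                new LocalStackContainer(DockerImageName.parse("localstack/localstack:3.2"))
                        .withServices(Service.SQS)) {
            localstack.start();
            // Point the AWS SDK client at the container instead of real AWS,
            // so the test needs no AWS credentials.
            SqsClient sqs = SqsClient.builder()
                    .endpointOverride(localstack.getEndpointOverride(Service.SQS))
                    .credentialsProvider(StaticCredentialsProvider.create(
                            AwsBasicCredentials.create(
                                    localstack.getAccessKey(), localstack.getSecretKey())))
                    .region(Region.of(localstack.getRegion()))
                    .build();
            String queueUrl = sqs.createQueue(b -> b.queueName("test-queue")).queueUrl();
            System.out.println("Created queue: " + queueUrl);
        }
    }
}
```

The sink under test would then be configured with the container's endpoint
and queue URL, mirroring how the DynamoDB connector tests use their
Testcontainers setup.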

On 2024-04-05 18:16, Dhingra, Priya wrote:

Hi Dev,

I would like to start a discussion about FLIP-438: Amazon SQS Sink
Connector. This FLIP proposes to add support for an AWS SQS sink in the
flink-connector-aws repo.


For more details, see FLIP-438. Looking forward to your feedback.
https://cwiki.apache.org/confluence/display/FLINK/FLIP-438%3A+Amazon+SQS+Sink+Connector

Thanks,
Priya


Re: [DISCUSS] FLIP-438: Amazon SQS Sink Connector

2024-04-12 Thread Dhingra, Priya
Thanks Samrat, that sounds great! Updated Table API support to "Yes" in the FLIP.


Re: [DISCUSS] FLIP-438: Amazon SQS Sink Connector

2024-04-12 Thread Aleksandr Pilipenko
Hi Priya,

Thank you for the FLIP, SQS connector will be a great addition for AWS
connectors.
I agree with Ahmed, it would be better to leave implementation details of
the sink out of the FLIP and focus on how users of this connector will be
interacting with it, mainly available configuration options.
I am also in favour of adding Table API support; however, this can be done
as a follow-up.

Thanks,
Aleksandr


Re: [DISCUSS] FLIP-438: Amazon SQS Sink Connector

2024-04-11 Thread Samrat Deb
Hi Priya,

I can help you with adding Table API support and a patch for it. Let me
know if that works for you, so we can add it as part of this FLIP.


Thank you

Bests,
Samrat


Re: [DISCUSS] FLIP-438: Amazon SQS Sink Connector

2024-04-11 Thread Dhingra, Priya
Hi Ahmed,

Thanks for reviewing it. I have updated the FLIP again.

On your question about adding Table API support right now: the only reason I
don't want to commit to it is that I am really not sure about the effort it
involves, or how to implement and test it, since I have never worked with the
Table API.

We built this SQS sink internally at Amazon to support our use case, and it
has already been running and tested in our production environment for a long
time. We intended to contribute at least this portion to the open source
community so that it can be used by others and eventually grow with the
community.


Thanks
Priya


Re: [DISCUSS] FLIP-438: Amazon SQS Sink Connector

2024-04-11 Thread Ahmed Hamdy
Thanks Priya,
The FLIP now looks much better. I would still move some of the implementation
details out: just highlight how the writer actually uses the client (the
submitRequestEntries part) and how the sink itself exposes its API to users
(the Sink & builder part); other parts should not be in the FLIP.
Let's leave something for the coders ;).
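For illustration, a sketch of what such a submitRequestEntries implementation
could look like, assuming the AsyncSinkWriter callback signature from
flink-connector-base around Flink 1.17/1.18 and the AWS SDK v2 async client;
all names and wiring here are assumptions, not the FLIP's actual code:

```java
import java.util.Collections;
import java.util.List;
import java.util.Set;
import java.util.function.Consumer;
import java.util.stream.Collectors;
import software.amazon.awssdk.services.sqs.SqsAsyncClient;
import software.amazon.awssdk.services.sqs.model.BatchResultErrorEntry;
import software.amazon.awssdk.services.sqs.model.SendMessageBatchRequest;
import software.amazon.awssdk.services.sqs.model.SendMessageBatchRequestEntry;

class SqsWriterSketch {
    private final SqsAsyncClient sqsAsyncClient;
    private final String sqsUrl;

    SqsWriterSketch(SqsAsyncClient client, String url) {
        this.sqsAsyncClient = client;
        this.sqsUrl = url;
    }

    // In the real writer this would be an @Override of
    // AsyncSinkWriter#submitRequestEntries.
    protected void submitRequestEntries(
            List<SendMessageBatchRequestEntry> entries,
            Consumer<List<SendMessageBatchRequestEntry>> requestsToRetry) {
        SendMessageBatchRequest request = SendMessageBatchRequest.builder()
                .queueUrl(sqsUrl)
                .entries(entries)
                .build();
        sqsAsyncClient.sendMessageBatch(request).whenComplete((response, error) -> {
            if (error != null) {
                // Whole batch failed: hand everything back for retry,
                // which is what gives at-least-once semantics.
                requestsToRetry.accept(entries);
            } else if (!response.failed().isEmpty()) {
                // Partial failure: retry only the entries SQS rejected.
                Set<String> failedIds = response.failed().stream()
                        .map(BatchResultErrorEntry::id)
                        .collect(Collectors.toSet());
                requestsToRetry.accept(entries.stream()
                        .filter(e -> failedIds.contains(e.id()))
                        .collect(Collectors.toList()));
            } else {
                // Everything was accepted; nothing to retry.
                requestsToRetry.accept(Collections.emptyList());
            }
        });
    }
}
```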

I would still vote to add Table API support; you will have done most of the
work already. Do you believe there is a specific concern not to include it?

I am happy with the FLIP once reformatted. Thanks for the effort.


Best Regards
Ahmed Hamdy



Re: [DISCUSS] FLIP-438: Amazon SQS Sink Connector

2024-04-09 Thread Dhingra, Priya


On 4/9/24, 3:57 PM, "Dhingra, Priya" <dhipr...@amazon.com.INVALID> wrote:



Hi Ahmed and Samrat,


Thanks a lot for all the feedback. This is my first ever contribution to
Apache Flink, so I was a bit unaware of a few things, but I have updated all
of them per your suggestions. Thanks again for all the support here, much
appreciated!


1) I am not sure why we need to suppress warnings in the sink example in the
FLIP?


Removed and updated the FLIP.


2) You provided the sink example as it is the Public Interface, however the
actual AsyncSink logic is mostly in the writer, so would be helpful to
provide a brief of the writer or the "submitRequestEntries"


Added in FLIP


3) I am not sure what customDimensions are or how are they going to be used
by the client, (that's why I thought the writer example should be helpful).


Removed. This is no longer required; we had added it in our code to support a
specific internal use case, so it is not needed for the Apache PR.


4) Are we going to use the existing aws client providers to handle the
authentication and async client creation similar to Kinesis/Firehose and
DDB. I would strongly recommend we do.


Yes


5) Given you suggest implementing this in "flink-connector-aws" repo, it
should probably follow the same versioning of AWS connectors, hence
targeting 4.3.0. Also I am not sure why we are targeting support for 1.16
given that it is out of support and 4.2 supports 1.17+.


Sorry, I was not aware of the versioning we should have here. I tested the
SQS sink with Flink 1.16, so I thought of putting the same, but was not aware
it is out of support. Updated now with 4.3.0 and 1.17+.


6) Are we planning to support Table API & SQL as well? This should not be
much of an effort IMO so I think we should.


No, we are not putting that in scope for the first iteration. We can take
that as a fast follow-up.


7) It would be great if we also added an implementation of ElementConverter,
given that SQS message bodies are mainly Strings if I remember correctly. We
can extend it to other types for MessageAttributeValue augmentation; this
should be more valuable on the Table API as well, to use it as the default
ElementConverter.


Updated in FLIP


8) Different connectors provide different types of fault-tolerance
guarantees [1]. What type of fault-tolerance guarantees will
flink-connector-sqs provide?

At-least-once.




9) Can you help with what the minimal configuration required for
instantiating the sink ?


SQSSink.builder()
.setSqsUrl(sqsUrl)
.setSqsClientProperties(getSQSClientProperties())
.setSerializationSchema(serializationSchema)
.build();




10) Amazon SQS offers various data types [2]. Could you outline the types of 
SQS data the sink plans to support?

SendMessageBatchRequestEntry














Hi Priya,




Thank you for the FLIP. sqs connector would be a great addition to the
flink connector aws.




+1 for all the queries raised by Ahmed.




Adding to Ahmed's queries, I have a few more:




1. Different connectors provide different types of fault
tolerant guarantees[1]. What type of fault tolerant sink guarantees
flink-connector-sqs will provide ?
Could you elaborate on the fault-tolerant capabilities that the
flink-connector-sqs will provide?




2. Can you help with what the minimal configuration required for
instantiating the sink ?




3. Amazon SQS offers various data types [2]. Could you outline the types of
SQS data the sink plans to support?




[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/guarantees/
 

 

 

[2]
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_Types.html
 

 

 





Bests,
Samrat




On Sat, Apr 6, 2024 at 6:27 PM Ahmed Hamdy mailto:hamdy10...@gmail.com> >> wrote:




> Hi Dhingra, thanks for raising the FLIP
> I am in favour of this addition in general. I have a couple of
> comments/questions on the FLIP.
> 
> - I am not sure why we need to suppress warnings in the sink example in the
> FLIP?
> - You provided the sink example as it is the Public Interface, however the
> actual AsyncSink logic is mostly in the writer, so 

Re: [DISCUSS] FLIP-438: Amazon SQS Sink Connector

2024-04-09 Thread Dhingra, Priya
 Hi Ahmed and Samrat, 

Thanks a lot for all the feedback. This is my first ever contribution to 
Apache Flink, so I was a bit unaware of a few things, but I have updated all of 
them as per your suggestions. Thanks again for all the support here, much 
appreciated!!

1)  I am not sure why we need to suppress warnings in the sink example in the
FLIP?
> Removed and updated the FLIP.
 
2)  You provided the sink example as it is the Public Interface, however the
actual AsyncSink logic is mostly in the writer, so would be helpful to
provide a brief of the writer or the "submitRequestEntries"
> Added in FLIP
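
Ahmed's point about "submitRequestEntries" is where the SQS-specific batching
constraints live: SendMessageBatch accepts at most 10 entries and 256 KiB of
total payload per call, so the writer has to partition its buffered entries
before sending. A minimal, dependency-free sketch of that partitioning logic
(the Entry record is a stand-in for the SDK's SendMessageBatchRequestEntry;
the limits are from the SQS API reference, everything else is illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class SqsBatcher {
    // SQS SendMessageBatch limits (see the SQS API reference).
    static final int MAX_ENTRIES_PER_BATCH = 10;
    static final long MAX_BATCH_BYTES = 256 * 1024;

    // Stand-in for the SDK's SendMessageBatchRequestEntry.
    record Entry(String id, String body) {
        long sizeBytes() {
            return body.getBytes(StandardCharsets.UTF_8).length;
        }
    }

    // Partition buffered entries into SQS-compliant batches.
    static List<List<Entry>> partition(List<Entry> entries) {
        List<List<Entry>> batches = new ArrayList<>();
        List<Entry> current = new ArrayList<>();
        long currentBytes = 0;
        for (Entry e : entries) {
            // Close the current batch if adding this entry would exceed a limit.
            boolean full = current.size() == MAX_ENTRIES_PER_BATCH
                    || currentBytes + e.sizeBytes() > MAX_BATCH_BYTES;
            if (full && !current.isEmpty()) {
                batches.add(current);
                current = new ArrayList<>();
                currentBytes = 0;
            }
            current.add(e);
            currentBytes += e.sizeBytes();
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }
}
```

In the real writer this would sit behind the AsyncSink buffering, with each
batch handed to the async SQS client and failed entries passed back for retry.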
 
3)  I am not sure what customDimensions are or how are they going to be used
by the client, (that's why I thought the writer example should be helpful).
> Removed. This is no longer required; we had added it in our code to support a 
> specific use case, and it is not needed for the Apache PR.
 
4)  Are we going to use the existing aws client providers to handle the
 authentication and async client creation similar to Kinesis/Firehose and
 DDB. I would strongly recommend we do.
> Yes
 
5) Given you suggest implementing this in "flink-connector-aws" repo, it
should probably follow the same versioning of AWS connectors, hence
 targeting 4.3.0. Also I am not sure why we are targeting support for 1.16
given that it is out of support and 4.2 supports 1.17+.
> Sorry, I was not aware of the versioning we should use here. I tested the 
> SQS sink with Flink 1.16, so I put that down, but I was not aware it is out 
> of support. Updated now to 4.3.0 and 1.17+.

6) Are we planning to support Table API & SQL as well? This should not be
 much of an effort IMO so I think we should.
> No, we are not putting that in scope for the first iteration. We can take 
> that as a fast follow-up.

7) It would be great if we also added an implementation of Element converter 
given that SQS message bodies are mainly Strings if I remember correctly. We 
can extend it to other types for MessageAttributeValue augmentation, this should 
be more valuable on table API as well to use it as default Element Converter.
> Updated in FLIP
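
To make point 7 concrete, a default String converter could look roughly like
the following. This is a sketch only: it assumes the sink reuses
flink-connector-base's ElementConverter interface and the AWS SDK v2
SendMessageBatchRequestEntry model, and the id scheme shown is an illustrative
choice, not something decided in the FLIP.

```java
// Sketch: a possible default String element converter for the proposed SQS sink.
// Assumes org.apache.flink.connector.base.sink.writer.ElementConverter and
// AWS SDK v2's SendMessageBatchRequestEntry; the UUID id scheme is illustrative.
public class SqsStringElementConverter
        implements ElementConverter<String, SendMessageBatchRequestEntry> {

    @Override
    public SendMessageBatchRequestEntry apply(String element, SinkWriter.Context context) {
        return SendMessageBatchRequestEntry.builder()
                // Batch-entry ids only need to be unique within one batch;
                // a random UUID is one simple choice.
                .id(UUID.randomUUID().toString())
                .messageBody(element)
                .build();
    }
}
```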
 
8) Different connectors provide different fault-tolerance guarantees [1]. What
type of fault-tolerance guarantee will flink-connector-sqs provide? Could you
elaborate on the fault-tolerance capabilities it will offer?
> At-least-once.
 
 
9) Can you share the minimal configuration required for instantiating the
sink?
> SQSSink.builder()
>     .setSqsUrl(sqsUrl)
>     .setSqsClientProperties(getSQSClientProperties())
>     .setSerializationSchema(serializationSchema)
>     .build();
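
For illustration, that minimal configuration might be wired into a job as
follows. This is a hypothetical sketch, since the SQSSink API is only proposed
in the FLIP and may change during review; the builder method names are taken
from the snippet above, and SimpleStringSchema is an assumed serialization
schema choice.

```java
// Hypothetical usage of the proposed sink, mirroring other AWS AsyncSink connectors.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<String> events = env.fromElements("event-1", "event-2");

SQSSink<String> sink =
        SQSSink.<String>builder()
                .setSqsUrl(sqsUrl)                                // target queue URL
                .setSqsClientProperties(getSQSClientProperties()) // region/credentials props
                .setSerializationSchema(new SimpleStringSchema()) // assumed String schema
                .build();

events.sinkTo(sink);
env.execute("sqs-sink-sketch");
```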

 
10) Amazon SQS offers various data types [2]. Could you outline the types of 
SQS data the sink plans to support?
> SendMessageBatchRequestEntry

 
 
 
 
 
 

Re: [DISCUSS] FLIP-438: Amazon SQS Sink Connector

2024-04-06 Thread Samrat Deb
Hi Priya,

Thank you for the FLIP. An SQS connector would be a great addition to
flink-connector-aws.

+1 for all the queries raised by Ahmed.

Adding to Ahmed's queries, I have a few more:

1. Different connectors provide different fault-tolerance guarantees [1]. What
type of fault-tolerance guarantee will flink-connector-sqs provide? Could you
elaborate on the fault-tolerance capabilities planned for it?

2. Can you share the minimal configuration required for instantiating the
sink?

3. Amazon SQS offers various data types [2]. Could you outline the types of
SQS data the sink plans to support?

[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/guarantees/
[2]
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_Types.html

Bests,
Samrat



Re: [DISCUSS] FLIP-438: Amazon SQS Sink Connector

2024-04-06 Thread Ahmed Hamdy
Hi Dhingra, thanks for raising the FLIP.
I am in favour of this addition in general. I have a couple of
comments/questions on the FLIP.

- I am not sure why we need to suppress warnings in the sink example in the
FLIP?
- You provided the sink example as it is the Public Interface, however the
actual AsyncSink logic is mostly in the writer, so would be helpful to
provide a brief of the writer or the "submitRequestEntries"
- I am not sure what customDimensions are or how are they going to be used
by the client, (that's why I thought the writer example should be helpful).
- Are we going to use the existing aws client providers to handle the
authentication and async client creation similar to Kinesis/Firehose and
DDB. I would strongly recommend we do.
- Given you suggest implementing this in "flink-connector-aws" repo, it
should probably follow the same versioning of AWS connectors, hence
targeting 4.3.0. Also I am not sure why we are targeting support for 1.16
given that it is out of support and 4.2 supports 1.17+.
- Are we planning to support Table API & SQL as well? This should not be
much of an effort IMO so I think we should.
- It would be great if we also added an implementation of Element converter
given that SQS message bodies are mainly Strings if I remember correctly.
We can extend it to other types for MessageAttributeValue augmentation,
this should be more valuable on table API as well to use it as default
Element Converter.

I would love to assist with the implementation and reviews if this FLIP was
accepted.
Best Regards
Ahmed Hamdy




[DISCUSS] FLIP-438: Amazon SQS Sink Connector

2024-04-05 Thread Dhingra, Priya
Hi Dev,

I would like to start a discussion about FLIP-438: Amazon SQS Sink Connector.
This FLIP proposes to add support for an AWS SQS sink in the
flink-connector-aws repo.

For more details, see FLIP-438. Looking forward to your feedback.
https://cwiki.apache.org/confluence/display/FLINK/FLIP-438%3A+Amazon+SQS+Sink+Connector

Thanks,
Priya