[jira] [Commented] (FLINK-27942) [JUnit5 Migration] Module: flink-connector-rabbitmq

2022-06-08 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551610#comment-17551610
 ] 

Alexander Preuss commented on FLINK-27942:
--

Hi [~Sergey Nuyanzin], feel free to assign yourself to any of these :)
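
For anyone picking these up: a JUnit 5 migration of this kind usually boils down to swapping JUnit 4 annotations and assertions for their Jupiter and AssertJ counterparts. A hypothetical sketch of a typical change (the class, method, and field names are illustrative, not taken from the module):

```diff
-import org.junit.Test;
-import static org.junit.Assert.assertEquals;
+import org.junit.jupiter.api.Test;
+import static org.assertj.core.api.Assertions.assertThat;

-public class RMQSinkPublishTest {
+class RMQSinkPublishTest {

     @Test
-    public void testPublishedMessageCount() {
-        assertEquals(3, sink.getPublishedMessageCount());
+    void testPublishedMessageCount() {
+        assertThat(sink.getPublishedMessageCount()).isEqualTo(3);
     }
 }
```

Test classes and methods also drop the `public` modifier, since Jupiter discovers them via reflection regardless of visibility.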

> [JUnit5 Migration] Module: flink-connector-rabbitmq
> ---
>
> Key: FLINK-27942
> URL: https://issues.apache.org/jira/browse/FLINK-27942
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / RabbitMQ
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (FLINK-27942) [JUnit5 Migration] Module: flink-connector-rabbitmq

2022-06-08 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss reassigned FLINK-27942:


Assignee: Sergey Nuyanzin

> [JUnit5 Migration] Module: flink-connector-rabbitmq
> ---
>
> Key: FLINK-27942
> URL: https://issues.apache.org/jira/browse/FLINK-27942
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / RabbitMQ
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Assignee: Sergey Nuyanzin
>Priority: Minor
>






[jira] [Created] (FLINK-27941) [JUnit5 Migration] Module: flink-connector-kinesis

2022-06-07 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27941:


 Summary: [JUnit5 Migration] Module: flink-connector-kinesis
 Key: FLINK-27941
 URL: https://issues.apache.org/jira/browse/FLINK-27941
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Kinesis
Affects Versions: 1.16.0
Reporter: Alexander Preuss








[jira] [Created] (FLINK-27942) [JUnit5 Migration] Module: flink-connector-rabbitmq

2022-06-07 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27942:


 Summary: [JUnit5 Migration] Module: flink-connector-rabbitmq
 Key: FLINK-27942
 URL: https://issues.apache.org/jira/browse/FLINK-27942
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / RabbitMQ
Affects Versions: 1.16.0
Reporter: Alexander Preuss








[jira] [Created] (FLINK-27940) [JUnit5 Migration] Module: flink-connector-jdbc

2022-06-07 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27940:


 Summary: [JUnit5 Migration] Module: flink-connector-jdbc
 Key: FLINK-27940
 URL: https://issues.apache.org/jira/browse/FLINK-27940
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / JDBC
Affects Versions: 1.16.0
Reporter: Alexander Preuss








[jira] [Created] (FLINK-27939) [JUnit5 Migration] Module: flink-connector-hive

2022-06-07 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27939:


 Summary: [JUnit5 Migration] Module: flink-connector-hive
 Key: FLINK-27939
 URL: https://issues.apache.org/jira/browse/FLINK-27939
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Hive
Affects Versions: 1.16.0
Reporter: Alexander Preuss








[jira] [Created] (FLINK-27938) [JUnit5 Migration] Module: flink-connector-hbase

2022-06-07 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27938:


 Summary: [JUnit5 Migration] Module: flink-connector-hbase
 Key: FLINK-27938
 URL: https://issues.apache.org/jira/browse/FLINK-27938
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / HBase
Affects Versions: 1.16.0
Reporter: Alexander Preuss








[jira] [Created] (FLINK-27937) [JUnit5 Migration] Module: flink-connector-gcp-pubsub

2022-06-07 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27937:


 Summary: [JUnit5 Migration] Module: flink-connector-gcp-pubsub
 Key: FLINK-27937
 URL: https://issues.apache.org/jira/browse/FLINK-27937
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Google Cloud PubSub
Affects Versions: 1.16.0
Reporter: Alexander Preuss








[jira] [Created] (FLINK-27936) [JUnit5 Migration] Module: flink-connector-cassandra

2022-06-07 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27936:


 Summary: [JUnit5 Migration] Module: flink-connector-cassandra
 Key: FLINK-27936
 URL: https://issues.apache.org/jira/browse/FLINK-27936
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Cassandra
Affects Versions: 1.16.0
Reporter: Alexander Preuss








[jira] [Created] (FLINK-27880) CI runs keep getting stuck

2022-06-02 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27880:


 Summary: CI runs keep getting stuck
 Key: FLINK-27880
 URL: https://issues.apache.org/jira/browse/FLINK-27880
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.16.0
Reporter: Alexander Preuss


I am observing multiple failures where the CI seems to get stuck and fails 
because it runs into the timeout.

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36259=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba=8217]

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36209=logs=a57e0635-3fad-5b08-57c7-a4142d7d6fa9=2ef0effc-1da1-50e5-c2bd-aab434b1c5b7=8153]

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36189=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=24c3384f-1bcb-57b3-224f-51bf973bbee8=9486]

 





[jira] [Commented] (FLINK-27786) Connector-hive should not depend on `flink-table-planner`

2022-06-01 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544886#comment-17544886
 ] 

Alexander Preuss commented on FLINK-27786:
--

Following is a list of classes that prevent removing the `flink-table-planner` 
test dependency in favor of `flink-table-test-utils` right now:
 * org.apache.flink.table.planner.runtime.utils.BatchAbstractTestBase
 * org.apache.flink.table.planner.utils.TableTestUtil
 * org.apache.flink.table.planner.factories.utils.TestCollectionTableFactory
 * org.apache.flink.table.planner.runtime.utils.BatchTestBase
 * org.apache.flink.table.planner.runtime.utils.TestingRetractSink
 * org.apache.flink.table.planner.utils.TableTestBase
 * org.apache.flink.table.planner.factories.TestValuesTableFactory
 * org.apache.flink.table.planner.runtime.stream.sql.CompactionITCaseBase

> Connector-hive should not depend on `flink-table-planner`
> -
>
> Key: FLINK-27786
> URL: https://issues.apache.org/jira/browse/FLINK-27786
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: Alexander Preuss
>Priority: Minor
>






[jira] [Assigned] (FLINK-27779) Connectors should not depend on `flink-table-planner`

2022-06-01 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss reassigned FLINK-27779:


Assignee: (was: Alexander Preuss)

> Connectors should not depend on `flink-table-planner`
> -
>
> Key: FLINK-27779
> URL: https://issues.apache.org/jira/browse/FLINK-27779
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Common
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Priority: Minor
>  Labels: Connector, pull-request-available
>
> Connector modules currently rely heavily on `flink-table-planner` as a test 
> dependency for testing the ITCases with 'DynamicTableX' using the 
> TableFactory to load the respective connector. 
> There is now a better way that only requires having `flink-table-test-utils` 
> as a test dependency. Therefore all connectors should be migrated to the 
> new way.





[jira] [Commented] (FLINK-27784) Connector-jdbc should not depend on `flink-table-planner`

2022-06-01 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544884#comment-17544884
 ] 

Alexander Preuss commented on FLINK-27784:
--

Following is a list of classes that prevent removing the `flink-table-planner` 
test dependency in favor of `flink-table-test-utils` right now:


 * org.apache.flink.table.planner.factories.TestValuesTableFactory
 * org.apache.flink.table.planner.runtime.utils.TestData
 * org.apache.flink.table.planner.utils.StreamTableTestUtil
 * org.apache.flink.table.planner.utils.TableTestBase
 * org.apache.flink.table.planner.runtime.utils.StreamTestSink

> Connector-jdbc should not depend on `flink-table-planner`
> -
>
> Key: FLINK-27784
> URL: https://issues.apache.org/jira/browse/FLINK-27784
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Alexander Preuss
>Priority: Minor
>






[jira] [Commented] (FLINK-27785) Connector-hbase should not depend on `flink-table-planner`

2022-06-01 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544885#comment-17544885
 ] 

Alexander Preuss commented on FLINK-27785:
--

Following is a list of classes that prevent removing the `flink-table-planner` 
test dependency in favor of `flink-table-test-utils` right now:


 * org.apache.flink.table.planner.utils.StreamTableTestUtil
 * org.apache.flink.table.planner.utils.TableTestBase

 

> Connector-hbase should not depend on `flink-table-planner`
> --
>
> Key: FLINK-27785
> URL: https://issues.apache.org/jira/browse/FLINK-27785
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / HBase
>Reporter: Alexander Preuss
>Priority: Minor
>






[jira] [Assigned] (FLINK-27142) Remove bash tests dependencies on the Elasticsearch connector.

2022-06-01 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss reassigned FLINK-27142:


Assignee: Alexander Preuss

> Remove bash tests dependencies on the Elasticsearch connector.
> --
>
> Key: FLINK-27142
> URL: https://issues.apache.org/jira/browse/FLINK-27142
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Alexander Fedulov
>Assignee: Alexander Preuss
>Priority: Major
>
> The Elasticsearch connector is used in test_quickstart.sh and test_sql_client.sh. 
> It is desirable to prevent such a cyclic dependency and remove the connector 
> usage from within Flink.





[jira] [Created] (FLINK-27867) SplitAggregateITCase fails on CI

2022-06-01 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27867:


 Summary: SplitAggregateITCase fails on CI
 Key: FLINK-27867
 URL: https://issues.apache.org/jira/browse/FLINK-27867
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.16.0
Reporter: Alexander Preuss


{code:java}
May 30 14:00:41 [ERROR] Failures: 
May 30 14:00:41 [ERROR] 
SplitAggregateITCase.testAggFilterClauseBothWithAvgAndCount
May 30 14:00:41 [INFO]   Run 1: PASS
May 30 14:00:41 [INFO]   Run 2: PASS
May 30 14:00:41 [INFO]   Run 3: PASS
May 30 14:00:41 [ERROR]   Run 4: Multiple Failures (2 failures)
May 30 14:00:41 org.apache.flink.runtime.client.JobExecutionException: 
Job execution failed.
May 30 14:00:41 java.lang.AssertionError: 
May 30 14:00:41 [INFO] 
May 30 14:00:41 [ERROR] SplitAggregateITCase.testAggWithFilterClause
May 30 14:00:41 [INFO]   Run 1: PASS
May 30 14:00:41 [INFO]   Run 2: PASS
May 30 14:00:41 [ERROR]   Run 3: Multiple Failures (2 failures)
May 30 14:00:41 org.apache.flink.runtime.client.JobExecutionException: 
Job execution failed.
May 30 14:00:41 java.lang.AssertionError: 
May 30 14:00:41 [ERROR]   Run 4: Multiple Failures (2 failures)
May 30 14:00:41 org.apache.flink.runtime.client.JobExecutionException: 
Job execution failed.
May 30 14:00:41 java.lang.AssertionError: 
May 30 14:00:41 [INFO] 
May 30 14:00:41 [ERROR] SplitAggregateITCase.testCountDistinct
May 30 14:00:41 [INFO]   Run 1: PASS
May 30 14:00:41 [INFO]   Run 2: PASS
May 30 14:00:41 [ERROR]   Run 3: Multiple Failures (2 failures)
May 30 14:00:41 org.apache.flink.runtime.client.JobExecutionException: 
Job execution failed.
May 30 14:00:41 java.lang.AssertionError: 
May 30 14:00:41 [ERROR]   Run 4: Multiple Failures (2 failures)
May 30 14:00:41 org.apache.flink.runtime.client.JobExecutionException: 
Job execution failed.
May 30 14:00:41 java.lang.AssertionError: 
May 30 14:00:41 [INFO] {code}

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36178=logs=086353db-23b2-5446-2315-18e660618ef2=6cd785f3-2a2e-58a8-8e69-b4a03be28843=15593





[jira] [Updated] (FLINK-27867) SplitAggregateITCase fails on CI

2022-06-01 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss updated FLINK-27867:
-
Labels: test-stability  (was: )

> SplitAggregateITCase fails on CI
> 
>
> Key: FLINK-27867
> URL: https://issues.apache.org/jira/browse/FLINK-27867
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> May 30 14:00:41 [ERROR] Failures: 
> May 30 14:00:41 [ERROR] 
> SplitAggregateITCase.testAggFilterClauseBothWithAvgAndCount
> May 30 14:00:41 [INFO]   Run 1: PASS
> May 30 14:00:41 [INFO]   Run 2: PASS
> May 30 14:00:41 [INFO]   Run 3: PASS
> May 30 14:00:41 [ERROR]   Run 4: Multiple Failures (2 failures)
> May 30 14:00:41   org.apache.flink.runtime.client.JobExecutionException: 
> Job execution failed.
> May 30 14:00:41   java.lang.AssertionError: 
> May 30 14:00:41 [INFO] 
> May 30 14:00:41 [ERROR] SplitAggregateITCase.testAggWithFilterClause
> May 30 14:00:41 [INFO]   Run 1: PASS
> May 30 14:00:41 [INFO]   Run 2: PASS
> May 30 14:00:41 [ERROR]   Run 3: Multiple Failures (2 failures)
> May 30 14:00:41   org.apache.flink.runtime.client.JobExecutionException: 
> Job execution failed.
> May 30 14:00:41   java.lang.AssertionError: 
> May 30 14:00:41 [ERROR]   Run 4: Multiple Failures (2 failures)
> May 30 14:00:41   org.apache.flink.runtime.client.JobExecutionException: 
> Job execution failed.
> May 30 14:00:41   java.lang.AssertionError: 
> May 30 14:00:41 [INFO] 
> May 30 14:00:41 [ERROR] SplitAggregateITCase.testCountDistinct
> May 30 14:00:41 [INFO]   Run 1: PASS
> May 30 14:00:41 [INFO]   Run 2: PASS
> May 30 14:00:41 [ERROR]   Run 3: Multiple Failures (2 failures)
> May 30 14:00:41   org.apache.flink.runtime.client.JobExecutionException: 
> Job execution failed.
> May 30 14:00:41   java.lang.AssertionError: 
> May 30 14:00:41 [ERROR]   Run 4: Multiple Failures (2 failures)
> May 30 14:00:41   org.apache.flink.runtime.client.JobExecutionException: 
> Job execution failed.
> May 30 14:00:41   java.lang.AssertionError: 
> May 30 14:00:41 [INFO] {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36178=logs=086353db-23b2-5446-2315-18e660618ef2=6cd785f3-2a2e-58a8-8e69-b4a03be28843=15593





[jira] [Updated] (FLINK-27867) SplitAggregateITCase fails on CI

2022-06-01 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss updated FLINK-27867:
-
Component/s: Table SQL / Planner

> SplitAggregateITCase fails on CI
> 
>
> Key: FLINK-27867
> URL: https://issues.apache.org/jira/browse/FLINK-27867
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> May 30 14:00:41 [ERROR] Failures: 
> May 30 14:00:41 [ERROR] 
> SplitAggregateITCase.testAggFilterClauseBothWithAvgAndCount
> May 30 14:00:41 [INFO]   Run 1: PASS
> May 30 14:00:41 [INFO]   Run 2: PASS
> May 30 14:00:41 [INFO]   Run 3: PASS
> May 30 14:00:41 [ERROR]   Run 4: Multiple Failures (2 failures)
> May 30 14:00:41   org.apache.flink.runtime.client.JobExecutionException: 
> Job execution failed.
> May 30 14:00:41   java.lang.AssertionError: 
> May 30 14:00:41 [INFO] 
> May 30 14:00:41 [ERROR] SplitAggregateITCase.testAggWithFilterClause
> May 30 14:00:41 [INFO]   Run 1: PASS
> May 30 14:00:41 [INFO]   Run 2: PASS
> May 30 14:00:41 [ERROR]   Run 3: Multiple Failures (2 failures)
> May 30 14:00:41   org.apache.flink.runtime.client.JobExecutionException: 
> Job execution failed.
> May 30 14:00:41   java.lang.AssertionError: 
> May 30 14:00:41 [ERROR]   Run 4: Multiple Failures (2 failures)
> May 30 14:00:41   org.apache.flink.runtime.client.JobExecutionException: 
> Job execution failed.
> May 30 14:00:41   java.lang.AssertionError: 
> May 30 14:00:41 [INFO] 
> May 30 14:00:41 [ERROR] SplitAggregateITCase.testCountDistinct
> May 30 14:00:41 [INFO]   Run 1: PASS
> May 30 14:00:41 [INFO]   Run 2: PASS
> May 30 14:00:41 [ERROR]   Run 3: Multiple Failures (2 failures)
> May 30 14:00:41   org.apache.flink.runtime.client.JobExecutionException: 
> Job execution failed.
> May 30 14:00:41   java.lang.AssertionError: 
> May 30 14:00:41 [ERROR]   Run 4: Multiple Failures (2 failures)
> May 30 14:00:41   org.apache.flink.runtime.client.JobExecutionException: 
> Job execution failed.
> May 30 14:00:41   java.lang.AssertionError: 
> May 30 14:00:41 [INFO] {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36178=logs=086353db-23b2-5446-2315-18e660618ef2=6cd785f3-2a2e-58a8-8e69-b4a03be28843=15593





[jira] [Assigned] (FLINK-27779) Connectors should not depend on `flink-table-planner`

2022-05-25 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss reassigned FLINK-27779:


Assignee: Alexander Preuss

> Connectors should not depend on `flink-table-planner`
> -
>
> Key: FLINK-27779
> URL: https://issues.apache.org/jira/browse/FLINK-27779
> Project: Flink
>  Issue Type: Technical Debt
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Assignee: Alexander Preuss
>Priority: Minor
>  Labels: Connector
>
> Connector modules currently rely heavily on `flink-table-planner` as a test 
> dependency for testing the ITCases with 'DynamicTableX' using the 
> TableFactory to load the respective connector. 
> There is now a better way that only requires having `flink-table-test-utils` 
> as a test dependency. Therefore all connectors should be migrated to the 
> new way.





[jira] [Created] (FLINK-27786) Connector-hive should not depend on `flink-table-planner`

2022-05-25 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27786:


 Summary: Connector-hive should not depend on `flink-table-planner`
 Key: FLINK-27786
 URL: https://issues.apache.org/jira/browse/FLINK-27786
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Hive
Reporter: Alexander Preuss








[jira] [Updated] (FLINK-27784) Connector-jdbc should not depend on `flink-table-planner`

2022-05-25 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss updated FLINK-27784:
-
Priority: Minor  (was: Major)

> Connector-jdbc should not depend on `flink-table-planner`
> -
>
> Key: FLINK-27784
> URL: https://issues.apache.org/jira/browse/FLINK-27784
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Alexander Preuss
>Priority: Minor
>






[jira] [Updated] (FLINK-27784) Connector-jdbc should not depend on `flink-table-planner`

2022-05-25 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss updated FLINK-27784:
-
Component/s: Connectors / JDBC

> Connector-jdbc should not depend on `flink-table-planner`
> -
>
> Key: FLINK-27784
> URL: https://issues.apache.org/jira/browse/FLINK-27784
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Alexander Preuss
>Priority: Major
>






[jira] [Updated] (FLINK-27781) Connector-kafka should not depend on `flink-table-planner`

2022-05-25 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss updated FLINK-27781:
-
Component/s: Connectors / Kafka

> Connector-kafka should not depend on `flink-table-planner`
> --
>
> Key: FLINK-27781
> URL: https://issues.apache.org/jira/browse/FLINK-27781
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka
>Reporter: Alexander Preuss
>Priority: Minor
>






[jira] [Created] (FLINK-27784) Connector-jdbc should not depend on `flink-table-planner`

2022-05-25 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27784:


 Summary: Connector-jdbc should not depend on `flink-table-planner`
 Key: FLINK-27784
 URL: https://issues.apache.org/jira/browse/FLINK-27784
 Project: Flink
  Issue Type: Sub-task
Reporter: Alexander Preuss








[jira] [Created] (FLINK-27783) Connector-aws-kinesis should not depend on `flink-table-planner`

2022-05-25 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27783:


 Summary: Connector-aws-kinesis should not depend on 
`flink-table-planner`
 Key: FLINK-27783
 URL: https://issues.apache.org/jira/browse/FLINK-27783
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Kinesis
Reporter: Alexander Preuss








[jira] [Created] (FLINK-27785) Connector-hbase should not depend on `flink-table-planner`

2022-05-25 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27785:


 Summary: Connector-hbase should not depend on `flink-table-planner`
 Key: FLINK-27785
 URL: https://issues.apache.org/jira/browse/FLINK-27785
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / HBase
Reporter: Alexander Preuss








[jira] [Created] (FLINK-27782) Connector-kinesis should not depend on `flink-table-planner`

2022-05-25 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27782:


 Summary: Connector-kinesis should not depend on 
`flink-table-planner`
 Key: FLINK-27782
 URL: https://issues.apache.org/jira/browse/FLINK-27782
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Kinesis
Reporter: Alexander Preuss








[jira] [Created] (FLINK-27781) Connector-kafka should not depend on `flink-table-planner`

2022-05-25 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27781:


 Summary: Connector-kafka should not depend on `flink-table-planner`
 Key: FLINK-27781
 URL: https://issues.apache.org/jira/browse/FLINK-27781
 Project: Flink
  Issue Type: Sub-task
Reporter: Alexander Preuss








[jira] [Created] (FLINK-27780) Connector-cassandra should not depend on `flink-table-planner`

2022-05-25 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27780:


 Summary: Connector-cassandra should not depend on 
`flink-table-planner`
 Key: FLINK-27780
 URL: https://issues.apache.org/jira/browse/FLINK-27780
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Cassandra
Reporter: Alexander Preuss








[jira] [Created] (FLINK-27779) Connectors should not depend on `flink-table-planner`

2022-05-25 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27779:


 Summary: Connectors should not depend on `flink-table-planner`
 Key: FLINK-27779
 URL: https://issues.apache.org/jira/browse/FLINK-27779
 Project: Flink
  Issue Type: Technical Debt
Affects Versions: 1.16.0
Reporter: Alexander Preuss


Connector modules currently rely heavily on `flink-table-planner` as a test 
dependency for testing the ITCases with 'DynamicTableX' using the TableFactory 
to load the respective connector. 

There is now a better way that only requires having `flink-table-test-utils` 
as a test dependency. Therefore all connectors should be migrated to the 
new way.
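
For the sub-tasks this roughly amounts to a test-scope dependency swap in each connector's pom.xml, along these lines (a sketch; the exact artifact IDs and version properties are assumptions and should be checked against the actual poms):

```xml
<!-- Before: the full planner, pulled in only for ITCase infrastructure -->
<!--
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-planner_${scala.binary.version}</artifactId>
    <version>${project.version}</version>
    <scope>test</scope>
</dependency>
-->
<!-- After: the lightweight test utilities -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-test-utils</artifactId>
    <version>${project.version}</version>
    <scope>test</scope>
</dependency>
```

A side benefit of the swap is that the test classpath no longer depends on the Scala binary version.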





[jira] [Updated] (FLINK-27725) Create tests for TypeInformation and TypeSerializer classes in table

2022-05-23 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss updated FLINK-27725:
-
Description: 
During the implementation of FLINK-27527 we had to add `flink-table-planner` as 
a test dependency to `flink-tests`.

This created test failures for the reflection tests checking test coverage for 
`TypeInformation` and `TypeSerializer` classes as shown here:

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35851=logs=56de72e1-1902-5ae5-06bd-77ee907eed59=237d25ca-be06-5918-2b0a-41d0694dace8=6738]

To mitigate the issue and unblock FLINK-27527 we extended the whitelist as a 
temporary solution but there should be a better solution in place. Some of the 
classes are deprecated but others are not and should probably have tests 
created to improve the coverage.

  was:
During the implementation of FLINK-27527 we had to add `flink-table-planner` as 
a dependency to `flink-tests`.

This created test failures for the reflection tests checking test coverage for 
`TypeInformation` and `TypeSerializer` classes as shown here:


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35851=logs=56de72e1-1902-5ae5-06bd-77ee907eed59=237d25ca-be06-5918-2b0a-41d0694dace8=6738]



To mitigate the issue and unblock FLINK-27527 we extended the whitelist as a 
temporary solution but there should be a better solution in place. Some of the 
classes are deprecated but others are not and should probably have tests 
created to improve the coverage.


> Create tests for TypeInformation and TypeSerializer classes in table
> 
>
> Key: FLINK-27725
> URL: https://issues.apache.org/jira/browse/FLINK-27725
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Priority: Major
>
> During the implementation of FLINK-27527 we had to add `flink-table-planner` 
> as a test dependency to `flink-tests`.
> This created test failures for the reflection tests checking test coverage 
> for `TypeInformation` and `TypeSerializer` classes as shown here:
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35851=logs=56de72e1-1902-5ae5-06bd-77ee907eed59=237d25ca-be06-5918-2b0a-41d0694dace8=6738]
> To mitigate the issue and unblock FLINK-27527 we extended the whitelist as a 
> temporary solution but there should be a better solution in place. Some of 
> the classes are deprecated but others are not and should probably have tests 
> created to improve the coverage.





[jira] [Comment Edited] (FLINK-27725) Create tests for TypeInformation and TypeSerializer classes in table

2022-05-23 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17540897#comment-17540897
 ] 

Alexander Preuss edited comment on FLINK-27725 at 5/23/22 11:10 AM:


Maybe the initial description was ambiguous. The `flink-table-planner` 
dependency we are talking about always referred to a test dependency; it is not 
a compile dependency for `flink-test` or any other connector right now. For all 
connectors the planner is included as a test dependency because without it 
either the ExecutorFactory is not present or, when using the 
`flink-table-planner-loader`, we run into exceptions as I described above.

I agree that an upsert sink only makes sense in Tableland, but I think the idea 
of having a very simple file-based sink for general testing scenarios is 
valuable for DataStream as well. The alternative option available to developers 
right now is the FileSink which is a lot more complex.


was (Author: alexanderpreuss):
Maybe the initial description was ambiguous. The `flink-table-planner` 
dependency we are talking about always referred to a test dependency, it is not 
a compile dependency for `flink-test` or any other connector right now. For all 
connectors the planner is included as a test dependency because without it 
either the ExecutorFactory is not present or, using the 
`flink-table-planner-loader` we run into exceptions as I described above.

I agree that an upsert sink only makes sense in Tableland, but I think the idea 
of having a very simple file-based sink for general testing scenarios is 
valuable for DataStream as well. The alternative option available to developers 
right now is the FileSink which is a lot more complex.

> Create tests for TypeInformation and TypeSerializer classes in table
> 
>
> Key: FLINK-27725
> URL: https://issues.apache.org/jira/browse/FLINK-27725
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Priority: Major
>
> During the implementation of FLINK-27527 we had to add `flink-table-planner` 
> as a dependency to `flink-tests`.
> This created test failures for the reflection tests checking test coverage 
> for `TypeInformation` and `TypeSerializer` classes as shown here:
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35851=logs=56de72e1-1902-5ae5-06bd-77ee907eed59=237d25ca-be06-5918-2b0a-41d0694dace8=6738]
> To mitigate the issue and unblock FLINK-27527 we extended the whitelist as a 
> temporary solution but there should be a better solution in place. Some of 
> the classes are deprecated but others are not and should probably have tests 
> created to improve the coverage.





[jira] [Commented] (FLINK-27725) Create tests for TypeInformation and TypeSerializer classes in table

2022-05-23 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17540897#comment-17540897
 ] 

Alexander Preuss commented on FLINK-27725:
--

Maybe the initial description was ambiguous. The `flink-table-planner` 
dependency we are talking about always referred to a test dependency; it is not 
a compile dependency for `flink-test` or any other connector right now. For all 
connectors the planner is included as a test dependency because without it 
either the ExecutorFactory is not present or, when using the 
`flink-table-planner-loader`, we run into exceptions as I described above.

I agree that an upsert sink only makes sense in Tableland, but I think the idea 
of having a very simple file-based sink for general testing scenarios is 
valuable for DataStream as well. The alternative option available to developers 
right now is the FileSink which is a lot more complex.

> Create tests for TypeInformation and TypeSerializer classes in table
> 
>
> Key: FLINK-27725
> URL: https://issues.apache.org/jira/browse/FLINK-27725
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Priority: Major
>
> During the implementation of FLINK-27527 we had to add `flink-table-planner` 
> as a dependency to `flink-tests`.
> This created test failures for the reflection tests checking test coverage 
> for `TypeInformation` and `TypeSerializer` classes as shown here:
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35851=logs=56de72e1-1902-5ae5-06bd-77ee907eed59=237d25ca-be06-5918-2b0a-41d0694dace8=6738]
> To mitigate the issue and unblock FLINK-27527 we extended the whitelist as a 
> temporary solution but there should be a better solution in place. Some of 
> the classes are deprecated but others are not and should probably have tests 
> created to improve the coverage.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27725) Create tests for TypeInformation and TypeSerializer classes in table

2022-05-23 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17540871#comment-17540871
 ] 

Alexander Preuss commented on FLINK-27725:
--

I just went through the other connectors and they all depend on 
`flink-table-planner` :/ 

 

Regarding the `flink-table-test-utils` module we could move it there but the 
connector is not table-only so I'm not sure if this is a good idea. WDYT?

> Create tests for TypeInformation and TypeSerializer classes in table
> 
>
> Key: FLINK-27725
> URL: https://issues.apache.org/jira/browse/FLINK-27725
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Priority: Major
>
> During the implementation of FLINK-27527 we had to add `flink-table-planner` 
> as a dependency to `flink-tests`.
> This created test failures for the reflection tests checking test coverage 
> for `TypeInformation` and `TypeSerializer` classes as shown here:
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35851=logs=56de72e1-1902-5ae5-06bd-77ee907eed59=237d25ca-be06-5918-2b0a-41d0694dace8=6738]
> To mitigate the issue and unblock FLINK-27527 we extended the whitelist as a 
> temporary solution but there should be a better solution in place. Some of 
> the classes are deprecated but others are not and should probably have tests 
> created to improve the coverage.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27725) Create tests for TypeInformation and TypeSerializer classes in table

2022-05-23 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17540852#comment-17540852
 ] 

Alexander Preuss commented on FLINK-27725:
--

[~twalthr] I just tried exchanging the planner for `flink-table-planner-loader` 
but then I run into this exception when running the test: 

`Class 
'org.apache.flink.table.shaded.com.jayway.jsonpath.spi.json.JsonProvider' not 
found. Perhaps you forgot to add the module 'flink-table-runtime' to the 
classpath?`

Is there anything else I need to add?
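
(For anyone hitting the same error: the exception message itself points at the 
missing module. A sketch of the test-scoped dependencies that would address it; 
artifact IDs are taken from the error message and docs, and `${flink.version}` 
is a placeholder property assumed to exist in the pom:)

```xml
<!-- test scope only: the loader-based planner plus the runtime module
     the "Perhaps you forgot to add flink-table-runtime" error asks for -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table-planner-loader</artifactId>
  <version>${flink.version}</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table-runtime</artifactId>
  <version>${flink.version}</version>
  <scope>test</scope>
</dependency>
```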

> Create tests for TypeInformation and TypeSerializer classes in table
> 
>
> Key: FLINK-27725
> URL: https://issues.apache.org/jira/browse/FLINK-27725
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Priority: Major
>
> During the implementation of FLINK-27527 we had to add `flink-table-planner` 
> as a dependency to `flink-tests`.
> This created test failures for the reflection tests checking test coverage 
> for `TypeInformation` and `TypeSerializer` classes as shown here:
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35851=logs=56de72e1-1902-5ae5-06bd-77ee907eed59=237d25ca-be06-5918-2b0a-41d0694dace8=6738]
> To mitigate the issue and unblock FLINK-27527 we extended the whitelist as a 
> temporary solution but there should be a better solution in place. Some of 
> the classes are deprecated but others are not and should probably have tests 
> created to improve the coverage.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27725) Create tests for TypeInformation and TypeSerializer classes in table

2022-05-23 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17540824#comment-17540824
 ] 

Alexander Preuss commented on FLINK-27725:
--

[~twalthr] we need the `ExecutorFactory` to be present to run IT cases for 
DynamicTableFactory. As far as we could see the ExecutorFactory is only present 
in flink-table-planner. Until now all connectors have to depend on the planner 
because of this. Is there already an alternative that we are not aware of?

> Create tests for TypeInformation and TypeSerializer classes in table
> 
>
> Key: FLINK-27725
> URL: https://issues.apache.org/jira/browse/FLINK-27725
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Priority: Major
>
> During the implementation of FLINK-27527 we had to add `flink-table-planner` 
> as a dependency to `flink-tests`.
> This created test failures for the reflection tests checking test coverage 
> for `TypeInformation` and `TypeSerializer` classes as shown here:
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35851=logs=56de72e1-1902-5ae5-06bd-77ee907eed59=237d25ca-be06-5918-2b0a-41d0694dace8=6738]
> To mitigate the issue and unblock FLINK-27527 we extended the whitelist as a 
> temporary solution but there should be a better solution in place. Some of 
> the classes are deprecated but others are not and should probably have tests 
> created to improve the coverage.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (FLINK-27527) Create a file-based Upsert sink for testing internal components

2022-05-20 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss reassigned FLINK-27527:


Assignee: Alexander Preuss

> Create a file-based Upsert sink for testing internal components
> ---
>
> Key: FLINK-27527
> URL: https://issues.apache.org/jira/browse/FLINK-27527
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Assignee: Alexander Preuss
>Priority: Major
>  Labels: pull-request-available
>
> There are a bunch of tests that rely on a sink providing upserts in order to 
> ensure the correctness of their tested component. These tests (e.g. 
> test-sql-client.sh) mostly use the ElasticsearchSink, which is a lot of 
> overhead. We want to provide a simple file-based upsert sink for Flink 
> developers to test their components against. The sink should be very simple 
> and is not supposed to be used in production scenarios but rather just to 
> facilitate easier testing.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27725) Create tests for TypeInformation and TypeSerializer classes in table

2022-05-20 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17540127#comment-17540127
 ] 

Alexander Preuss commented on FLINK-27725:
--

CC: [~twalthr] 

> Create tests for TypeInformation and TypeSerializer classes in table
> 
>
> Key: FLINK-27725
> URL: https://issues.apache.org/jira/browse/FLINK-27725
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Priority: Major
>
> During the implementation of FLINK-27527 we had to add `flink-table-planner` 
> as a dependency to `flink-tests`.
> This created test failures for the reflection tests checking test coverage 
> for `TypeInformation` and `TypeSerializer` classes as shown here:
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35851=logs=56de72e1-1902-5ae5-06bd-77ee907eed59=237d25ca-be06-5918-2b0a-41d0694dace8=6738]
> To mitigate the issue and unblock FLINK-27527 we extended the whitelist as a 
> temporary solution but there should be a better solution in place. Some of 
> the classes are deprecated but others are not and should probably have tests 
> created to improve the coverage.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27725) Create tests for TypeInformation and TypeSerializer classes in table

2022-05-20 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27725:


 Summary: Create tests for TypeInformation and TypeSerializer 
classes in table
 Key: FLINK-27725
 URL: https://issues.apache.org/jira/browse/FLINK-27725
 Project: Flink
  Issue Type: Technical Debt
  Components: Table SQL / Runtime
Affects Versions: 1.16.0
Reporter: Alexander Preuss


During the implementation of FLINK-27527 we had to add `flink-table-planner` as 
a dependency to `flink-tests`.

This created test failures for the reflection tests checking test coverage for 
`TypeInformation` and `TypeSerializer` classes as shown here:


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35851=logs=56de72e1-1902-5ae5-06bd-77ee907eed59=237d25ca-be06-5918-2b0a-41d0694dace8=6738]



To mitigate the issue and unblock FLINK-27527 we extended the whitelist as a 
temporary solution but there should be a better solution in place. Some of the 
classes are deprecated but others are not and should probably have tests 
created to improve the coverage.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Closed] (FLINK-27664) cron_snapshot_deployment_maven fails due to JavaDoc building error

2022-05-17 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss closed FLINK-27664.

Resolution: Duplicate

> cron_snapshot_deployment_maven fails due to JavaDoc building error
> --
>
> Key: FLINK-27664
> URL: https://issues.apache.org/jira/browse/FLINK-27664
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.15.0
>Reporter: Martijn Visser
>Assignee: Alexander Preuss
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35684=logs=eca6b3a6-1600-56cc-916a-c549b3cde3ff=e9844b5e-5aa3-546b-6c3e-5395c7c0cac7=14026
> {code:bash}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:jar (attach-javadocs) on 
> project flink-architecture-tests-production: MavenReportException: Error 
> while creating archive:
> [ERROR] Exit code: 1 - javadoc: error - class file for 
> org.junit.platform.commons.annotation.Testable not found
> [ERROR] 
> [ERROR] Command line was: 
> /usr/lib/jvm/java-8-openjdk-amd64/jre/../bin/javadoc -Xdoclint:none @options 
> @packages
> [ERROR] 
> [ERROR] Refer to the generated Javadoc files in 
> '/__w/1/s/flink-architecture-tests/flink-architecture-tests-production/target/apidocs'
>  dir.
> [ERROR] -> [Help 1]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27664) cron_snapshot_deployment_maven fails due to JavaDoc building error

2022-05-17 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17538132#comment-17538132
 ] 

Alexander Preuss commented on FLINK-27664:
--

Indeed, I created a backport to 1.15 with the id of the original PR

> cron_snapshot_deployment_maven fails due to JavaDoc building error
> --
>
> Key: FLINK-27664
> URL: https://issues.apache.org/jira/browse/FLINK-27664
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.15.0
>Reporter: Martijn Visser
>Assignee: Alexander Preuss
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=35684=logs=eca6b3a6-1600-56cc-916a-c549b3cde3ff=e9844b5e-5aa3-546b-6c3e-5395c7c0cac7=14026
> {code:bash}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:jar (attach-javadocs) on 
> project flink-architecture-tests-production: MavenReportException: Error 
> while creating archive:
> [ERROR] Exit code: 1 - javadoc: error - class file for 
> org.junit.platform.commons.annotation.Testable not found
> [ERROR] 
> [ERROR] Command line was: 
> /usr/lib/jvm/java-8-openjdk-amd64/jre/../bin/javadoc -Xdoclint:none @options 
> @packages
> [ERROR] 
> [ERROR] Refer to the generated Javadoc files in 
> '/__w/1/s/flink-architecture-tests/flink-architecture-tests-production/target/apidocs'
>  dir.
> [ERROR] -> [Help 1]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-25777) Generate documentation for Table factories (formats and connectors)

2022-05-16 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17537522#comment-17537522
 ] 

Alexander Preuss commented on FLINK-25777:
--

As a first step we investigated how we could automate the generation of the 
docs from the current {{ConnectorOptions}} classes with Hugo. Our 
experimental setup looked like this:

1) Symlink a specific ConnectorOptions java file into Hugo's {{static}} 
folder

2) Create a shortcode to process the file like this
{code:java}
{{ $optionsFile := os.ReadFile 
"static/connector-options/ElasticsearchConnectorOptions.java" }}
{{ findRE "ConfigOption<(.|\n)*?;" $optionsFile }} {code}
3) Include the shortcode on the documentation page for the connector. This 
produces output like this:


{code:java}
[ConfigOption<List<String>> HOSTS_OPTION = ConfigOptions.key("hosts") 
.stringType() .asList() .noDefaultValue() .withDescription("Elasticsearch hosts 
to connect to."); ConfigOption<String> INDEX_OPTION = 
ConfigOptions.key("index") .stringType() .noDefaultValue() 
.withDescription("Elasticsearch index for every record.");  ] {code}
One major issue with this approach is that using regexes to parse the 
individual options and their building blocks (description, default value) is 
super brittle. Another issue is that for very long values the output would 
have to merge the individual description lines, as they contain the " + " 
signs from line breaks. This also means any referenced variable in the code 
would appear as-is in the docs and would not be resolved to its actual value.

After a short discussion we also thought about linking to the ConnectorOptions' 
JavaDoc instead of creating the table. While this might be a valid approach, 
the current JavaDocs lack the information seen in the code, as can be seen in 
this example 
([https://nightlies.apache.org/flink/flink-docs-master/api/java/org/apache/flink/streaming/connectors/elasticsearch/table/ElasticsearchConnectorOptions.html#BULK_FLASH_MAX_SIZE_OPTION])
 which only shows the return type and says nothing about the default 
value or the provided description. 
Maybe improving the JavaDoc output generated from ConnectorOptions classes 
would be a first step in the right direction. For now we concluded to leave the 
manual creation of the option tables as is.

> Generate documentation for Table factories (formats and connectors)
> ---
>
> Key: FLINK-25777
> URL: https://issues.apache.org/jira/browse/FLINK-25777
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Common, Documentation, Formats (JSON, Avro, 
> Parquet, ORC, SequenceFile)
>Reporter: Francesco Guardiani
>Priority: Critical
>
> The goal of this issue is to generate automatically from code the 
> documentation of configuration options for table connectors and formats.
> This issue includes:
> * Tweak {{ConfigOptionsDocGenerator}} to work with 
> {{Factory#requiredOptions}}, {{Factory#optionalOptions}} and the newly 
> introduced {{DynamicTableFactory#forwardOptions}} and 
> {{FormatFactory#forwardOptions}}. 
> Also see this https://github.com/apache/flink/pull/18290 as reference. From 
> these methods we should extract whether an option is required or not, and 
> whether it's forwardable or not.
> * Decide what the generator output should be, and how to link/include it 
> in the connector/format docs pages.
> * Enable the code generator in CI.
> * Regenerate all the existing docs.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27527) Create a file-based Upsert sink for testing internal components

2022-05-06 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27527:


 Summary: Create a file-based Upsert sink for testing internal 
components
 Key: FLINK-27527
 URL: https://issues.apache.org/jira/browse/FLINK-27527
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / Common
Affects Versions: 1.16.0
Reporter: Alexander Preuss


There are a bunch of tests that rely on a sink providing upserts in order to 
ensure the correctness of their tested component. These tests (e.g. 
test-sql-client.sh) mostly use the ElasticsearchSink, which is a lot of 
overhead. We want to provide a simple file-based upsert sink for Flink 
developers to test their components against. The sink should be very simple and 
is not supposed to be used in production scenarios but rather just to 
facilitate easier testing.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27410) Create ArchUnit rules for Public API dependencies

2022-04-26 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27410:


 Summary: Create ArchUnit rules for Public API dependencies
 Key: FLINK-27410
 URL: https://issues.apache.org/jira/browse/FLINK-27410
 Project: Flink
  Issue Type: Improvement
Affects Versions: 1.15.1
Reporter: Alexander Preuss


During the development of FLINK-27048 we noticed that there are parts of the 
Public API that depend on Flink classes which are public but do not have any 
@PublicEvolving or @Public annotation. An example of this is 
`org.apache.flink.configuration.description.TextElement.text`, which is used in 
most ConnectorOptions. We want to create a new ArchUnit rule to find these 
missing annotations so we can fix them.
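
The rule can be prototyped with plain reflection before being encoded in 
ArchUnit's fluent API. A self-contained sketch with a stand-in marker 
annotation and hypothetical dependency classes; the real rule would target 
Flink's @Public/@PublicEvolving and scan the connector's actual class-level 
dependencies:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.ArrayList;
import java.util.List;

public class PublicApiCheck {
    /** Stand-in for Flink's @PublicEvolving marker annotation. */
    @Retention(RetentionPolicy.RUNTIME)
    @interface PublicEvolving {}

    // Hypothetical dependency targets: one annotated, one that would be flagged.
    @PublicEvolving static class TextElement {}
    static class InternalHelper {}

    /** Returns every class in the given set that lacks the API marker annotation. */
    static List<Class<?>> missingAnnotation(Class<?>... dependencies) {
        List<Class<?>> missing = new ArrayList<>();
        for (Class<?> c : dependencies) {
            if (!c.isAnnotationPresent(PublicEvolving.class)) {
                missing.add(c);
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        // InternalHelper is reported because it carries no marker annotation.
        System.out.println(missingAnnotation(TextElement.class, InternalHelper.class));
    }
}
```

An ArchUnit version of the same idea would express the check as a rule over 
imported classes, so violations fail the build instead of being printed.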



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-27054) Elasticsearch SQL connector SSL issue

2022-04-05 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17517288#comment-17517288
 ] 

Alexander Preuss commented on FLINK-27054:
--

[~martijnvisser] AFAIK the Elastic client does not offer such a mechanism. 
Configuring SSL is pretty clunky (even with the new Java Client) 

> Elasticsearch SQL connector SSL issue
> -
>
> Key: FLINK-27054
> URL: https://issues.apache.org/jira/browse/FLINK-27054
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Reporter: ricardo
>Priority: Major
>
> The current Flink ElasticSearch SQL connector 
> https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/elasticsearch/
>  is missing SSL options, can't connect to ES clusters which require SSL 
> certificate.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-27048) Add ArchUnit tests for Elasticsearch to only depend on public API

2022-04-04 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss reassigned FLINK-27048:


Assignee: Alexander Preuss

> Add ArchUnit tests for Elasticsearch to only depend on public API
> -
>
> Key: FLINK-27048
> URL: https://issues.apache.org/jira/browse/FLINK-27048
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Assignee: Alexander Preuss
>Priority: Major
>
> We want to ensure that the Elasticsearch connectors class level dependencies 
> are part of the public API. To do this we want to extend the existing 
> ArchUnit tests.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-27048) Add ArchUnit tests for Elasticsearch to only depend on public API

2022-04-04 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-27048:


 Summary: Add ArchUnit tests for Elasticsearch to only depend on 
public API
 Key: FLINK-27048
 URL: https://issues.apache.org/jira/browse/FLINK-27048
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / ElasticSearch
Affects Versions: 1.16.0
Reporter: Alexander Preuss


We want to ensure that the Elasticsearch connectors class level dependencies 
are part of the public API. To do this we want to extend the existing ArchUnit 
tests.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-26810) The local time zone does not take effect when the dynamic index uses a field of type timestamp_ltz

2022-03-31 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss reassigned FLINK-26810:


Assignee: Alexander Preuss

> The local time zone does not take effect when the dynamic index uses a field 
> of type timestamp_ltz
> --
>
> Key: FLINK-26810
> URL: https://issues.apache.org/jira/browse/FLINK-26810
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Table SQL / Planner, Table 
> SQL / Runtime
>Reporter: jinfeng
>Assignee: Alexander Preuss
>Priority: Major
> Attachments: 截屏2022-03-23 上午12.48.02.png
>
>
> When using a TIMESTAMP_WITH_LOCAL_TIMEZONE field to generate a dynamic index, 
> it will always use the UTC timezone.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26281) Test Elasticsearch connector End2End

2022-03-22 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17510505#comment-17510505
 ] 

Alexander Preuss commented on FLINK-26281:
--

[~guoyangze] I just checked and saw what you meant, good catch, thanks! I just 
opened PRs for removing it against master and 1.15:

[https://github.com/apache/flink/pull/19202]

[https://github.com/apache/flink/pull/19203]

 

> Test Elasticsearch connector End2End
> 
>
> Key: FLINK-26281
> URL: https://issues.apache.org/jira/browse/FLINK-26281
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: Alexander Preuss
>Assignee: Konstantin Knauf
>Priority: Blocker
>  Labels: pull-request-available, release-testing
> Fix For: 1.15.0
>
>
> Feature introduced in https://issues.apache.org/jira/browse/FLINK-24323
> Documentation for [datastream 
> api|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/elasticsearch/]
> Documentation for [table 
> api|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/elasticsearch/]
> As 1.15 deprecated the SinkFunction-based Elasticsearch connector and 
> introduces the new connector based on the Sink interface we should test it 
> behaves correctly and as the user expects.
>  
> Some suggestions what to test:
>  * Test delivery guarantees (none, at-least-once) (exactly-once should not 
> run)
>  * Write a simple job that is inserting/upserting data into Elasticsearch
>  * Write a simple job that is inserting/upserting data into Elasticsearch and 
> use a non-default parallelism
>  * Write a simple job in both datastream api and table api
>  * Test restarting jobs and scaling up/down
>  * Test failure of a simple job that is inserting data with exactly-once 
> delivery guarantee by terminating and restarting Elasticsearch
>  * Test against Elasticsearch 6.X and 7.X with the respective connectors



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26281) Test Elasticsearch connector End2End

2022-03-22 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17510489#comment-17510489
 ] 

Alexander Preuss commented on FLINK-26281:
--

[~knaufk] Thank you very much for the testing! I opened a 
[PR|https://github.com/apache/flink/pull/19200] for the docs changes but 
unfortunately I am not at all familiar with ES datastreams. Maybe you or Kostas 
can add the information for that bullet point there? 

> Test Elasticsearch connector End2End
> 
>
> Key: FLINK-26281
> URL: https://issues.apache.org/jira/browse/FLINK-26281
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: Alexander Preuss
>Assignee: Konstantin Knauf
>Priority: Blocker
>  Labels: pull-request-available, release-testing
> Fix For: 1.15.0
>
>
> Feature introduced in https://issues.apache.org/jira/browse/FLINK-24323
> Documentation for [datastream 
> api|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/elasticsearch/]
> Documentation for [table 
> api|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/elasticsearch/]
> As 1.15 deprecated the SinkFunction-based Elasticsearch connector and 
> introduces the new connector based on the Sink interface we should test it 
> behaves correctly and as the user expects.
>  
> Some suggestions what to test:
>  * Test delivery guarantees (none, at-least-once) (exactly-once should not 
> run)
>  * Write a simple job that is inserting/upserting data into Elasticsearch
>  * Write a simple job that is inserting/upserting data into Elasticsearch and 
> use a non-default parallelism
>  * Write a simple job in both datastream api and table api
>  * Test restarting jobs and scaling up/down
>  * Test failure of a simple job that is inserting data with exactly-once 
> delivery guarantee by terminating and restarting Elasticsearch
>  * Test against Elasticsearch 6.X and 7.X with the respective connectors



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26281) Test Elasticsearch connector End2End

2022-03-22 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17510376#comment-17510376
 ] 

Alexander Preuss commented on FLINK-26281:
--

Hi [~guoyangze], yes you can use all of the old connector options as we 
reverted the table API part to how it was previously. For datastream you can 
set connection request timeout, connection timeout and socket timeout instead :)

> Test Elasticsearch connector End2End
> 
>
> Key: FLINK-26281
> URL: https://issues.apache.org/jira/browse/FLINK-26281
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: Alexander Preuss
>Assignee: Konstantin Knauf
>Priority: Blocker
>  Labels: pull-request-available, release-testing
> Fix For: 1.15.0
>
>
> Feature introduced in https://issues.apache.org/jira/browse/FLINK-24323
> Documentation for [datastream 
> api|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/elasticsearch/]
> Documentation for [table 
> api|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/elasticsearch/]
> As 1.15 deprecated the SinkFunction-based Elasticsearch connector and 
> introduces the new connector based on the Sink interface we should test it 
> behaves correctly and as the user expects.
>  
> Some suggestions what to test:
>  * Test delivery guarantees (none, at-least-once) (exactly-once should not 
> run)
>  * Write a simple job that is inserting/upserting data into Elasticsearch
>  * Write a simple job that is inserting/upserting data into Elasticsearch and 
> use a non-default parallelism
>  * Write a simple job in both datastream api and table api
>  * Test restarting jobs and scaling up/down
>  * Test failure of a simple job that is inserting data with exactly-once 
> delivery guarantee by terminating and restarting Elasticsearch
>  * Test against Elasticsearch 6.X and 7.X with the respective connectors



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-26638) Reintroduce ActionFailureHandler for Elasticsearch sink connectors

2022-03-16 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss reassigned FLINK-26638:


Assignee: Alexander Preuss

> Reintroduce ActionFailureHandler for Elasticsearch sink connectors
> --
>
> Key: FLINK-26638
> URL: https://issues.apache.org/jira/browse/FLINK-26638
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.16.0
>Reporter: Alexander Preuss
>Assignee: Alexander Preuss
>Priority: Major
>  Labels: pull-request-available
>
> In FLINK-26281 we found out that users depend on the ActionFailureHandler 
> that was not ported over to the new unified Sink. We now want to add the 
> failure handler back.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26634) Update Chinese version of Elasticsearch connector docs

2022-03-15 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17506768#comment-17506768
 ] 

Alexander Preuss commented on FLINK-26634:
--

Hi [~chenzihao], you can open the PR already; the 19035 PR will likely be 
merged today.

> Update Chinese version of Elasticsearch connector docs
> --
>
> Key: FLINK-26634
> URL: https://issues.apache.org/jira/browse/FLINK-26634
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.15.0
>Reporter: Alexander Preuss
>Assignee: chenzihao
>Priority: Major
>
> In [https://github.com/apache/flink/pull/19035] we made some smaller changes 
> to the documentation for the Elasticsearch connector with regards to the 
> delivery guarantee. These changes still need to be ported over to the Chinese 
> docs.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26638) Reintroduce ActionFailureHandler for Elasticsearch sink connectors

2022-03-14 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-26638:


 Summary: Reintroduce ActionFailureHandler for Elasticsearch sink 
connectors
 Key: FLINK-26638
 URL: https://issues.apache.org/jira/browse/FLINK-26638
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / ElasticSearch
Affects Versions: 1.16.0
Reporter: Alexander Preuss


In FLINK-26281 we found out that users depend on the ActionFailureHandler that 
was not ported over to the new unified Sink. We now want to add the failure 
handler back.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-26634) Update Chinese version of Elasticsearch connector docs

2022-03-14 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss reassigned FLINK-26634:


Assignee: Zihao Chen

> Update Chinese version of Elasticsearch connector docs
> --
>
> Key: FLINK-26634
> URL: https://issues.apache.org/jira/browse/FLINK-26634
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.15.0
>Reporter: Alexander Preuss
>Assignee: Zihao Chen
>Priority: Major
>
> In [https://github.com/apache/flink/pull/19035] we made some smaller changes 
> to the documentation for the Elasticsearch connector with regard to the 
> delivery guarantee. These changes still need to be ported over to the Chinese 
> docs.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26634) Update Chinese version of Elasticsearch connector docs

2022-03-14 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17506196#comment-17506196
 ] 

Alexander Preuss commented on FLINK-26634:
--

Hi [~chenzihao] thank you for picking up the task so quickly, I assigned it to 
you :)

> Update Chinese version of Elasticsearch connector docs
> --
>
> Key: FLINK-26634
> URL: https://issues.apache.org/jira/browse/FLINK-26634
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.15.0
>Reporter: Alexander Preuss
>Assignee: Zihao Chen
>Priority: Major
>
> In [https://github.com/apache/flink/pull/19035] we made some smaller changes 
> to the documentation for the Elasticsearch connector with regard to the 
> delivery guarantee. These changes still need to be ported over to the Chinese 
> docs.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26634) Update Chinese version of Elasticsearch connector docs

2022-03-14 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-26634:


 Summary: Update Chinese version of Elasticsearch connector docs
 Key: FLINK-26634
 URL: https://issues.apache.org/jira/browse/FLINK-26634
 Project: Flink
  Issue Type: Improvement
  Components: chinese-translation, Documentation
Affects Versions: 1.15.0
Reporter: Alexander Preuss


In [https://github.com/apache/flink/pull/19035] we made some smaller changes to 
the documentation for the Elasticsearch connector with regard to the delivery 
guarantee. These changes still need to be ported over to the Chinese docs.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (FLINK-26322) Test FileSink compaction manually

2022-03-14 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss resolved FLINK-26322.
--
Resolution: Fixed

> Test FileSink compaction manually
> -
>
> Key: FLINK-26322
> URL: https://issues.apache.org/jira/browse/FLINK-26322
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Assignee: Alexander Preuss
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.15.0
>
>
> Documentation of compaction on FileSink: 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/filesystem/#compaction]
> Possible scenarios might include
>  # Enable compaction with file-size based compaction strategy.
>  # Enable compaction with number-checkpoints based compaction strategy.
>  # Enable compaction, stop-with-savepoint and restarted with compaction 
> disabled.
>  # Disable compaction, stop-with-savepoint and restarted with compaction 
> enabled.
> For each scenario, it might need to verify that
>  # No repeated and no missing records.
>  # The resulting files' size exceeds the specified condition.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26281) Test Elasticsearch connector End2End

2022-03-10 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17504138#comment-17504138
 ] 

Alexander Preuss commented on FLINK-26281:
--

Hi [~guoyangze], for the failure handler, can you confirm whether they are using 
the sink in the DataStream or Table API? Are they implementing a custom 
ActionRequestFailureHandler or using one of the provided ones (IgnoringFailure, 
NoOp, RetryRejectedExecution)?

> Test Elasticsearch connector End2End
> 
>
> Key: FLINK-26281
> URL: https://issues.apache.org/jira/browse/FLINK-26281
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: Alexander Preuss
>Assignee: Konstantin Knauf
>Priority: Blocker
>  Labels: pull-request-available, release-testing
> Fix For: 1.15.0
>
>
> Feature introduced in https://issues.apache.org/jira/browse/FLINK-24323
> Documentation for [datastream 
> api|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/elasticsearch/]
> Documentation for [table 
> api|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/elasticsearch/]
> As 1.15 deprecates the SinkFunction-based Elasticsearch connector and 
> introduces the new connector based on the Sink interface, we should test that 
> it behaves correctly and as the user expects.
>  
> Some suggestions what to test:
>  * Test delivery guarantees (none, at-least-once) (exactly-once should not 
> run)
>  * Write a simple job that is inserting/upserting data into Elasticsearch
>  * Write a simple job that is inserting/upserting data into Elasticsearch and 
> use a non-default parallelism
>  * Write a simple job in both datastream api and table api
>  * Test restarting jobs and scaling up/down
>  * Test failure of a simple job that is inserting data with exactly-once 
> delivery guarantee by terminating and restarting Elasticsearch
>  * Test against Elasticsearch 6.X and 7.X with the respective connectors



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26322) Test FileSink compaction manually

2022-03-10 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17504074#comment-17504074
 ] 

Alexander Preuss commented on FLINK-26322:
--

Hi guys, I tested against master at commit 26d7c09b. Great to hear that 
scenario 3 is already in progress. For scenario 4 it gets stuck after the 
restart.
The job is a very simple one, using enableCompactionOnCheckpoint(2) and 
env.fromSequence to generate some data.
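
For reference, the "no repeated and no missing records" verification step from the scenario list can be sketched as a standalone check. This is a hypothetical helper, assuming the job writes the values produced by env.fromSequence as one record per file line; it is not part of the actual test code.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical checker for the "no repeated and no missing records"
// criterion: every value 0..expectedCount-1 produced by env.fromSequence
// must appear exactly once across the (compacted) output files.
public class RecordVerifier {
    static boolean verify(List<Long> records, long expectedCount) {
        Set<Long> seen = new HashSet<>();
        for (Long r : records) {
            if (!seen.add(r)) {
                return false; // repeated record
            }
        }
        // All records are distinct; missing records show up as a smaller set.
        return seen.size() == expectedCount;
    }

    public static void main(String[] args) {
        // Output of a job generating the sequence 0..4
        System.out.println(verify(List.of(0L, 1L, 2L, 3L, 4L), 5)); // true
        System.out.println(verify(List.of(0L, 1L, 1L, 3L, 4L), 5)); // false
    }
}
```

Checking the second criterion (resulting file sizes) would additionally require inspecting the output directory, which depends on the configured compaction strategy.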

> Test FileSink compaction manually
> -
>
> Key: FLINK-26322
> URL: https://issues.apache.org/jira/browse/FLINK-26322
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Assignee: Alexander Preuss
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.15.0
>
>
> Documentation of compaction on FileSink: 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/filesystem/#compaction]
> Possible scenarios might include
>  # Enable compaction with file-size based compaction strategy.
>  # Enable compaction with number-checkpoints based compaction strategy.
>  # Enable compaction, stop-with-savepoint and restarted with compaction 
> disabled.
>  # Disable compaction, stop-with-savepoint and restarted with compaction 
> enabled.
> For each scenario, it might need to verify that
>  # No repeated and no missing records.
>  # The resulting files' size exceeds the specified condition.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26322) Test FileSink compaction manually

2022-03-09 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17503602#comment-17503602
 ] 

Alexander Preuss commented on FLINK-26322:
--

I tested the FileSink's compaction and here are my observations for the 
scenarios:
 # Working as expected
 # Working as expected
 # Throws java.lang.IllegalStateException: Illegal committable to compact, 
pending file is null
 # Does not finish / seems to be stuck; no logs appear after the job has been 
submitted

> Test FileSink compaction manually
> -
>
> Key: FLINK-26322
> URL: https://issues.apache.org/jira/browse/FLINK-26322
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Assignee: Alexander Preuss
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.15.0
>
>
> Documentation of compaction on FileSink: 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/filesystem/#compaction]
> Possible scenarios might include
>  # Enable compaction with file-size based compaction strategy.
>  # Enable compaction with number-checkpoints based compaction strategy.
>  # Enable compaction, stop-with-savepoint and restarted with compaction 
> disabled.
>  # Disable compaction, stop-with-savepoint and restarted with compaction 
> enabled.
> For each scenario, it might need to verify that
>  # No repeated and no missing records.
>  # The resulting files' size exceeds the specified condition.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26281) Test Elasticsearch connector End2End

2022-03-07 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17502173#comment-17502173
 ] 

Alexander Preuss commented on FLINK-26281:
--

[~knaufk] you can also try killing the taskmanager :)

> Test Elasticsearch connector End2End
> 
>
> Key: FLINK-26281
> URL: https://issues.apache.org/jira/browse/FLINK-26281
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: Alexander Preuss
>Assignee: Konstantin Knauf
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.15.0
>
>
> Feature introduced in https://issues.apache.org/jira/browse/FLINK-24323
> Documentation for [datastream 
> api|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/elasticsearch/]
> Documentation for [table 
> api|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/elasticsearch/]
> As 1.15 deprecates the SinkFunction-based Elasticsearch connector and 
> introduces the new connector based on the Sink interface, we should test that 
> it behaves correctly and as the user expects.
>  
> Some suggestions what to test:
>  * Test delivery guarantees (none, at-least-once) (exactly-once should not 
> run)
>  * Write a simple job that is inserting/upserting data into Elasticsearch
>  * Write a simple job that is inserting/upserting data into Elasticsearch and 
> use a non-default parallelism
>  * Write a simple job in both datastream api and table api
>  * Test restarting jobs and scaling up/down
>  * Test failure of a simple job that is inserting data with exactly-once 
> delivery guarantee by terminating and restarting Elasticsearch
>  * Test against Elasticsearch 6.X and 7.X with the respective connectors



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (FLINK-26321) KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee fails on azure

2022-03-04 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss closed FLINK-26321.

Resolution: Duplicate

> KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee fails on azure
> --
>
> Key: FLINK-26321
> URL: https://issues.apache.org/jira/browse/FLINK-26321
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Matthias Pohl
>Assignee: Alexander Preuss
>Priority: Critical
>  Labels: test-stability
>
> [This 
> build|https://dev.azure.com/mapohl/flink/_build/results?buildId=769=logs=d543d572-9428-5803-a30c-e8e09bf70915=4e4199a3-fbbb-5d5b-a2be-802955ffb013=35635]
>  failed due to a test failure in 
> {{KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee}}:
> {code}
> Feb 22 21:55:15 [ERROR] Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 65.897 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase
> Feb 22 21:55:15 [ERROR] 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee
>   Time elapsed: 25.251 s  <<< FAILURE!
> Feb 22 21:55:15 java.lang.AssertionError: expected:<20912> but was:<20913>
> Feb 22 21:55:15   at org.junit.Assert.fail(Assert.java:89)
> Feb 22 21:55:15   at org.junit.Assert.failNotEquals(Assert.java:835)
> Feb 22 21:55:15   at org.junit.Assert.assertEquals(Assert.java:647)
> Feb 22 21:55:15   at org.junit.Assert.assertEquals(Assert.java:633)
> Feb 22 21:55:15   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.writeRecordsToKafka(KafkaSinkITCase.java:424)
> Feb 22 21:55:15   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee(KafkaSinkITCase.java:231)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-26322) Test FileSink compaction manually

2022-03-03 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss reassigned FLINK-26322:


Assignee: Alexander Preuss  (was: Alexander Fedulov)

> Test FileSink compaction manually
> -
>
> Key: FLINK-26322
> URL: https://issues.apache.org/jira/browse/FLINK-26322
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Assignee: Alexander Preuss
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.15.0
>
>
> Documentation of compaction on FileSink: 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/filesystem/#compaction]
> Possible scenarios might include
>  # Enable compaction with file-size based compaction strategy.
>  # Enable compaction with number-checkpoints based compaction strategy.
>  # Enable compaction, stop-with-savepoint and restarted with compaction 
> disabled.
>  # Disable compaction, stop-with-savepoint and restarted with compaction 
> enabled.
> For each scenario, it might need to verify that
>  # No repeated and no missing records.
>  # The resulting files' size exceeds the specified condition.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26393) KafkaITCase.testBrokerFailure fails due to a topic isn't cleaned up properly

2022-03-02 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17500097#comment-17500097
 ] 

Alexander Preuss commented on FLINK-26393:
--

This issue has the same root cause as FLINK-26387. The exception that the topic 
already exists comes from the test's retry mechanism. The actual root cause, as 
in FLINK-26387, is that the `ValidatingExactlyOnceSink` does not see the 
expected number of records and thus the test gets stuck. 

> KafkaITCase.testBrokerFailure fails due to a topic isn't cleaned up properly 
> -
>
> Key: FLINK-26393
> URL: https://issues.apache.org/jira/browse/FLINK-26393
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Matthias Pohl
>Assignee: Alexander Preuss
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> [This 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32281=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c=36306]
>  is (not exclusively) failing because of {{KafkaITCase.testBrokerFailure}}:
> {code}
> Feb 28 08:54:10 [ERROR] 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testBrokerFailure  
> Time elapsed: 60.196 s  <<< FAILURE!
> Feb 28 08:54:10 java.lang.AssertionError: Create test topic : 
> brokerFailureTestTopic failed, 
> org.apache.kafka.common.errors.TopicExistsException: Topic 
> 'brokerFailureTestTopic' already exists.
> Feb 28 08:54:10   at org.junit.Assert.fail(Assert.java:89)
> Feb 28 08:54:10   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:208)
> Feb 28 08:54:10   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:97)
> Feb 28 08:54:10   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:216)
> Feb 28 08:54:10   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runBrokerFailureTest(KafkaConsumerTestBase.java:1471)
> Feb 28 08:54:10   at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testBrokerFailure(KafkaITCase.java:112)
> Feb 28 08:54:10   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Feb 28 08:54:10   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Feb 28 08:54:10   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Feb 28 08:54:10   at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 28 08:54:10   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Feb 28 08:54:10   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Feb 28 08:54:10   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Feb 28 08:54:10   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Feb 28 08:54:10   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Feb 28 08:54:10   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Feb 28 08:54:10   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Feb 28 08:54:10   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-26393) KafkaITCase.testBrokerFailure fails due to a topic isn't cleaned up properly

2022-03-02 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss reassigned FLINK-26393:


Assignee: Alexander Preuss  (was: Fabian Paul)

> KafkaITCase.testBrokerFailure fails due to a topic isn't cleaned up properly 
> -
>
> Key: FLINK-26393
> URL: https://issues.apache.org/jira/browse/FLINK-26393
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Matthias Pohl
>Assignee: Alexander Preuss
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> [This 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=32281=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c=36306]
>  is (not exclusively) failing because of {{KafkaITCase.testBrokerFailure}}:
> {code}
> Feb 28 08:54:10 [ERROR] 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testBrokerFailure  
> Time elapsed: 60.196 s  <<< FAILURE!
> Feb 28 08:54:10 java.lang.AssertionError: Create test topic : 
> brokerFailureTestTopic failed, 
> org.apache.kafka.common.errors.TopicExistsException: Topic 
> 'brokerFailureTestTopic' already exists.
> Feb 28 08:54:10   at org.junit.Assert.fail(Assert.java:89)
> Feb 28 08:54:10   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:208)
> Feb 28 08:54:10   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:97)
> Feb 28 08:54:10   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:216)
> Feb 28 08:54:10   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runBrokerFailureTest(KafkaConsumerTestBase.java:1471)
> Feb 28 08:54:10   at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testBrokerFailure(KafkaITCase.java:112)
> Feb 28 08:54:10   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Feb 28 08:54:10   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Feb 28 08:54:10   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Feb 28 08:54:10   at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 28 08:54:10   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Feb 28 08:54:10   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Feb 28 08:54:10   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Feb 28 08:54:10   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Feb 28 08:54:10   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Feb 28 08:54:10   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Feb 28 08:54:10   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Feb 28 08:54:10   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26191) Incorrect license in Elasticsearch connectors

2022-03-01 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17499415#comment-17499415
 ] 

Alexander Preuss commented on FLINK-26191:
--

[~gaoyunhaii] we reverted to an older version and are just waiting for CI to 
pass

> Incorrect license in Elasticsearch connectors
> -
>
> Key: FLINK-26191
> URL: https://issues.apache.org/jira/browse/FLINK-26191
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: Chesnay Schepler
>Assignee: Alexander Preuss
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> The sql-connector-elasticsearch 6/7 connector NOTICE lists the elasticsearch 
> dependencies as ASLv2, but they are nowadays (at least in part) licensed 
> differently (dual-licensed under Elastic License 2.0 & Server Side Public 
> License (SSPL)).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26321) KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee fails on azure

2022-02-28 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17498956#comment-17498956
 ] 

Alexander Preuss commented on FLINK-26321:
--

I had a first look into it and there is actually not an extra record but a 
missing one. The parameters of the assertion are in the wrong order, so in 
reality we are always missing one record of the total.
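
The effect of the swapped arguments can be illustrated with a small stand-in for JUnit 4's assertEquals(expected, actual) failure message. This is a sketch only; the real method is org.junit.Assert.assertEquals, and the record counts below are taken from the failure in the ticket.

```java
// Minimal stand-in mimicking JUnit 4's assertEquals failure message, to show
// how swapping the expected and actual arguments inverts the diagnosis.
public class AssertOrderDemo {
    static String failureMessage(long expected, long actual) {
        return "expected:<" + expected + "> but was:<" + actual + ">";
    }

    public static void main(String[] args) {
        long recordsExpected = 20913; // what the job should have produced
        long recordsSeen = 20912;     // what the consumer actually observed

        // Correct argument order: the message shows one record went missing.
        System.out.println(failureMessage(recordsExpected, recordsSeen));
        // Swapped order, as in the failing test: the message wrongly
        // suggests an extra record arrived.
        System.out.println(failureMessage(recordsSeen, recordsExpected));
    }
}
```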

> KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee fails on azure
> --
>
> Key: FLINK-26321
> URL: https://issues.apache.org/jira/browse/FLINK-26321
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Matthias Pohl
>Assignee: Alexander Preuss
>Priority: Critical
>  Labels: test-stability
>
> [This 
> build|https://dev.azure.com/mapohl/flink/_build/results?buildId=769=logs=d543d572-9428-5803-a30c-e8e09bf70915=4e4199a3-fbbb-5d5b-a2be-802955ffb013=35635]
>  failed due to a test failure in 
> {{KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee}}:
> {code}
> Feb 22 21:55:15 [ERROR] Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 65.897 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase
> Feb 22 21:55:15 [ERROR] 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee
>   Time elapsed: 25.251 s  <<< FAILURE!
> Feb 22 21:55:15 java.lang.AssertionError: expected:<20912> but was:<20913>
> Feb 22 21:55:15   at org.junit.Assert.fail(Assert.java:89)
> Feb 22 21:55:15   at org.junit.Assert.failNotEquals(Assert.java:835)
> Feb 22 21:55:15   at org.junit.Assert.assertEquals(Assert.java:647)
> Feb 22 21:55:15   at org.junit.Assert.assertEquals(Assert.java:633)
> Feb 22 21:55:15   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.writeRecordsToKafka(KafkaSinkITCase.java:424)
> Feb 22 21:55:15   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee(KafkaSinkITCase.java:231)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-26321) KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee fails on azure

2022-02-28 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss reassigned FLINK-26321:


Assignee: Alexander Preuss

> KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee fails on azure
> --
>
> Key: FLINK-26321
> URL: https://issues.apache.org/jira/browse/FLINK-26321
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Matthias Pohl
>Assignee: Alexander Preuss
>Priority: Critical
>  Labels: test-stability
>
> [This 
> build|https://dev.azure.com/mapohl/flink/_build/results?buildId=769=logs=d543d572-9428-5803-a30c-e8e09bf70915=4e4199a3-fbbb-5d5b-a2be-802955ffb013=35635]
>  failed due to a test failure in 
> {{KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee}}:
> {code}
> Feb 22 21:55:15 [ERROR] Tests run: 8, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 65.897 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase
> Feb 22 21:55:15 [ERROR] 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee
>   Time elapsed: 25.251 s  <<< FAILURE!
> Feb 22 21:55:15 java.lang.AssertionError: expected:<20912> but was:<20913>
> Feb 22 21:55:15   at org.junit.Assert.fail(Assert.java:89)
> Feb 22 21:55:15   at org.junit.Assert.failNotEquals(Assert.java:835)
> Feb 22 21:55:15   at org.junit.Assert.assertEquals(Assert.java:647)
> Feb 22 21:55:15   at org.junit.Assert.assertEquals(Assert.java:633)
> Feb 22 21:55:15   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.writeRecordsToKafka(KafkaSinkITCase.java:424)
> Feb 22 21:55:15   at 
> org.apache.flink.connector.kafka.sink.KafkaSinkITCase.testWriteRecordsToKafkaWithExactlyOnceGuarantee(KafkaSinkITCase.java:231)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26254) KafkaSink might violate order of sequence numbers and risk exactly-once processing

2022-02-24 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17497451#comment-17497451
 ] 

Alexander Preuss commented on FLINK-26254:
--

Added some test cases 
[here|https://github.com/alpreu/flink/commit/a4c1b4b5dd8049e4bec6b2cc6f0b8ddfd500d065]
 that could maybe provoke the exception, yet they still always pass. We believe 
the issue to be on Redpanda's side.

> KafkaSink might violate order of sequence numbers and risk exactly-once 
> processing
> --
>
> Key: FLINK-26254
> URL: https://issues.apache.org/jira/browse/FLINK-26254
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0, 1.14.3
>Reporter: Fabian Paul
>Assignee: Alexander Preuss
>Priority: Critical
>
> When running the KafkaSink in exactly-once mode with a very low checkpoint 
> interval users are seeing `OutOfOrderSequenceException`.
> It could be caused by the fact that the connector has a pool of 
> KafkaProducers and the sequence numbers are not shared/reset if a new 
> KafkaProducer tries to write to a partition while the previous KafkaProducer 
> is still occupied for committing.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26254) KafkaSink might violate order of sequence numbers and risk exactly-once processing

2022-02-21 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17495589#comment-17495589
 ] 

Alexander Preuss commented on FLINK-26254:
--

[~knaufk] what was your setup to reproduce this? I'm trying to extend the 
existing ITCase to create some more checkpoints but I do not see it happening 
there, with neither a high nor a low checkpoint interval.

> KafkaSink might violate order of sequence numbers and risk exactly-once 
> processing
> --
>
> Key: FLINK-26254
> URL: https://issues.apache.org/jira/browse/FLINK-26254
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0, 1.14.3
>Reporter: Fabian Paul
>Priority: Critical
>
> When running the KafkaSink in exactly-once mode with a very low checkpoint 
> interval users are seeing `OutOfOrderSequenceException`.
> It could be caused by the fact that the connector has a pool of 
> KafkaProducers and the sequence numbers are not shared/reset if a new 
> KafkaProducer tries to write to a partition while the previous KafkaProducer 
> is still occupied for committing.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26281) Test Elasticsearch connector End2End

2022-02-21 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss updated FLINK-26281:
-
Description: 
Feature introduced in https://issues.apache.org/jira/browse/FLINK-24323

Documentation for [datastream 
api|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/elasticsearch/]

Documentation for [table 
api|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/elasticsearch/]

As 1.15 deprecates the SinkFunction-based Elasticsearch connector and 
introduces the new connector based on the Sink interface, we should test that 
it behaves correctly and as the user expects.

 

Some suggestions what to test:
 * Test delivery guarantees (none, at-least-once) (exactly-once should not run)
 * Write a simple job that is inserting/upserting data into Elasticsearch
 * Write a simple job that is inserting/upserting data into Elasticsearch and 
use a non-default parallelism
 * Write a simple job in both datastream api and table api
 * Test restarting jobs and scaling up/down
 * Test failure of a simple job that is inserting data with exactly-once 
delivery guarantee by terminating and restarting Elasticsearch
 * Test against Elasticsearch 6.X and 7.X with the respective connectors

  was:
Feature introduced in https://issues.apache.org/jira/browse/FLINK-24323


As 1.15 deprecates the SinkFunction-based Elasticsearch connector and 
introduces the new connector based on the Sink interface, we should test that 
it behaves correctly and as the user expects.

 

Some suggestions on what to test:
 * Test delivery guarantees (none, at-least-once) (exactly-once should not run)
 * Write a simple job that is inserting/upserting data into Elasticsearch
 * Write a simple job that is inserting/upserting data into Elasticsearch and 
use a non-default parallelism
 * Write a simple job in both datastream api and table api
 * Test restarting jobs and scaling up/down
 * Test against Elasticsearch 6.X and 7.X with the respective connectors

When testing please also consider the following things:
 - Is the documentation easy to understand
 - Are the error messages, log messages, APIs etc. easy to understand
 - Is the feature working as expected under normal conditions
 - Is the feature working / failing as expected with invalid input, induced 
errors etc.

If you find a problem during testing, please file a ticket and link it in this 
testing ticket.
During the testing, and once you are finished, please write a short summary of 
all things you have tested.


> Test Elasticsearch connector End2End
> 
>
> Key: FLINK-26281
> URL: https://issues.apache.org/jira/browse/FLINK-26281
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: Alexander Preuss
>Priority: Major
>  Labels: release-testing
>
> Feature introduced in https://issues.apache.org/jira/browse/FLINK-24323
> Documentation for [datastream 
> api|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/elasticsearch/]
> Documentation for [table 
> api|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/elasticsearch/]
> As 1.15 deprecates the SinkFunction-based Elasticsearch connector and 
> introduces a new connector based on the Sink interface, we should test that it 
> behaves correctly and as users expect.
>  
> Some suggestions on what to test:
>  * Test delivery guarantees (none, at-least-once) (exactly-once should not 
> run)
>  * Write a simple job that is inserting/upserting data into Elasticsearch
>  * Write a simple job that is inserting/upserting data into Elasticsearch and 
> use a non-default parallelism
>  * Write a simple job in both datastream api and table api
>  * Test restarting jobs and scaling up/down
>  * Test failure of a simple job that is inserting data with exactly-once 
> delivery guarantee by terminating and restarting Elasticsearch
>  * Test against Elasticsearch 6.X and 7.X with the respective connectors



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26272) Elasticsearch7SinkITCase.testWriteJsonToElasticsearch fails with socket timeout

2022-02-21 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17495561#comment-17495561
 ] 

Alexander Preuss commented on FLINK-26272:
--

I have seen this happen on other failures as well. There is nothing else 
suspicious in the Elasticsearch logs. Seeing as we're already at half a minute, 
I don't see why some simple indexing actions should take this much time. 
I saw that some people running into this were able to solve it by increasing 
Elasticsearch's memory, but this is really a wild guess. Should we try 
increasing the container memory and see if this fixes things? [~mapohl] 
[~chesnay] 

> Elasticsearch7SinkITCase.testWriteJsonToElasticsearch fails with socket 
> timeout
> ---
>
> Key: FLINK-26272
> URL: https://issues.apache.org/jira/browse/FLINK-26272
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> We observed a test failure in [this 
> build|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31883=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d=12917]
>  with {{Elasticsearch7SinkITCase.testWriteJsonToElasticsearch}} failing due 
> to a {{SocketTimeoutException}}:
> {code}
> Feb 18 18:04:20 [ERROR] Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 80.248 s <<< FAILURE! - in 
> org.apache.flink.connector.elasticsearch.sink.Elasticsearch7SinkITCase
> Feb 18 18:04:20 [ERROR] 
> org.apache.flink.connector.elasticsearch.sink.Elasticsearch7SinkITCase.testWriteJsonToElasticsearch(BiFunction)[1]
>   Time elapsed: 31.525 s  <<< ERROR!
> Feb 18 18:04:20 org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
> Feb 18 18:04:20   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> Feb 18 18:04:20   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
> Feb 18 18:04:20   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> Feb 18 18:04:20   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> Feb 18 18:04:20   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> Feb 18 18:04:20   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> Feb 18 18:04:20   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:259)
> Feb 18 18:04:20   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> Feb 18 18:04:20   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> Feb 18 18:04:20   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> Feb 18 18:04:20   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> Feb 18 18:04:20   at 
> org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389)
> Feb 18 18:04:20   at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
> Feb 18 18:04:20   at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
> Feb 18 18:04:20   at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
> Feb 18 18:04:20   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> Feb 18 18:04:20   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> Feb 18 18:04:20   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> Feb 18 18:04:20   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> Feb 18 18:04:20   at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47)
> Feb 18 18:04:20   at akka.dispatch.OnComplete.internal(Future.scala:300)
> Feb 18 18:04:20   at akka.dispatch.OnComplete.internal(Future.scala:297)
> Feb 18 18:04:20   at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
> Feb 18 18:04:20   at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
> Feb 18 18:04:20   at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
> Feb 18 18:04:20   at 
> 

[jira] [Created] (FLINK-26281) Test Elasticsearch connector End2End

2022-02-21 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-26281:


 Summary: Test Elasticsearch connector End2End
 Key: FLINK-26281
 URL: https://issues.apache.org/jira/browse/FLINK-26281
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / ElasticSearch
Affects Versions: 1.15.0
Reporter: Alexander Preuss


Feature introduced in https://issues.apache.org/jira/browse/FLINK-24323


As 1.15 deprecates the SinkFunction-based Elasticsearch connector and 
introduces a new connector based on the Sink interface, we should test that it 
behaves correctly and as users expect.

 

Some suggestions on what to test:
 * Test delivery guarantees (none, at-least-once) (exactly-once should not run)
 * Write a simple job that is inserting/upserting data into Elasticsearch
 * Write a simple job that is inserting/upserting data into Elasticsearch and 
use a non-default parallelism
 * Write a simple job in both datastream api and table api
 * Test restarting jobs and scaling up/down
 * Test against Elasticsearch 6.X and 7.X with the respective connectors

When testing please also consider the following things:
 - Is the documentation easy to understand
 - Are the error messages, log messages, APIs etc. easy to understand
 - Is the feature working as expected under normal conditions
 - Is the feature working / failing as expected with invalid input, induced 
errors etc.

If you find a problem during testing, please file a ticket and link it in this 
testing ticket.
During the testing, and once you are finished, please write a short summary of 
all things you have tested.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26238) PulsarSinkITCase.writeRecordsToPulsar failed on azure

2022-02-21 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17495396#comment-17495396
 ] 

Alexander Preuss commented on FLINK-26238:
--

Another case: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31785=logs=e60d5d3b-2e2f-5bb7-484a-16135ea73209=9363ed9e-7ac2-5835-c98d-4fdb89b19329=27252

> PulsarSinkITCase.writeRecordsToPulsar failed on azure
> -
>
> Key: FLINK-26238
> URL: https://issues.apache.org/jira/browse/FLINK-26238
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> Feb 17 12:19:44 [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, 
> Time elapsed: 342.177 s <<< FAILURE! - in 
> org.apache.flink.connector.pulsar.sink.PulsarSinkITCase
> Feb 17 12:19:44 [ERROR] 
> org.apache.flink.connector.pulsar.sink.PulsarSinkITCase.writeRecordsToPulsar(DeliveryGuarantee)[3]
>   Time elapsed: 302.4 s  <<< FAILURE!
> Feb 17 12:19:44 java.lang.AssertionError: 
> Feb 17 12:19:44 
> Feb 17 12:19:44 Actual and expected should have same size but actual size is:
> Feb 17 12:19:44   0
> Feb 17 12:19:44 while expected size is:
> Feb 17 12:19:44   179
> Feb 17 12:19:44 Actual was:
> Feb 17 12:19:44   []
> Feb 17 12:19:44 Expected was:
> Feb 17 12:19:44   ["NONE-CzYEIFDd-0-eELBphcHiu",
> Feb 17 12:19:44 "NONE-CzYEIFDd-1-odr3NpH6pg",
> Feb 17 12:19:44 "NONE-CzYEIFDd-2-HfIphNFXoM",
> Feb 17 12:19:44 "NONE-CzYEIFDd-3-iaZ9v2HCnw",
> Feb 17 12:19:44 "NONE-CzYEIFDd-4-6KkXK34GZl",
> Feb 17 12:19:44 "NONE-CzYEIFDd-5-jK9UxXSQcX",
> Feb 17 12:19:44 "NONE-CzYEIFDd-6-HipVPVNqZA",
> Feb 17 12:19:44 "NONE-CzYEIFDd-7-lT4lVH3CzX",
> Feb 17 12:19:44 "NONE-CzYEIFDd-8-4jShEBuQaS",
> Feb 17 12:19:44 "NONE-CzYEIFDd-9-fInSd97msu",
> Feb 17 12:19:44 "NONE-CzYEIFDd-10-dGBm5e92os",
> Feb 17 12:19:44 "NONE-CzYEIFDd-11-GkINb6Dipx",
> Feb 17 12:19:44 "NONE-CzYEIFDd-12-M7Q8atHhNQ",
> Feb 17 12:19:44 "NONE-CzYEIFDd-13-EG2FpyziCL",
> Feb 17 12:19:44 "NONE-CzYEIFDd-14-4HwGJSOkTk",
> Feb 17 12:19:44 "NONE-CzYEIFDd-15-UC0IwwKN0O",
> Feb 17 12:19:44 "NONE-CzYEIFDd-16-D9FOV8hKBq",
> Feb 17 12:19:44 "NONE-CzYEIFDd-17-J2Zb6pNmOO",
> Feb 17 12:19:44 "NONE-CzYEIFDd-18-abo3YgkYKP",
> Feb 17 12:19:44 "NONE-CzYEIFDd-19-4Q5GbBRSc6",
> Feb 17 12:19:44 "NONE-CzYEIFDd-20-WxSP9oExJP",
> Feb 17 12:19:44 "NONE-CzYEIFDd-21-0wiqq21CY1",
> Feb 17 12:19:44 "NONE-CzYEIFDd-22-3iJQiFjgQu",
> Feb 17 12:19:44 "NONE-CzYEIFDd-23-78je74YwU6",
> Feb 17 12:19:44 "NONE-CzYEIFDd-24-tEkEaF9IuD",
> Feb 17 12:19:44 "NONE-CzYEIFDd-25-vDi5h44tjJ",
> Feb 17 12:19:44 "NONE-CzYEIFDd-26-GzIh4FLlvP",
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31739=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=27095



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26220) KafkaSourceITCase$KafkaSpecificTests.testTimestamp fails on Azure Pipelines

2022-02-17 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-26220:


 Summary: KafkaSourceITCase$KafkaSpecificTests.testTimestamp fails 
on Azure Pipelines
 Key: FLINK-26220
 URL: https://issues.apache.org/jira/browse/FLINK-26220
 Project: Flink
  Issue Type: Technical Debt
  Components: Connectors / Kafka
Affects Versions: 1.15.0
Reporter: Alexander Preuss


{code:java}
Feb 16 18:10:37 [ERROR] Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time 
elapsed: 60.963 s <<< FAILURE! - in 
org.apache.flink.connector.kafka.source.KafkaSourceITCase$KafkaSpecificTests
Feb 16 18:10:37 [ERROR] 
org.apache.flink.connector.kafka.source.KafkaSourceITCase$KafkaSpecificTests.testTimestamp(boolean)[1]
  Time elapsed: 11.21 s  <<< FAILURE!
Feb 16 18:10:37 java.lang.AssertionError: Create test topic : 
testTimestamp-3028462271882246016 failed, The topic metadata failed to 
propagate to Kafka broker.
Feb 16 18:10:37 at org.junit.Assert.fail(Assert.java:89)
Feb 16 18:10:37 at 
org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:223)
Feb 16 18:10:37 at 
org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:98)
Feb 16 18:10:37 at 
org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:216)
Feb 16 18:10:37 at 
org.apache.flink.connector.kafka.source.KafkaSourceITCase$KafkaSpecificTests.testTimestamp(KafkaSourceITCase.java:108)
Feb 16 18:10:37 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
Feb 16 18:10:37 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
Feb 16 18:10:37 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Feb 16 18:10:37 at java.lang.reflect.Method.invoke(Method.java:498)
 {code}
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=31686=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c=35870



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26111) ElasticsearchSinkITCase.testElasticsearchSink failed on azure

2022-02-17 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17493905#comment-17493905
 ] 

Alexander Preuss commented on FLINK-26111:
--

Enabled logging in https://issues.apache.org/jira/browse/FLINK-25698, which 
should help with this.

> ElasticsearchSinkITCase.testElasticsearchSink failed on azure
> -
>
> Key: FLINK-26111
> URL: https://issues.apache.org/jira/browse/FLINK-26111
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.14.3
>Reporter: Yun Gao
>Assignee: Alexander Preuss
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> 2022-02-12T02:31:35.0591708Z Feb 12 02:31:35 [ERROR] testElasticsearchSink  
> Time elapsed: 32.062 s  <<< ERROR!
> 2022-02-12T02:31:35.0592911Z Feb 12 02:31:35 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2022-02-12T02:31:35.0594116Z Feb 12 02:31:35  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2022-02-12T02:31:35.0595330Z Feb 12 02:31:35  at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
> 2022-02-12T02:31:35.0597985Z Feb 12 02:31:35  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2022-02-12T02:31:35.0598951Z Feb 12 02:31:35  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2022-02-12T02:31:35.0599766Z Feb 12 02:31:35  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-02-12T02:31:35.0600573Z Feb 12 02:31:35  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-02-12T02:31:35.0601580Z Feb 12 02:31:35  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:258)
> 2022-02-12T02:31:35.0670936Z Feb 12 02:31:35  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-02-12T02:31:35.0672538Z Feb 12 02:31:35  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-02-12T02:31:35.0673610Z Feb 12 02:31:35  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-02-12T02:31:35.0674648Z Feb 12 02:31:35  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-02-12T02:31:35.0675637Z Feb 12 02:31:35  at 
> org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389)
> 2022-02-12T02:31:35.0676705Z Feb 12 02:31:35  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
> 2022-02-12T02:31:35.0677921Z Feb 12 02:31:35  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
> 2022-02-12T02:31:35.0679204Z Feb 12 02:31:35  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
> 2022-02-12T02:31:35.0680428Z Feb 12 02:31:35  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-02-12T02:31:35.0681685Z Feb 12 02:31:35  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-02-12T02:31:35.0682868Z Feb 12 02:31:35  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-02-12T02:31:35.0683909Z Feb 12 02:31:35  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-02-12T02:31:35.0684997Z Feb 12 02:31:35  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47)
> 2022-02-12T02:31:35.0686539Z Feb 12 02:31:35  at 
> akka.dispatch.OnComplete.internal(Future.scala:300)
> 2022-02-12T02:31:35.0687383Z Feb 12 02:31:35  at 
> akka.dispatch.OnComplete.internal(Future.scala:297)
> 2022-02-12T02:31:35.0688214Z Feb 12 02:31:35  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
> 2022-02-12T02:31:35.0689231Z Feb 12 02:31:35  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
> 2022-02-12T02:31:35.0690125Z Feb 12 02:31:35  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
> 2022-02-12T02:31:35.0691199Z Feb 12 02:31:35  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65)
> 2022-02-12T02:31:35.0692578Z Feb 12 02:31:35  at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
> 2022-02-12T02:31:35.0693634Z Feb 12 02:31:35  at 
> scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284)
> 

[jira] [Assigned] (FLINK-26185) E2E Elasticsearch Tests should use the new Sink interface

2022-02-17 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss reassigned FLINK-26185:


Assignee: Alexander Preuss

> E2E Elasticsearch Tests should use the new Sink interface
> -
>
> Key: FLINK-26185
> URL: https://issues.apache.org/jira/browse/FLINK-26185
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: 1.15.0
>Reporter: Alexander Preuss
>Assignee: Alexander Preuss
>Priority: Major
>  Labels: pull-request-available
>
> Currently the E2E test for Elasticsearch (test_streaming_elasticsearch.sh) 
> exercises the old Sink interface implementation. As we are now moving to the 
> new Sink interface, we should update the test to cover the new implementation.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26195) Kafka connector tests are mixing JUnit4 and JUnit5

2022-02-16 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-26195:


 Summary: Kafka connector tests are mixing JUnit4 and JUnit5
 Key: FLINK-26195
 URL: https://issues.apache.org/jira/browse/FLINK-26195
 Project: Flink
  Issue Type: Technical Debt
  Components: Connectors / Kafka
Affects Versions: 1.15.0
Reporter: Alexander Preuss


In the tests for the Kafka connector there are multiple occurrences of mixing 
JUnit 4 and JUnit 5. This prevents proper logging from e.g. 
TestLoggerExtension. There are also tests that run on JUnit 4 but use 
assertions or annotations from JUnit 5.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26185) E2E Elasticsearch Tests should use the new Sink interface

2022-02-16 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-26185:


 Summary: E2E Elasticsearch Tests should use the new Sink interface
 Key: FLINK-26185
 URL: https://issues.apache.org/jira/browse/FLINK-26185
 Project: Flink
  Issue Type: Bug
  Components: Connectors / ElasticSearch, Tests
Affects Versions: 1.15.0
Reporter: Alexander Preuss


Currently the E2E test for Elasticsearch (test_streaming_elasticsearch.sh) 
exercises the old Sink interface implementation. As we are now moving to the new 
Sink interface, we should update the test to cover the new implementation.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25980) remove unnecessary condition in IntervalJoinOperator

2022-02-14 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss updated FLINK-25980:
-
Affects Version/s: (was: 1.11.6)

> remove unnecessary condition in IntervalJoinOperator
> 
>
> Key: FLINK-25980
> URL: https://issues.apache.org/jira/browse/FLINK-25980
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.12.7, 1.13.5, 1.14.2, 1.14.3
>Reporter: hongshu
>Assignee: hongshu
>Priority: Major
>  Labels: pull-request-available, pull_request_available
>
> The condition 'currentWatermark != Long.MIN_VALUE' is covered by the 
> subsequent condition 'timestamp < currentWatermark' in 
> org.apache.flink.streaming.api.operators.co.IntervalJoinOperator#isLate:
> {code:java}
> private boolean isLate(long timestamp) {
> long currentWatermark = internalTimerService.currentWatermark();
> return currentWatermark != Long.MIN_VALUE && timestamp < currentWatermark;
> } {code}
> If currentWatermark == Long.MIN_VALUE, then timestamp < currentWatermark also 
> returns false, since no long value is smaller than Long.MIN_VALUE, so the 
> condition currentWatermark != Long.MIN_VALUE is unnecessary.
> We can use the following code directly:
> {code:java}
> private boolean isLate(long timestamp) {
> long currentWatermark = internalTimerService.currentWatermark();
> return timestamp < currentWatermark;
> } {code}
>  
>  
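The equivalence claimed in FLINK-25980 above can be checked with a small standalone sketch. The class name, helper methods, and sample values below are invented for illustration; this is not Flink's actual operator, which reads the watermark from its internal timer service:

```java
// Hypothetical standalone check (not Flink code): dropping the
// Long.MIN_VALUE guard cannot change the result of isLate, because
// no long value is strictly less than Long.MIN_VALUE.
public class IsLateEquivalence {

    // Original form with the redundant guard.
    static boolean isLateWithGuard(long timestamp, long currentWatermark) {
        return currentWatermark != Long.MIN_VALUE && timestamp < currentWatermark;
    }

    // Simplified form proposed in the ticket.
    static boolean isLateSimplified(long timestamp, long currentWatermark) {
        return timestamp < currentWatermark;
    }

    public static void main(String[] args) {
        long[] samples = {Long.MIN_VALUE, -1L, 0L, 42L, Long.MAX_VALUE};
        // Compare both variants over all pairs of boundary values.
        for (long wm : samples) {
            for (long ts : samples) {
                if (isLateWithGuard(ts, wm) != isLateSimplified(ts, wm)) {
                    throw new AssertionError("mismatch at ts=" + ts + ", wm=" + wm);
                }
            }
        }
        System.out.println("equivalent");
    }
}
```

In particular, when the watermark is Long.MIN_VALUE both variants return false for every timestamp, which is the only case the guard was protecting.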



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25691) ElasticsearchSinkITCase.testElasticsearchSink fails on AZP

2022-02-11 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17490956#comment-17490956
 ] 

Alexander Preuss commented on FLINK-25691:
--

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=29839=logs=c91190b6-40ae-57b2-5999-31b869b0a7c1=41463ccd-0694-5d4d-220d-8f771e7d098b=12662]
 might be the same problem; unfortunately there is no Elasticsearch log output 
in the 1.14 logs.

> ElasticsearchSinkITCase.testElasticsearchSink fails on AZP
> --
>
> Key: FLINK-25691
> URL: https://issues.apache.org/jira/browse/FLINK-25691
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: stale-critical, test-stability
>
> The test {{ElasticsearchSinkITCase.testElasticsearchSink}} fails on AZP with
> {code}
> 2022-01-18T08:10:11.9777311Z Jan 18 08:10:11 [ERROR] 
> org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSinkITCase.testElasticsearchSink
>   Time elapsed: 31.816 s  <<< ERROR!
> 2022-01-18T08:10:11.9778438Z Jan 18 08:10:11 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2022-01-18T08:10:11.9779184Z Jan 18 08:10:11  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2022-01-18T08:10:11.9779993Z Jan 18 08:10:11  at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
> 2022-01-18T08:10:11.9780892Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2022-01-18T08:10:11.9781726Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2022-01-18T08:10:11.9782380Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-01-18T08:10:11.9783097Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-01-18T08:10:11.9783866Z Jan 18 08:10:11  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:258)
> 2022-01-18T08:10:11.9784615Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-01-18T08:10:11.9791362Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-01-18T08:10:11.9792139Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-01-18T08:10:11.9793011Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-01-18T08:10:11.9793620Z Jan 18 08:10:11  at 
> org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389)
> 2022-01-18T08:10:11.9794267Z Jan 18 08:10:11  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
> 2022-01-18T08:10:11.9795177Z Jan 18 08:10:11  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
> 2022-01-18T08:10:11.9796451Z Jan 18 08:10:11  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
> 2022-01-18T08:10:11.9797325Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-01-18T08:10:11.9798108Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-01-18T08:10:11.9798749Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-01-18T08:10:11.9799364Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-01-18T08:10:11.970Z Jan 18 08:10:11  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47)
> 2022-01-18T08:10:11.9800561Z Jan 18 08:10:11  at 
> akka.dispatch.OnComplete.internal(Future.scala:300)
> 2022-01-18T08:10:11.9801061Z Jan 18 08:10:11  at 
> akka.dispatch.OnComplete.internal(Future.scala:297)
> 2022-01-18T08:10:11.9801661Z Jan 18 08:10:11  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
> 2022-01-18T08:10:11.9802186Z Jan 18 08:10:11  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
> 2022-01-18T08:10:11.9802713Z Jan 18 08:10:11  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
> 2022-01-18T08:10:11.9803348Z Jan 18 08:10:11  at 
> 

[jira] [Assigned] (FLINK-25691) ElasticsearchSinkITCase.testElasticsearchSink fails on AZP

2022-01-20 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss reassigned FLINK-25691:


Assignee: Alexander Preuss

> ElasticsearchSinkITCase.testElasticsearchSink fails on AZP
> --
>
> Key: FLINK-25691
> URL: https://issues.apache.org/jira/browse/FLINK-25691
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: Till Rohrmann
>Assignee: Alexander Preuss
>Priority: Critical
>  Labels: test-stability
>
> The test {{ElasticsearchSinkITCase.testElasticsearchSink}} fails on AZP with
> {code}
> 2022-01-18T08:10:11.9777311Z Jan 18 08:10:11 [ERROR] 
> org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSinkITCase.testElasticsearchSink
>   Time elapsed: 31.816 s  <<< ERROR!
> 2022-01-18T08:10:11.9778438Z Jan 18 08:10:11 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2022-01-18T08:10:11.9779184Z Jan 18 08:10:11  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2022-01-18T08:10:11.9779993Z Jan 18 08:10:11  at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
> 2022-01-18T08:10:11.9780892Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2022-01-18T08:10:11.9781726Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2022-01-18T08:10:11.9782380Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-01-18T08:10:11.9783097Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)

[jira] [Assigned] (FLINK-25691) ElasticsearchSinkITCase.testElasticsearchSink fails on AZP

2022-01-20 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss reassigned FLINK-25691:


Assignee: (was: Alexander Preuss)

> ElasticsearchSinkITCase.testElasticsearchSink fails on AZP
> --
>
> Key: FLINK-25691
> URL: https://issues.apache.org/jira/browse/FLINK-25691
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> The test {{ElasticsearchSinkITCase.testElasticsearchSink}} fails on AZP with
> {code}
> 2022-01-18T08:10:11.9777311Z Jan 18 08:10:11 [ERROR] 
> org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSinkITCase.testElasticsearchSink
>   Time elapsed: 31.816 s  <<< ERROR!
> 2022-01-18T08:10:11.9778438Z Jan 18 08:10:11 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2022-01-18T08:10:11.9779184Z Jan 18 08:10:11  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> 2022-01-18T08:10:11.9779993Z Jan 18 08:10:11  at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
> 2022-01-18T08:10:11.9780892Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2022-01-18T08:10:11.9781726Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2022-01-18T08:10:11.9782380Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-01-18T08:10:11.9783097Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-01-18T08:10:11.9783866Z Jan 18 08:10:11  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$1(AkkaInvocationHandler.java:258)
> 2022-01-18T08:10:11.9784615Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-01-18T08:10:11.9791362Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-01-18T08:10:11.9792139Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-01-18T08:10:11.9793011Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-01-18T08:10:11.9793620Z Jan 18 08:10:11  at 
> org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1389)
> 2022-01-18T08:10:11.9794267Z Jan 18 08:10:11  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
> 2022-01-18T08:10:11.9795177Z Jan 18 08:10:11  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
> 2022-01-18T08:10:11.9796451Z Jan 18 08:10:11  at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
> 2022-01-18T08:10:11.9797325Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2022-01-18T08:10:11.9798108Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2022-01-18T08:10:11.9798749Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2022-01-18T08:10:11.9799364Z Jan 18 08:10:11  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2022-01-18T08:10:11.970Z Jan 18 08:10:11  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$1.onComplete(AkkaFutureUtils.java:47)
> 2022-01-18T08:10:11.9800561Z Jan 18 08:10:11  at 
> akka.dispatch.OnComplete.internal(Future.scala:300)
> 2022-01-18T08:10:11.9801061Z Jan 18 08:10:11  at 
> akka.dispatch.OnComplete.internal(Future.scala:297)
> 2022-01-18T08:10:11.9801661Z Jan 18 08:10:11  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:224)
> 2022-01-18T08:10:11.9802186Z Jan 18 08:10:11  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:221)
> 2022-01-18T08:10:11.9802713Z Jan 18 08:10:11  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
> 2022-01-18T08:10:11.9803348Z Jan 18 08:10:11  at 
> org.apache.flink.runtime.concurrent.akka.AkkaFutureUtils$DirectExecutionContext.execute(AkkaFutureUtils.java:65)
> 2022-01-18T08:10:11.9804008Z Jan 18 08:10:11  at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
> 2022-01-18T08:10:11.9804600Z Jan 18 08:10:11  at 
> 

[jira] [Assigned] (FLINK-25531) The test testRetryCommittableOnRetriableError takes one hour before completing successfully

2022-01-10 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss reassigned FLINK-25531:


Assignee: Alexander Preuss  (was: Fabian Paul)

> The test testRetryCommittableOnRetriableError takes one hour before 
> completing successfully
> --
>
> Key: FLINK-25531
> URL: https://issues.apache.org/jira/browse/FLINK-25531
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.15.0
>Reporter: Martijn Visser
>Assignee: Alexander Preuss
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> When working on https://issues.apache.org/jira/browse/FLINK-25504 I noticed 
> that the {{test_ci kafka_gelly}} run took more than 1:30 hours. 
> When looking at the logs for this PR 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28866=logs=ae4f8708-9994-57d3-c2d7-b892156e7812=c5f0071e-1851-543e-9a45-9ac140befc32
>  I noticed that 
> {{org.apache.flink.connector.kafka.sink.KafkaCommitterTest.testRetryCommittableOnRetriableError}}
>  is running for an hour:
> {code:java}
> 13:22:31,145 [kafka-producer-network-thread | producer-transactionalId] WARN  
> org.apache.kafka.clients.NetworkClient   [] - [Producer 
> clientId=producer-transactionalId, transactionalId=transactionalId] 
> Connection to node -1 (localhost/127.0.0.1:1) could not be established. 
> Broker may not be available.
> 13:22:31,145 [kafka-producer-network-thread | producer-transactionalId] WARN  
> org.apache.kafka.clients.NetworkClient   [] - [Producer 
> clientId=producer-transactionalId, transactionalId=transactionalId] Bootstrap 
> broker localhost:1 (id: -1 rack: null) disconnected
> 13:22:31,347 [kafka-producer-network-thread | producer-transactionalId] WARN  
> org.apache.kafka.clients.NetworkClient   [] - [Producer 
> clientId=producer-transactionalId, transactionalId=transactionalId] 
> Connection to node -1 (localhost/127.0.0.1:1) could not be established. 
> Broker may not be available.
> ...
> 14:22:29,472 [kafka-producer-network-thread | producer-transactionalId] WARN  
> org.apache.kafka.clients.NetworkClient   [] - [Producer 
> clientId=producer-transactionalId, transactionalId=transactionalId] Bootstrap 
> broker localhost:1 (id: -1 rack: null) disconnected
> 14:22:30,324 [kafka-producer-network-thread | producer-transactionalId] WARN  
> org.apache.kafka.clients.NetworkClient   [] - [Producer 
> clientId=producer-transactionalId, transactionalId=transactionalId] 
> Connection to node -1 (localhost/127.0.0.1:1) could not be established. 
> Broker may not be available.
> 14:22:30,324 [kafka-producer-network-thread | producer-transactionalId] WARN  
> org.apache.kafka.clients.NetworkClient   [] - [Producer 
> clientId=producer-transactionalId, transactionalId=transactionalId] Bootstrap 
> broker localhost:1 (id: -1 rack: null) disconnected
> 14:22:31,144 [main] INFO  
> org.apache.kafka.clients.producer.KafkaProducer  [] - [Producer 
> clientId=producer-transactionalId, transactionalId=transactionalId] 
> Proceeding to force close the producer since pending requests could not be 
> completed within timeout 360 ms.
> 14:22:31,145 [kafka-producer-network-thread | producer-transactionalId] INFO  
> org.apache.kafka.clients.producer.internals.TransactionManager [] - [Producer 
> clientId=producer-transactionalId, transactionalId=transactionalId] 
> Transiting to fatal error state due to 
> org.apache.kafka.common.KafkaException: The producer closed forcefully
> 14:22:31,145 [kafka-producer-network-thread | producer-transactionalId] INFO  
> org.apache.kafka.clients.producer.internals.TransactionManager [] - [Producer 
> clientId=producer-transactionalId, transactionalId=transactionalId] 
> Transiting to fatal error state due to 
> org.apache.kafka.common.KafkaException: The producer closed forcefully
> 14:22:31,148 [main] INFO  
> org.apache.flink.connectors.test.common.junit.extensions.TestLoggerExtension 
> [] - 
> 
> Test 
> org.apache.flink.connector.kafka.sink.KafkaCommitterTest.testRetryCommittableOnRetriableError
>  successfully run.
> 
> {code}
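The log excerpt above shows the producer retrying an unreachable broker (localhost:1) for a full hour before force-closing. One way to avoid such runaway test retries is to bound the retry loop with a wall-clock deadline. A minimal sketch, under assumed names (BoundedRetry and retryUntil are hypothetical helpers, not the actual KafkaCommitter code):

```java
import java.time.Duration;
import java.util.function.BooleanSupplier;

public class BoundedRetry {
    /**
     * Repeatedly invokes the action until it succeeds or the deadline elapses.
     * Returns true on success, false when the deadline passes (or the thread
     * is interrupted) without a successful attempt.
     */
    static boolean retryUntil(BooleanSupplier action, Duration timeout, Duration pause) {
        long deadline = System.nanoTime() + timeout.toNanos();
        do {
            if (action.getAsBoolean()) {
                return true; // e.g. the committable was accepted
            }
            try {
                Thread.sleep(pause.toMillis()); // back off before the next attempt
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        } while (System.nanoTime() < deadline);
        return false; // give up instead of retrying for an hour
    }

    public static void main(String[] args) {
        // A "broker" that never becomes reachable, like localhost:1 in the log above.
        boolean ok = retryUntil(() -> false, Duration.ofMillis(200), Duration.ofMillis(20));
        System.out.println(ok ? "committed" : "gave up after deadline");
    }
}
```

With such a deadline the test fails fast with a clear timeout instead of blocking the CI run.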



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25589) Update Chinese version of Elasticsearch connector docs

2022-01-10 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17471969#comment-17471969
 ] 

Alexander Preuss commented on FLINK-25589:
--

Thank you [~biyuhao], I assigned you the ticket

> Update Chinese version of Elasticsearch connector docs
> --
>
> Key: FLINK-25589
> URL: https://issues.apache.org/jira/browse/FLINK-25589
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Connectors / ElasticSearch, 
> Documentation
>Reporter: Alexander Preuss
>Priority: Major
>
> In FLINK-24326 we updated the documentation with the new Elasticsearch sink 
> interface. The Chinese version still has to be updated as well.
> The affected pages are: 
> docs/content/docs/connectors/datastream/elasticsearch.md
> docs/content/docs/connectors/table/elasticsearch.md
> English doc PR for reference:
> https://github.com/apache/flink/pull/17930



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-25589) Update Chinese version of Elasticsearch connector docs

2022-01-10 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss reassigned FLINK-25589:


Assignee: Yuhao Bi

> Update Chinese version of Elasticsearch connector docs
> --
>
> Key: FLINK-25589
> URL: https://issues.apache.org/jira/browse/FLINK-25589
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Connectors / ElasticSearch, 
> Documentation
>Reporter: Alexander Preuss
>Assignee: Yuhao Bi
>Priority: Major
>
> In FLINK-24326 we updated the documentation with the new Elasticsearch sink 
> interface. The Chinese version still has to be updated as well.
> The affected pages are: 
> docs/content/docs/connectors/datastream/elasticsearch.md
> docs/content/docs/connectors/table/elasticsearch.md
> English doc PR for reference:
> https://github.com/apache/flink/pull/17930



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25589) Update Chinese version of Elasticsearch connector docs

2022-01-10 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-25589:


 Summary: Update Chinese version of Elasticsearch connector docs
 Key: FLINK-25589
 URL: https://issues.apache.org/jira/browse/FLINK-25589
 Project: Flink
  Issue Type: Improvement
  Components: chinese-translation, Connectors / ElasticSearch, 
Documentation
Reporter: Alexander Preuss


In FLINK-24326 we updated the documentation with the new Elasticsearch sink 
interface. The Chinese version still has to be updated as well.

The affected pages are: 
docs/content/docs/connectors/datastream/elasticsearch.md
docs/content/docs/connectors/table/elasticsearch.md

English doc PR for reference:
https://github.com/apache/flink/pull/17930



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (FLINK-16713) Support source mode of elasticsearch connector

2022-01-07 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss closed FLINK-16713.

Resolution: Duplicate

> Support source mode of elasticsearch connector
> --
>
> Key: FLINK-16713
> URL: https://issues.apache.org/jira/browse/FLINK-16713
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.10.0
>Reporter: jackray wang
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> [https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/connect.html#elasticsearch-connector]
> For append-only queries, the connector can also operate in [append 
> mode|https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/connect.html#update-modes]
>  for exchanging only INSERT messages with the external system. If no key is 
> defined by the query, a key is automatically generated by Elasticsearch.
> I want to know why the Flink Elasticsearch connector only supports sink but 
> not source. Which version could add this feature?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25568) Add Elasticsearch 7 Source Connector

2022-01-07 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-25568:


 Summary: Add Elasticsearch 7 Source Connector
 Key: FLINK-25568
 URL: https://issues.apache.org/jira/browse/FLINK-25568
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / ElasticSearch
Reporter: Alexander Preuss
Assignee: Alexander Preuss


We want to support not only Sink but also Source for Elasticsearch. As a first 
step we want to add a ScanTableSource for Elasticsearch 7.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (FLINK-25507) ElasticsearchWriterITCase#testWriteOnBulkFlush test failed

2022-01-04 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss resolved FLINK-25507.
--
Resolution: Not A Problem

Test requires Docker to be running

> ElasticsearchWriterITCase#testWriteOnBulkFlush test failed 
> ---
>
> Key: FLINK-25507
> URL: https://issues.apache.org/jira/browse/FLINK-25507
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: ranqiqiang
>Priority: Major
>
> When  I run ElasticsearchWriterITCase#testWriteOnBulkFlush, got this.
> Is this a problem?
>  
>  
> {code:java}
> java.lang.ExceptionInInitializerError
>     at sun.misc.Unsafe.ensureClassInitialized(Native Method)
>     at 
> sun.reflect.UnsafeFieldAccessorFactory.newFieldAccessor(UnsafeFieldAccessorFactory.java:43)
>     at 
> sun.reflect.ReflectionFactory.newFieldAccessor(ReflectionFactory.java:156)
>     at java.lang.reflect.Field.acquireFieldAccessor(Field.java:1088)
>     at java.lang.reflect.Field.getFieldAccessor(Field.java:1069)
>     at java.lang.reflect.Field.get(Field.java:393)
>     at 
> org.testcontainers.junit.jupiter.TestcontainersExtension.getContainerInstance(TestcontainersExtension.java:217)
>     at 
> org.testcontainers.junit.jupiter.TestcontainersExtension.lambda$findSharedContainers$10(TestcontainersExtension.java:178)
>     at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
>     at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
>     at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
>     at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
>     at 
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
>     at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
>     at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566)
>     at 
> org.testcontainers.junit.jupiter.TestcontainersExtension.findSharedContainers(TestcontainersExtension.java:179)
>     at 
> org.testcontainers.junit.jupiter.TestcontainersExtension.beforeAll(TestcontainersExtension.java:57)
>     at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeBeforeAllCallbacks$10(ClassBasedTestDescriptor.java:381)
>     at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>     at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeBeforeAllCallbacks(ClassBasedTestDescriptor.java:381)
>     at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:205)
>     at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:80)
>     at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:148)
>     at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>     at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
>     at 
> org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
>     at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
>     at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>     at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
>     at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
>     at java.util.ArrayList.forEach(ArrayList.java:1259)
>     at 
> org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
>     at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
>     at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>     at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
>     at 
> org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
>     at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
>     at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>     at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
>     at 
> 

[jira] [Commented] (FLINK-25223) ElasticsearchWriterITCase fails on AZP

2021-12-17 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17461317#comment-17461317
 ] 

Alexander Preuss commented on FLINK-25223:
--

[~chesnay] I believe it only started happening on master because we increased 
the Docker image version there, so it might not be an issue on older releases. 
Can anyone provide failing builds from older releases? The failing build 
mentioned by [~gaoyunhaii] looks to me like it is not related to Elasticsearch 
but rather to Docker in general

> ElasticsearchWriterITCase fails on AZP
> --
>
> Key: FLINK-25223
> URL: https://issues.apache.org/jira/browse/FLINK-25223
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: Till Rohrmann
>Assignee: Alexander Preuss
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
>
> The {{ElasticsearchWriterITCase}} fails on AZP because
> {code}
> 2021-12-08T13:56:59.5449851Z Dec 08 13:56:59 [ERROR] 
> org.apache.flink.connector.elasticsearch.sink.ElasticsearchWriterITCase  Time 
> elapsed: 171.046 s  <<< ERROR!
> 2021-12-08T13:56:59.5450680Z Dec 08 13:56:59 
> org.testcontainers.containers.ContainerLaunchException: Container startup 
> failed
> 2021-12-08T13:56:59.5451652Z Dec 08 13:56:59  at 
> org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:336)
> 2021-12-08T13:56:59.5452677Z Dec 08 13:56:59  at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:317)
> 2021-12-08T13:56:59.5453637Z Dec 08 13:56:59  at 
> org.testcontainers.junit.jupiter.TestcontainersExtension$StoreAdapter.start(TestcontainersExtension.java:242)
> 2021-12-08T13:56:59.5454757Z Dec 08 13:56:59  at 
> org.testcontainers.junit.jupiter.TestcontainersExtension$StoreAdapter.access$200(TestcontainersExtension.java:229)
> 2021-12-08T13:56:59.5455946Z Dec 08 13:56:59  at 
> org.testcontainers.junit.jupiter.TestcontainersExtension.lambda$null$1(TestcontainersExtension.java:59)
> 2021-12-08T13:56:59.5457322Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.ExtensionValuesStore.lambda$getOrComputeIfAbsent$4(ExtensionValuesStore.java:86)
> 2021-12-08T13:56:59.5458571Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.ExtensionValuesStore$MemoizingSupplier.computeValue(ExtensionValuesStore.java:223)
> 2021-12-08T13:56:59.5459771Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.ExtensionValuesStore$MemoizingSupplier.get(ExtensionValuesStore.java:211)
> 2021-12-08T13:56:59.5460693Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.ExtensionValuesStore$StoredValue.evaluate(ExtensionValuesStore.java:191)
> 2021-12-08T13:56:59.5461437Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.ExtensionValuesStore$StoredValue.access$100(ExtensionValuesStore.java:171)
> 2021-12-08T13:56:59.5462198Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.ExtensionValuesStore.getOrComputeIfAbsent(ExtensionValuesStore.java:89)
> 2021-12-08T13:56:59.5467999Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.NamespaceAwareStore.getOrComputeIfAbsent(NamespaceAwareStore.java:53)
> 2021-12-08T13:56:59.5468791Z Dec 08 13:56:59  at 
> org.testcontainers.junit.jupiter.TestcontainersExtension.lambda$beforeAll$2(TestcontainersExtension.java:59)
> 2021-12-08T13:56:59.5469436Z Dec 08 13:56:59  at 
> java.util.ArrayList.forEach(ArrayList.java:1259)
> 2021-12-08T13:56:59.5470058Z Dec 08 13:56:59  at 
> org.testcontainers.junit.jupiter.TestcontainersExtension.beforeAll(TestcontainersExtension.java:59)
> 2021-12-08T13:56:59.5470846Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeBeforeAllCallbacks$10(ClassBasedTestDescriptor.java:381)
> 2021-12-08T13:56:59.5471641Z Dec 08 13:56:59  at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> 2021-12-08T13:56:59.5472403Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeBeforeAllCallbacks(ClassBasedTestDescriptor.java:381)
> 2021-12-08T13:56:59.5473190Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:205)
> 2021-12-08T13:56:59.5474001Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:80)
> 2021-12-08T13:56:59.5474759Z Dec 08 13:56:59  at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:148)
> 2021-12-08T13:56:59.5475833Z Dec 08 13:56:59  at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> 

[jira] [Updated] (FLINK-25326) KafkaUtil.createKafkaContainer log levels are not set correctly

2021-12-15 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss updated FLINK-25326:
-
Labels: pull-requests-available  (was: pull-request-available)

> KafkaUtil.createKafkaContainer log levels are not set correctly
> ---
>
> Key: FLINK-25326
> URL: https://issues.apache.org/jira/browse/FLINK-25326
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Alexander Preuss
>Assignee: Alexander Preuss
>Priority: Minor
>  Labels: pull-requests-available
>
> The internal Kafka log levels set in the KafkaUtils.createKafkaContainer 
> method are wrong due to the order of the log-level checks: if the test logger 
> is set to e.g. 'DEBUG', `logger.isErrorEnabled()` already evaluates to true, 
> so the log level gets set to ERROR instead.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-25326) KafkaUtil.createKafkaContainer log levels are not set correctly

2021-12-15 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss reassigned FLINK-25326:


Assignee: Alexander Preuss

> KafkaUtil.createKafkaContainer log levels are not set correctly
> ---
>
> Key: FLINK-25326
> URL: https://issues.apache.org/jira/browse/FLINK-25326
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Alexander Preuss
>Assignee: Alexander Preuss
>Priority: Minor
>
> The internal Kafka log levels set in the KafkaUtils.createKafkaContainer 
> method are wrong due to the order of the log-level checks: if the test logger 
> is set to e.g. 'DEBUG', `logger.isErrorEnabled()` already evaluates to true, 
> so the log level gets set to ERROR instead.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25326) KafkaUtil.createKafkaContainer log levels are not set correctly

2021-12-15 Thread Alexander Preuss (Jira)
Alexander Preuss created FLINK-25326:


 Summary: KafkaUtil.createKafkaContainer log levels are not set 
correctly
 Key: FLINK-25326
 URL: https://issues.apache.org/jira/browse/FLINK-25326
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.15.0
Reporter: Alexander Preuss


The internal Kafka log levels set in the KafkaUtils.createKafkaContainer method 
are wrong due to the order of the log-level checks: if the test logger is set to 
e.g. 'DEBUG', `logger.isErrorEnabled()` already evaluates to true, so the log 
level gets set to ERROR instead.
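The bug described above stems from slf4j-style semantics: `isXxxEnabled()` answers true for every level at or above the logger's threshold, so a DEBUG logger also answers true to `isErrorEnabled()`, and a detection chain that probes ERROR first always lands on ERROR. A self-contained sketch (LogLevelProbe and its method names are hypothetical, not the actual KafkaUtil code) modeling the levels as integers:

```java
public class LogLevelProbe {
    // Severity ordering mirrors slf4j: TRACE < DEBUG < INFO < WARN < ERROR.
    static final int TRACE = 0, DEBUG = 1, INFO = 2, WARN = 3, ERROR = 4;

    // A logger with threshold T enables every level at or above T.
    static boolean isEnabled(int threshold, int level) {
        return level >= threshold;
    }

    // Buggy: probes the least verbose level first, so ERROR always wins.
    static String buggyDetect(int threshold) {
        if (isEnabled(threshold, ERROR)) return "ERROR";
        if (isEnabled(threshold, WARN))  return "WARN";
        if (isEnabled(threshold, INFO))  return "INFO";
        if (isEnabled(threshold, DEBUG)) return "DEBUG";
        return "TRACE";
    }

    // Fixed: probes the most verbose level first.
    static String fixedDetect(int threshold) {
        if (isEnabled(threshold, TRACE)) return "TRACE";
        if (isEnabled(threshold, DEBUG)) return "DEBUG";
        if (isEnabled(threshold, INFO))  return "INFO";
        if (isEnabled(threshold, WARN))  return "WARN";
        return "ERROR";
    }

    public static void main(String[] args) {
        System.out.println(buggyDetect(DEBUG)); // prints ERROR (wrong)
        System.out.println(fixedDetect(DEBUG)); // prints DEBUG (right)
    }
}
```

The fix is simply to reverse the order of the checks so the first matching level is the most verbose one the logger actually enables.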



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25223) ElasticsearchWriterITCase fails on AZP

2021-12-09 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17456305#comment-17456305
 ] 

Alexander Preuss commented on FLINK-25223:
--

[~trohrmann]  PR available

> ElasticsearchWriterITCase fails on AZP
> --
>
> Key: FLINK-25223
> URL: https://issues.apache.org/jira/browse/FLINK-25223
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
>
> The {{ElasticsearchWriterITCase}} fails on AZP because
> {code}
> 2021-12-08T13:56:59.5449851Z Dec 08 13:56:59 [ERROR] 
> org.apache.flink.connector.elasticsearch.sink.ElasticsearchWriterITCase  Time 
> elapsed: 171.046 s  <<< ERROR!
> 2021-12-08T13:56:59.5450680Z Dec 08 13:56:59 
> org.testcontainers.containers.ContainerLaunchException: Container startup 
> failed
> 2021-12-08T13:56:59.5451652Z Dec 08 13:56:59  at 
> org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:336)
> 2021-12-08T13:56:59.5452677Z Dec 08 13:56:59  at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:317)
> 2021-12-08T13:56:59.5453637Z Dec 08 13:56:59  at 
> org.testcontainers.junit.jupiter.TestcontainersExtension$StoreAdapter.start(TestcontainersExtension.java:242)
> 2021-12-08T13:56:59.5454757Z Dec 08 13:56:59  at 
> org.testcontainers.junit.jupiter.TestcontainersExtension$StoreAdapter.access$200(TestcontainersExtension.java:229)
> 2021-12-08T13:56:59.5455946Z Dec 08 13:56:59  at 
> org.testcontainers.junit.jupiter.TestcontainersExtension.lambda$null$1(TestcontainersExtension.java:59)
> 2021-12-08T13:56:59.5457322Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.ExtensionValuesStore.lambda$getOrComputeIfAbsent$4(ExtensionValuesStore.java:86)
> 2021-12-08T13:56:59.5458571Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.ExtensionValuesStore$MemoizingSupplier.computeValue(ExtensionValuesStore.java:223)
> 2021-12-08T13:56:59.5459771Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.ExtensionValuesStore$MemoizingSupplier.get(ExtensionValuesStore.java:211)
> 2021-12-08T13:56:59.5460693Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.ExtensionValuesStore$StoredValue.evaluate(ExtensionValuesStore.java:191)
> 2021-12-08T13:56:59.5461437Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.ExtensionValuesStore$StoredValue.access$100(ExtensionValuesStore.java:171)
> 2021-12-08T13:56:59.5462198Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.ExtensionValuesStore.getOrComputeIfAbsent(ExtensionValuesStore.java:89)
> 2021-12-08T13:56:59.5467999Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.NamespaceAwareStore.getOrComputeIfAbsent(NamespaceAwareStore.java:53)
> 2021-12-08T13:56:59.5468791Z Dec 08 13:56:59  at 
> org.testcontainers.junit.jupiter.TestcontainersExtension.lambda$beforeAll$2(TestcontainersExtension.java:59)
> 2021-12-08T13:56:59.5469436Z Dec 08 13:56:59  at 
> java.util.ArrayList.forEach(ArrayList.java:1259)
> 2021-12-08T13:56:59.5470058Z Dec 08 13:56:59  at 
> org.testcontainers.junit.jupiter.TestcontainersExtension.beforeAll(TestcontainersExtension.java:59)
> 2021-12-08T13:56:59.5470846Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeBeforeAllCallbacks$10(ClassBasedTestDescriptor.java:381)
> 2021-12-08T13:56:59.5471641Z Dec 08 13:56:59  at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> 2021-12-08T13:56:59.5472403Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeBeforeAllCallbacks(ClassBasedTestDescriptor.java:381)
> 2021-12-08T13:56:59.5473190Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:205)
> 2021-12-08T13:56:59.5474001Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:80)
> 2021-12-08T13:56:59.5474759Z Dec 08 13:56:59  at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:148)
> 2021-12-08T13:56:59.5475833Z Dec 08 13:56:59  at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> 2021-12-08T13:56:59.5476739Z Dec 08 13:56:59  at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
> 2021-12-08T13:56:59.5477520Z Dec 08 13:56:59  at 
> org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
> 2021-12-08T13:56:59.5478227Z Dec 08 13:56:59  at 
> 

[jira] [Commented] (FLINK-25223) ElasticsearchWriterITCase fails on AZP

2021-12-09 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17456283#comment-17456283
 ] 

Alexander Preuss commented on FLINK-25223:
--

[~trohrmann] yeah, I saw that too, will submit a PR for disabling it for the 
time being

> ElasticsearchWriterITCase fails on AZP
> --
>
> Key: FLINK-25223
> URL: https://issues.apache.org/jira/browse/FLINK-25223
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> The {{ElasticsearchWriterITCase}} fails on AZP because
> {code}
> 2021-12-08T13:56:59.5449851Z Dec 08 13:56:59 [ERROR] 
> org.apache.flink.connector.elasticsearch.sink.ElasticsearchWriterITCase  Time 
> elapsed: 171.046 s  <<< ERROR!
> 2021-12-08T13:56:59.5450680Z Dec 08 13:56:59 
> org.testcontainers.containers.ContainerLaunchException: Container startup 
> failed
> 2021-12-08T13:56:59.5451652Z Dec 08 13:56:59  at 
> org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:336)
> 2021-12-08T13:56:59.5452677Z Dec 08 13:56:59  at 
> org.testcontainers.containers.GenericContainer.start(GenericContainer.java:317)
> 2021-12-08T13:56:59.5453637Z Dec 08 13:56:59  at 
> org.testcontainers.junit.jupiter.TestcontainersExtension$StoreAdapter.start(TestcontainersExtension.java:242)
> 2021-12-08T13:56:59.5454757Z Dec 08 13:56:59  at 
> org.testcontainers.junit.jupiter.TestcontainersExtension$StoreAdapter.access$200(TestcontainersExtension.java:229)
> 2021-12-08T13:56:59.5455946Z Dec 08 13:56:59  at 
> org.testcontainers.junit.jupiter.TestcontainersExtension.lambda$null$1(TestcontainersExtension.java:59)
> 2021-12-08T13:56:59.5457322Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.ExtensionValuesStore.lambda$getOrComputeIfAbsent$4(ExtensionValuesStore.java:86)
> 2021-12-08T13:56:59.5458571Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.ExtensionValuesStore$MemoizingSupplier.computeValue(ExtensionValuesStore.java:223)
> 2021-12-08T13:56:59.5459771Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.ExtensionValuesStore$MemoizingSupplier.get(ExtensionValuesStore.java:211)
> 2021-12-08T13:56:59.5460693Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.ExtensionValuesStore$StoredValue.evaluate(ExtensionValuesStore.java:191)
> 2021-12-08T13:56:59.5461437Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.ExtensionValuesStore$StoredValue.access$100(ExtensionValuesStore.java:171)
> 2021-12-08T13:56:59.5462198Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.ExtensionValuesStore.getOrComputeIfAbsent(ExtensionValuesStore.java:89)
> 2021-12-08T13:56:59.5467999Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.execution.NamespaceAwareStore.getOrComputeIfAbsent(NamespaceAwareStore.java:53)
> 2021-12-08T13:56:59.5468791Z Dec 08 13:56:59  at 
> org.testcontainers.junit.jupiter.TestcontainersExtension.lambda$beforeAll$2(TestcontainersExtension.java:59)
> 2021-12-08T13:56:59.5469436Z Dec 08 13:56:59  at 
> java.util.ArrayList.forEach(ArrayList.java:1259)
> 2021-12-08T13:56:59.5470058Z Dec 08 13:56:59  at 
> org.testcontainers.junit.jupiter.TestcontainersExtension.beforeAll(TestcontainersExtension.java:59)
> 2021-12-08T13:56:59.5470846Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeBeforeAllCallbacks$10(ClassBasedTestDescriptor.java:381)
> 2021-12-08T13:56:59.5471641Z Dec 08 13:56:59  at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> 2021-12-08T13:56:59.5472403Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeBeforeAllCallbacks(ClassBasedTestDescriptor.java:381)
> 2021-12-08T13:56:59.5473190Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:205)
> 2021-12-08T13:56:59.5474001Z Dec 08 13:56:59  at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:80)
> 2021-12-08T13:56:59.5474759Z Dec 08 13:56:59  at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:148)
> 2021-12-08T13:56:59.5475833Z Dec 08 13:56:59  at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
> 2021-12-08T13:56:59.5476739Z Dec 08 13:56:59  at 
> org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
> 2021-12-08T13:56:59.5477520Z Dec 08 13:56:59  at 
> org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
> 2021-12-08T13:56:59.5478227Z Dec 08 13:56:59  

[jira] [Commented] (FLINK-25223) ElasticsearchWriterITCase fails on AZP

2021-12-09 Thread Alexander Preuss (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17456254#comment-17456254
 ] 

Alexander Preuss commented on FLINK-25223:
--

[~chesnay] Digging around, I found that no memory limit is applied by 
default to containers started by Testcontainers.
I also checked the logs and saw this:
{code:java}
13:54:59,435 [                main] INFO  
org.testcontainers.containers.wait.strategy.HttpWaitStrategy [] - /sad_ellis: 
Waiting for 120 seconds for URL: http://172.17.0.1:45862/ (where port 45862 
maps to container port 9200)
13:56:59,444 [                main] ERROR  
[docker.elastic.co/elasticsearch/elasticsearch:7.15.2]    [] - Could not start 
container {code}
Could it also be that the ports are not freed up?
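For reference, a minimal sketch of how a memory cap and a longer startup wait could be configured through the Testcontainers Java API (the image name is taken from the log above; the 1 GiB limit and 5-minute timeout are illustrative assumptions, not values used by the Flink tests):

```java
import java.time.Duration;

import org.testcontainers.containers.wait.strategy.Wait;
import org.testcontainers.elasticsearch.ElasticsearchContainer;
import org.testcontainers.utility.DockerImageName;

public class EsContainerConfig {

    public static ElasticsearchContainer limitedContainer() {
        return new ElasticsearchContainer(
                        DockerImageName.parse(
                                "docker.elastic.co/elasticsearch/elasticsearch:7.15.2"))
                // keep the ES heap small so the process fits under the limit below
                .withEnv("ES_JAVA_OPTS", "-Xms512m -Xmx512m")
                // hard 1 GiB memory limit via the docker-java HostConfig
                .withCreateContainerCmdModifier(cmd ->
                        cmd.getHostConfig().withMemory(1024L * 1024 * 1024))
                // wait longer than the 120 s HttpWaitStrategy default seen in the log
                .waitingFor(Wait.forHttp("/").forPort(9200)
                        .withStartupTimeout(Duration.ofMinutes(5)));
    }
}
```

Without an explicit `withCreateContainerCmdModifier` limit, the container can consume as much host memory as Docker allows, which matters when several test containers run on one CI agent.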

> ElasticsearchWriterITCase fails on AZP
> --
>
> Key: FLINK-25223
> URL: https://issues.apache.org/jira/browse/FLINK-25223
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> The {{ElasticsearchWriterITCase}} fails on AZP because
> {code}
> 2021-12-08T13:56:59.5449851Z Dec 08 13:56:59 [ERROR] 
> org.apache.flink.connector.elasticsearch.sink.ElasticsearchWriterITCase  Time 
> elapsed: 171.046 s  <<< ERROR!
> 2021-12-08T13:56:59.5450680Z Dec 08 13:56:59 
> org.testcontainers.containers.ContainerLaunchException: Container startup 
> failed
> {code}

[jira] [Resolved] (FLINK-24323) Port ElasticSearch Sink to new Unified Sink API (FLIP-143)

2021-12-08 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss resolved FLINK-24323.
--
Release Note: 
`ElasticsearchXSinkBuilder` supersedes `ElasticsearchSink.Builder` and provides 
at-least-once writing with the new unified sink interface, supporting both the 
batch and streaming modes of the DataStream API. To upgrade, please stop the 
job with a savepoint.

For Elasticsearch 7 users that use the old ElasticsearchSink interface 
(org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSink) and 
depend on their own elasticsearch-rest-high-level-client version, updating the 
client dependency to a version >= 7.14.0 is required due to internal changes.
  Resolution: Fixed
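A hedged sketch of what the migration to the unified sink looks like in user code, based on the Flink 1.15 `Elasticsearch7SinkBuilder` API (host, index name, and the string payload are placeholders, not values from this issue):

```java
import org.apache.flink.connector.elasticsearch.sink.Elasticsearch7SinkBuilder;
import org.apache.flink.connector.elasticsearch.sink.ElasticsearchSink;
import org.apache.flink.streaming.api.datastream.DataStream;

import org.apache.http.HttpHost;
import org.elasticsearch.client.Requests;

public class UnifiedSinkMigration {

    public static void attachSink(DataStream<String> stream) {
        // Build the new unified sink instead of ElasticsearchSink.Builder
        ElasticsearchSink<String> sink = new Elasticsearch7SinkBuilder<String>()
                .setHosts(new HttpHost("localhost", 9200, "http"))
                // the emitter converts each element into an index request
                .setEmitter((element, context, indexer) ->
                        indexer.add(Requests.indexRequest()
                                .index("my-index")
                                .source("data", element)))
                .build();

        // sinkTo(...) replaces the old addSink(...) call
        stream.sinkTo(sink);
    }
}
```

The `sinkTo` call is the visible change in user code; the emitter lambda plays the role the old `ElasticsearchSinkFunction` did.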

> Port ElasticSearch Sink to new Unified Sink API (FLIP-143)
> --
>
> Key: FLINK-24323
> URL: https://issues.apache.org/jira/browse/FLINK-24323
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream, Table SQL / API
>Affects Versions: 1.15.0
>Reporter: Alexander Preuss
>Assignee: Fabian Paul
>Priority: Major
>
> We want to port the current ElasticSearch Sink to the new Unified Sink API as 
> was done with the [Kafka 
> Sink|https://issues.apache.org/jira/browse/FLINK-22902].



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (FLINK-25189) Update Elasticsearch Sinks to latest minor versions

2021-12-08 Thread Alexander Preuss (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Preuss resolved FLINK-25189.
--
Release Note: 
Elasticsearch libraries used by the connector are bumped to 7.15.2 and 6.8.20 
respectively. 

Elasticsearch 7 users that use the old ElasticsearchSink interface 
(org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSink) and 
depend on their own elasticsearch-rest-high-level-client version will need to 
update the client dependency to a version >= 7.14.0 due to internal changes.
  Resolution: Fixed

> Update Elasticsearch Sinks to latest minor versions
> ---
>
> Key: FLINK-25189
> URL: https://issues.apache.org/jira/browse/FLINK-25189
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.15.0
>Reporter: Alexander Preuss
>Assignee: Alexander Preuss
>Priority: Major
>  Labels: pull-request-available
>
> We want to bump the elasticsearch dependencies used in the 
> elasticsearch-connector modules to their latest respective minor versions 
> (6.8.20 and 7.15.2)





  1   2   >