[GitHub] [flink] curcur edited a comment on pull request #17354: [FLINK-24200][streaming] Calculating maximum alignment time rather than using the constant value
curcur edited a comment on pull request #17354: URL: https://github.com/apache/flink/pull/17354#issuecomment-930773202 The change looks good to me. Thanks @akalash for fixing this! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] curcur commented on pull request #17354: [FLINK-24200][streaming] Calculating maximum alignment time rather than using the constant value
curcur commented on pull request #17354: URL: https://github.com/apache/flink/pull/17354#issuecomment-930773202 The change looks good to me.
[GitHub] [flink] curcur commented on a change in pull request #17354: [FLINK-24200][streaming] Calculating maximum alignment time rather than using the constant value
curcur commented on a change in pull request #17354: URL: https://github.com/apache/flink/pull/17354#discussion_r719048866

## File path: flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/io/CheckpointBarrierTrackerTest.java

@@ -607,13 +606,26 @@ public void testTwoLastBarriersOneByOne() throws Exception {
     ValidatingCheckpointHandler validator = new ValidatingCheckpointHandler();
     inputGate = createCheckpointedInputGate(2, sequence, validator);
-    for (BufferOrEvent boe : sequence) {
-        assertEquals(boe, inputGate.pollNext().get());
-        Thread.sleep(10);
-    }
+    // start checkpoint 1
+    assertEquals(sequence[0], inputGate.pollNext().get());
+    Thread.sleep(10);
+
+    // start checkpoint 2
+    long start = System.currentTimeMillis();
+    assertEquals(sequence[1], inputGate.pollNext().get());
+    Thread.sleep(1);

Review comment: nit: maybe keep it as sleep(10)? Do you know whether the alignment timer works the same as the system timer? If not, the test may still be unstable until we introduce the manual clock?
[GitHub] [flink] curcur commented on a change in pull request #17354: [FLINK-24200][streaming] Calculating maximum alignment time rather than using the constant value
curcur commented on a change in pull request #17354: URL: https://github.com/apache/flink/pull/17354#discussion_r719048866

## File path: flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/io/CheckpointBarrierTrackerTest.java

@@ -607,13 +606,26 @@ public void testTwoLastBarriersOneByOne() throws Exception {
     ValidatingCheckpointHandler validator = new ValidatingCheckpointHandler();
     inputGate = createCheckpointedInputGate(2, sequence, validator);
-    for (BufferOrEvent boe : sequence) {
-        assertEquals(boe, inputGate.pollNext().get());
-        Thread.sleep(10);
-    }
+    // start checkpoint 1
+    assertEquals(sequence[0], inputGate.pollNext().get());
+    Thread.sleep(10);
+
+    // start checkpoint 2
+    long start = System.currentTimeMillis();
+    assertEquals(sequence[1], inputGate.pollNext().get());
+    Thread.sleep(1);

Review comment: nit: maybe keep it as sleep(10)? Do you know whether the alignment timer is the same as the system timer? If not, the test may still be unstable until we introduce the manual clock?
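The review comments above argue that sleeping on the wall clock leaves the test timing-dependent, and that injecting a manual clock would make it deterministic. A minimal standalone sketch of that idea using only `java.time.Clock` (the class and method names below are illustrative, not Flink's actual test utilities):

```java
import java.time.Clock;
import java.time.Duration;
import java.time.Instant;
import java.time.ZoneId;

public class ManualClockSketch {
    // Measures elapsed milliseconds against an injected Clock rather than
    // System.currentTimeMillis(), so a test can advance time explicitly.
    static long elapsedMillis(Clock clock, long startMillis) {
        return clock.millis() - startMillis;
    }

    public static void main(String[] args) {
        // A fixed clock never ticks on its own.
        Clock base = Clock.fixed(Instant.ofEpochMilli(0), ZoneId.of("UTC"));
        long start = base.millis();
        // Deterministic replacement for Thread.sleep(10): shift the clock.
        Clock advanced = Clock.offset(base, Duration.ofMillis(10));
        System.out.println(elapsedMillis(advanced, start)); // prints 10
    }
}
```

Because `Clock.offset` advances time by an exact amount, the measured duration no longer depends on scheduler behavior the way `Thread.sleep` does.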
[jira] [Commented] (FLINK-24407) Pulsar connector chinese document link to Pulsar document location incorrectly.
[ https://issues.apache.org/jira/browse/FLINK-24407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17422514#comment-17422514 ] Aiden Gong commented on FLINK-24407: Hi [~jark], I will submit a PR to fix this issue. Please assign it to me. > Pulsar connector chinese document link to Pulsar document location > incorrectly. > --- > > Key: FLINK-24407 > URL: https://issues.apache.org/jira/browse/FLINK-24407 > Project: Flink > Issue Type: Bug > Components: Documentation > Affects Versions: 1.14.0 > Reporter: Aiden Gong > Priority: Minor > Fix For: 1.14.1 > > Attachments: 企业微信截图_16329715717678.png > > > Pulsar connector chinese document link to Pulsar document location > incorrectly. > !企业微信截图_16329715717678.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-24407) Pulsar connector chinese document link to Pulsar document location incorrectly.
[ https://issues.apache.org/jira/browse/FLINK-24407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aiden Gong updated FLINK-24407: Description: Pulsar connector chinese document link to Pulsar document location incorrectly. !企业微信截图_16329715717678.png! was: Pulsar connector chinese document link to Pulsar document location incorrectly.
[jira] [Updated] (FLINK-24407) Pulsar connector chinese document link to Pulsar document location incorrectly.
[ https://issues.apache.org/jira/browse/FLINK-24407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aiden Gong updated FLINK-24407: Description: Pulsar connector chinese document link to Pulsar document location incorrectly.
[jira] [Created] (FLINK-24407) Pulsar connector chinese document link to Pulsar document location incorrectly.
Aiden Gong created FLINK-24407: Summary: Pulsar connector chinese document link to Pulsar document location incorrectly. Key: FLINK-24407 URL: https://issues.apache.org/jira/browse/FLINK-24407 Project: Flink Issue Type: Bug Components: Documentation Affects Versions: 1.14.0 Reporter: Aiden Gong Fix For: 1.14.1 Attachments: 企业微信截图_16329715717678.png !image-2021-09-30-11-13-19-031.png!
[jira] [Updated] (FLINK-24407) Pulsar connector chinese document link to Pulsar document location incorrectly.
[ https://issues.apache.org/jira/browse/FLINK-24407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aiden Gong updated FLINK-24407: Description: (was: !image-2021-09-30-11-13-19-031.png!)
[jira] [Updated] (FLINK-24407) Pulsar connector chinese document link to Pulsar document location incorrectly.
[ https://issues.apache.org/jira/browse/FLINK-24407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aiden Gong updated FLINK-24407: Attachment: 企业微信截图_16329715717678.png
[GitHub] [flink] flinkbot edited a comment on pull request #17392: [release][docs] clean up release notes
flinkbot edited a comment on pull request #17392: URL: https://github.com/apache/flink/pull/17392#issuecomment-930668133 ## CI report: * b16b4227e25646c061c6448d534c96c14534c6e2 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24643) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #17391: emphasised the changes of dependency names for the 1.14 release
flinkbot edited a comment on pull request #17391: URL: https://github.com/apache/flink/pull/17391#issuecomment-930559841 ## CI report: * bf9d7a6fe7c0aca4378681587c5fd41316894429 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24642) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[jira] [Created] (FLINK-24406) JSONKeyValueDeserializationSchema code bug
胡剑 created FLINK-24406: Summary: JSONKeyValueDeserializationSchema code bug Key: FLINK-24406 URL: https://issues.apache.org/jira/browse/FLINK-24406 Project: Flink Issue Type: Bug Components: Connectors / Kafka Reporter: 胡剑

If record.key() or record.value() is an empty array, an exception is thrown here, causing the entire program to hang. I think it should be fixed like this:

if (record.key() != null && record.key().length > 0) {
    node.set("key", mapper.readValue(record.key(), JsonNode.class));
}
if (record.value() != null && record.value().length > 0) {
    node.set("value", mapper.readValue(record.value(), JsonNode.class));
}
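The fix proposed in the report reduces to a null-and-empty guard on the raw key/value byte arrays before handing them to the JSON mapper. A standalone sketch of just that guard, with hypothetical names (not the actual Flink or Kafka classes):

```java
// Hypothetical standalone illustration of the proposed guard: a field is
// parsed only when its byte array is non-null AND non-empty, so empty
// Kafka record keys/values are skipped instead of crashing the JSON parser.
public class EmptyRecordGuard {
    static boolean shouldParse(byte[] field) {
        return field != null && field.length > 0;
    }

    public static void main(String[] args) {
        System.out.println(shouldParse(null));            // false: missing field
        System.out.println(shouldParse(new byte[0]));     // false: the empty-array case from the report
        System.out.println(shouldParse("{}".getBytes())); // true: real payload, safe to parse
    }
}
```

With this predicate in place, `mapper.readValue(...)` is only ever called on byte arrays that can actually contain JSON.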
[jira] [Commented] (FLINK-24390) Python 'build_wheels mac' fails on azure
[ https://issues.apache.org/jira/browse/FLINK-24390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17422493#comment-17422493 ] Xintong Song commented on FLINK-24390: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24637&view=logs&j=33dd8067-7758-552f-a1cf-a8b8ff0e44cd&t=789348ee-cf3e-5c4b-7c78-355970e5f360&l=27866 > Python 'build_wheels mac' fails on azure > --- > > Key: FLINK-24390 > URL: https://issues.apache.org/jira/browse/FLINK-24390 > Project: Flink > Issue Type: Bug > Components: API / Python, Build System > Affects Versions: 1.12.5 > Reporter: Xintong Song > Priority: Blocker > Fix For: 1.12.6 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24547&view=logs&j=33dd8067-7758-552f-a1cf-a8b8ff0e44cd&t=789348ee-cf3e-5c4b-7c78-355970e5f360&l=17982
[jira] [Commented] (FLINK-18634) FlinkKafkaProducerITCase.testRecoverCommittedTransaction failed with "Timeout expired after 60000milliseconds while awaiting InitProducerId"
[ https://issues.apache.org/jira/browse/FLINK-18634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17422492#comment-17422492 ] Xintong Song commented on FLINK-18634: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24637&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5&l=6232 > FlinkKafkaProducerITCase.testRecoverCommittedTransaction failed with "Timeout > expired after 60000milliseconds while awaiting InitProducerId" > --- > > Key: FLINK-18634 > URL: https://issues.apache.org/jira/browse/FLINK-18634 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka, Tests > Affects Versions: 1.11.0, 1.12.0, 1.13.0, 1.14.0, 1.15.0 > Reporter: Dian Fu > Assignee: Arvid Heise > Priority: Major > Labels: auto-unassigned, test-stability > Fix For: 1.15.0, 1.14.1 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4590&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=684b1416-4c17-504e-d5ab-97ee44e08a20 > {code} > 2020-07-17T11:43:47.9693015Z [ERROR] Tests run: 12, Failures: 0, Errors: 1, > Skipped: 0, Time elapsed: 269.399 s <<< FAILURE! - in > org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase > 2020-07-17T11:43:47.9693862Z [ERROR] > testRecoverCommittedTransaction(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase) > Time elapsed: 60.679 s <<< ERROR! > 2020-07-17T11:43:47.9694737Z org.apache.kafka.common.errors.TimeoutException: > org.apache.kafka.common.errors.TimeoutException: Timeout expired after > 60000milliseconds while awaiting InitProducerId > 2020-07-17T11:43:47.9695376Z Caused by: > org.apache.kafka.common.errors.TimeoutException: Timeout expired after > 60000milliseconds while awaiting InitProducerId > {code}
[jira] [Commented] (FLINK-23405) FlinkKafkaProducerMigrationOperatorTest.testRestoreProducer fails due to BindException
[ https://issues.apache.org/jira/browse/FLINK-23405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17422491#comment-17422491 ] Xintong Song commented on FLINK-23405: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24598&view=logs&j=1fc6e7bf-633c-5081-c32a-9dea24b05730&t=576aba0a-d787-51b6-6a92-cf233f360582&l=7132 > FlinkKafkaProducerMigrationOperatorTest.testRestoreProducer fails due to > BindException > --- > > Key: FLINK-23405 > URL: https://issues.apache.org/jira/browse/FLINK-23405 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka > Affects Versions: 1.14.0 > Reporter: Xintong Song > Priority: Major > Labels: test-stability > Fix For: 1.15.0, 1.14.1 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20523&view=logs&j=72d4811f-9f0d-5fd0-014a-0bc26b72b642&t=c1d93a6a-ba91-515d-3196-2ee8019fbda7&l=6851 > {code} > Jul 15 21:23:44 [ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, > Time elapsed: 102.561 s <<< FAILURE! - in > org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerMigrationOperatorTest > Jul 15 21:23:44 [ERROR] testRestoreProducer[Migration Savepoint: > 1.10](org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerMigrationOperatorTest) > Time elapsed: 2.015 s <<< ERROR!
> Jul 15 21:23:44 java.net.BindException: Address already in use > Jul 15 21:23:44 at sun.nio.ch.Net.bind0(Native Method) > Jul 15 21:23:44 at sun.nio.ch.Net.bind(Net.java:461) > Jul 15 21:23:44 at sun.nio.ch.Net.bind(Net.java:453) > Jul 15 21:23:44 at > sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:222) > Jul 15 21:23:44 at > sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:85) > Jul 15 21:23:44 at > sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:78) > Jul 15 21:23:44 at > org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:90) > Jul 15 21:23:44 at > org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:120) > Jul 15 21:23:44 at > org.apache.curator.test.TestingZooKeeperMain.runFromConfig(TestingZooKeeperMain.java:93) > Jul 15 21:23:44 at > org.apache.curator.test.TestingZooKeeperServer$1.run(TestingZooKeeperServer.java:148) > Jul 15 21:23:44 at java.lang.Thread.run(Thread.java:748) > {code}
[jira] [Updated] (FLINK-21834) org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset fail
[ https://issues.apache.org/jira/browse/FLINK-21834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xintong Song updated FLINK-21834: Labels: test-stability (was: auto-deprioritized-major test-stability) > org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset > fail > --- > > Key: FLINK-21834 > URL: https://issues.apache.org/jira/browse/FLINK-21834 > Project: Flink > Issue Type: Bug > Components: FileSystems > Affects Versions: 1.12.2, 1.13.2 > Reporter: Guowei Ma > Priority: Major > Labels: test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14847&view=logs&j=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89&t=5d6e4255-0ea8-5e2a-f52c-c881b7872361&l=10893 > Maybe we need print what the exception is when `recover` is called. > {code:java} > java.lang.AssertionError > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.fail(Assert.java:95) > at > org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset(AbstractRecoverableWriterTest.java:381) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) >
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) > at > org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {code}
[jira] [Commented] (FLINK-21834) org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset fail
[ https://issues.apache.org/jira/browse/FLINK-21834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17422490#comment-17422490 ] Xintong Song commented on FLINK-21834: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24597&view=logs&j=e9af9cde-9a65-5281-a58e-2c8511d36983&t=b6c4efed-9c7d-55ea-03a9-9bd7d5b08e4c&l=9923
[jira] [Updated] (FLINK-21834) org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset fail
[ https://issues.apache.org/jira/browse/FLINK-21834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xintong Song updated FLINK-21834: Priority: Major (was: Minor)
[jira] [Updated] (FLINK-21834) org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset fail
[ https://issues.apache.org/jira/browse/FLINK-21834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xintong Song updated FLINK-21834: Affects Version/s: 1.13.2
[jira] [Created] (FLINK-24405) KafkaWriterITCase.testLingeringTransaction fails on azure
Xintong Song created FLINK-24405:

Summary: KafkaWriterITCase.testLingeringTransaction fails on azure
Key: FLINK-24405
URL: https://issues.apache.org/jira/browse/FLINK-24405
Project: Flink
Issue Type: Bug
Components: Connectors / Kafka
Affects Versions: 1.15.0
Reporter: Xintong Song
Fix For: 1.15.0

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24595=logs=72d4811f-9f0d-5fd0-014a-0bc26b72b642=e424005a-b16e-540f-196d-da062cc19bdf=7344

{code}
Sep 28 22:36:05 [ERROR] Tests run: 13, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 35.773 s <<< FAILURE! - in org.apache.flink.connector.kafka.sink.KafkaWriterITCase
Sep 28 22:36:05 [ERROR] testLingeringTransaction Time elapsed: 3.07 s <<< FAILURE!
Sep 28 22:36:05 java.lang.AssertionError:
Sep 28 22:36:05 Expected: a collection with size <1>
Sep 28 22:36:05      but: collection size was <0>
Sep 28 22:36:05 	at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
Sep 28 22:36:05 	at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
Sep 28 22:36:05 	at org.apache.flink.connector.kafka.sink.KafkaWriterITCase.testLingeringTransaction(KafkaWriterITCase.java:213)
{code}
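A failure of the shape `Expected: a collection with size <1> but: collection size was <0>` is typical of asserting once on an asynchronous result. One common remedy, shown here only as an illustration and not necessarily the fix Flink will adopt, is to poll with a deadline before asserting:

```java
import java.util.Collection;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Supplier;

public class AwaitSizeSketch {

    // Re-check the collection until it reaches the expected size or the
    // deadline passes, instead of asserting on a single snapshot.
    static boolean awaitSize(Supplier<? extends Collection<?>> source, int expected, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (source.get().size() != expected) {
            if (System.currentTimeMillis() >= deadline) {
                return false;
            }
            Thread.sleep(10);
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        List<Object> transactions = new CopyOnWriteArrayList<>();
        // Simulate a producer that delivers slightly after the assertion point.
        new Thread(() -> {
            try {
                Thread.sleep(50);
            } catch (InterruptedException ignored) {
            }
            transactions.add(new Object());
        }).start();
        System.out.println(awaitSize(() -> transactions, 1, 2_000)); // true
    }
}
```

In a real test the boolean result would feed an assertion with a descriptive message; libraries such as Awaitility package the same pattern.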
[jira] [Updated] (FLINK-24390) Python 'build_wheels mac' fails on azure
[ https://issues.apache.org/jira/browse/FLINK-24390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xintong Song updated FLINK-24390: - Priority: Blocker (was: Major) > Python 'build_wheels mac' fails on azure > > > Key: FLINK-24390 > URL: https://issues.apache.org/jira/browse/FLINK-24390 > Project: Flink > Issue Type: Bug > Components: API / Python, Build System >Affects Versions: 1.12.5 >Reporter: Xintong Song >Priority: Blocker > Fix For: 1.12.6 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24547=logs=33dd8067-7758-552f-a1cf-a8b8ff0e44cd=789348ee-cf3e-5c4b-7c78-355970e5f360=17982
[jira] [Commented] (FLINK-24390) Python 'build_wheels mac' fails on azure
[ https://issues.apache.org/jira/browse/FLINK-24390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17422489#comment-17422489 ] Xintong Song commented on FLINK-24390: -- https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24596=logs=33dd8067-7758-552f-a1cf-a8b8ff0e44cd=789348ee-cf3e-5c4b-7c78-355970e5f360=17992 > Python 'build_wheels mac' fails on azure > > > Key: FLINK-24390 > URL: https://issues.apache.org/jira/browse/FLINK-24390 > Project: Flink > Issue Type: Bug > Components: API / Python, Build System >Affects Versions: 1.12.5 >Reporter: Xintong Song >Priority: Major > Fix For: 1.12.6 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24547=logs=33dd8067-7758-552f-a1cf-a8b8ff0e44cd=789348ee-cf3e-5c4b-7c78-355970e5f360=17982
[GitHub] [flink] flinkbot edited a comment on pull request #17392: [release][docs] clean up release notes
flinkbot edited a comment on pull request #17392: URL: https://github.com/apache/flink/pull/17392#issuecomment-930668133 ## CI report: * b16b4227e25646c061c6448d534c96c14534c6e2 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24643) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #17392: [release][docs] clean up release notes
flinkbot commented on pull request #17392: URL: https://github.com/apache/flink/pull/17392#issuecomment-930668133 ## CI report: * b16b4227e25646c061c6448d534c96c14534c6e2 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot commented on pull request #17392: [release][docs] clean up release notes
flinkbot commented on pull request #17392: URL: https://github.com/apache/flink/pull/17392#issuecomment-930657057 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit b16b4227e25646c061c6448d534c96c14534c6e2 (Thu Sep 30 00:55:04 UTC 2021) **Warnings:** * Documentation files were touched, but no `docs/content.zh/` files: Update Chinese documentation or file Jira ticket. * **Invalid pull request title: No valid Jira ID provided** Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] infoverload opened a new pull request #17392: [release][docs] clean up release notes
infoverload opened a new pull request #17392: URL: https://github.com/apache/flink/pull/17392 ## What is the purpose of the change Clean up release notes ## Brief changelog - fixed typos, reworded some things, enforced consistency, added links, etc ## Verifying this change This change is a trivial rework / code cleanup without any test coverage. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / **no**) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**) - The serializers: (yes / **no** / don't know) - The runtime per-record code paths (performance sensitive): (yes / **no** / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't know) - The S3 file system connector: (yes / **no** / don't know) ## Documentation - Does this pull request introduce a new feature? (yes / **no**) - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
[GitHub] [flink] flinkbot commented on pull request #17379: Update kafka.md
flinkbot commented on pull request #17379: URL: https://github.com/apache/flink/pull/17379#issuecomment-929793876
[GitHub] [flink] YuriGusev commented on pull request #17382: Flink 16504 dynamodb connector backpressure
YuriGusev commented on pull request #17382: URL: https://github.com/apache/flink/pull/17382#issuecomment-930078707 Created by mistake, dropping it.
[GitHub] [flink] flinkbot edited a comment on pull request #17363: [FLINK-24324][connectors/elasticsearch] Add Elasticsearch 7 sink based on FLIP-143
flinkbot edited a comment on pull request #17363: URL: https://github.com/apache/flink/pull/17363#issuecomment-927758743
[GitHub] [flink] wangyang0918 merged pull request #17371: [BP-1.13][FLINK-24380][k8s] Terminate the pod if it failed
wangyang0918 merged pull request #17371: URL: https://github.com/apache/flink/pull/17371
[GitHub] [flink] flinkbot commented on pull request #17383: [hotfix][build] Add java8 profile
flinkbot commented on pull request #17383: URL: https://github.com/apache/flink/pull/17383#issuecomment-930099288
[GitHub] [flink] flinkbot edited a comment on pull request #17365: [hotfix][web]Modify the configuration of prettier
flinkbot edited a comment on pull request #17365: URL: https://github.com/apache/flink/pull/17365#issuecomment-928705859
[GitHub] [flink] flinkbot commented on pull request #17381: [FLINK-24399][table-common] Make handling of DataType less verbose for connector developers
flinkbot commented on pull request #17381: URL: https://github.com/apache/flink/pull/17381#issuecomment-930043917
[GitHub] [flink] leo65535 commented on pull request #17383: [hotfix][build] Add java8 profile
leo65535 commented on pull request #17383: URL: https://github.com/apache/flink/pull/17383#issuecomment-930096545 cc @wuchong
[GitHub] [flink] wangyang0918 merged pull request #17361: [FLINK-24380][k8s] Terminate the pod if it failed
wangyang0918 merged pull request #17361: URL: https://github.com/apache/flink/pull/17361
[GitHub] [flink] flinkbot commented on pull request #17385: [FLINK-24367][tests] Add FallbackAkkaRpcSystemLoader
flinkbot commented on pull request #17385: URL: https://github.com/apache/flink/pull/17385#issuecomment-930154427
[GitHub] [flink-web] dawidwys commented on pull request #468: Release blog post 1.14 draft iteration
dawidwys commented on pull request #468: URL: https://github.com/apache/flink-web/pull/468#issuecomment-930059168 Thanks for the work! Merging.
[GitHub] [flink] xintongsong commented on pull request #17372: [BP-1.13][FLINK-24377][runtime] Fix TM potentially not released after heartbeat timeout.
xintongsong commented on pull request #17372: URL: https://github.com/apache/flink/pull/17372#issuecomment-929750085 Merged: 1057f4171645b48c7743571e9f90e20b94a64900
[GitHub] [flink] MartijnVisser commented on pull request #14544: [FLINK-20845] Drop Scala 2.11 support
MartijnVisser commented on pull request #14544: URL: https://github.com/apache/flink/pull/14544#issuecomment-930028688
[GitHub] [flink] NickBurkard commented on a change in pull request #14544: [FLINK-20845] Drop Scala 2.11 support
NickBurkard commented on a change in pull request #14544: URL: https://github.com/apache/flink/pull/14544#discussion_r718591390 ## File path: flink-rpc/flink-rpc-akka/pom.xml ## @@ -69,6 +69,11 @@ under the License. scala-compiler compile + + org.scala-lang.modules + scala-parser-combinators_${scala.binary.version} Review comment: > > > please move this down into the `` section; that way we don't change the structure of the dependency tree, but just modify the version. (i.e., the dependency tree still shows where this dependency came from) Done!
[GitHub] [flink] flinkbot commented on pull request #17388: [release] Make dependency names changes more prominent in the release notes
flinkbot commented on pull request #17388: URL: https://github.com/apache/flink/pull/17388#issuecomment-930191678 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit c11ee278896d32b847e7a32855c7fd46d6dfe8b8 (Wed Sep 29 13:46:16 UTC 2021) **Warnings:** * **Invalid pull request title: No valid Jira ID provided** Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] sjwiesman closed pull request #17356: [FLINK-23313][docs] Reintroduce temporal table function documentation
sjwiesman closed pull request #17356: URL: https://github.com/apache/flink/pull/17356
[GitHub] [flink] flinkbot edited a comment on pull request #17316: [FLINK-24269][Runtime / Checkpointing] Rename methods around final ch…
flinkbot edited a comment on pull request #17316: URL: https://github.com/apache/flink/pull/17316#issuecomment-921909984
[GitHub] [flink] flinkbot edited a comment on pull request #17383: [hotfix][build] Add java8 profile
flinkbot edited a comment on pull request #17383: URL: https://github.com/apache/flink/pull/17383#issuecomment-930128798
[GitHub] [flink] leonardBang commented on pull request #17377: [hotfix][docs] Move out the time zone page from streaming concepts section
leonardBang commented on pull request #17377: URL: https://github.com/apache/flink/pull/17377#issuecomment-929697510 I'd appreciate it if you could help merge this, @sjwiesman; I don't have merge authority yet.
[GitHub] [flink] ljdavns commented on pull request #17325: [FLINK-24337] fix WEB UI build failure on windows
ljdavns commented on pull request #17325: URL: https://github.com/apache/flink/pull/17325#issuecomment-929887095 > The PR title / commit message seem to reference the wrong JIRA issue? Sorry, it's [FLINK-24337]
[GitHub] [flink] dawidwys closed pull request #17388: [release] Make dependency names changes more prominent in the release notes
dawidwys closed pull request #17388: URL: https://github.com/apache/flink/pull/17388
[GitHub] [flink] flinkbot edited a comment on pull request #17381: [FLINK-24399][table-common] Make handling of DataType less verbose for connector developers
flinkbot edited a comment on pull request #17381: URL: https://github.com/apache/flink/pull/17381#issuecomment-930061717
[GitHub] [flink] flinkbot edited a comment on pull request #17391: emphasised the changes of dependency names for the 1.14 release
flinkbot edited a comment on pull request #17391: URL: https://github.com/apache/flink/pull/17391#issuecomment-930559841 ## CI report: * bf9d7a6fe7c0aca4378681587c5fd41316894429 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24642) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] jherico commented on a change in pull request #17360: [FLINK-24379][Formats] Add support for Glue schema registry in Table API
jherico commented on a change in pull request #17360: URL: https://github.com/apache/flink/pull/17360#discussion_r718067814 ## File path: docs/data/sql_connectors.yml ## @@ -48,6 +48,12 @@ avro-confluent: category: format sql_url: https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-avro-confluent-registry/$version/flink-sql-avro-confluent-registry-$version.jar +avro-glue: +name: Avro Schema Registry Review comment: fixed ## File path: docs/content/docs/connectors/table/formats/avro-glue.md ## @@ -0,0 +1,191 @@ +--- +title: Confluent Avro Review comment: fixed ## File path: docs/content/docs/connectors/table/formats/avro-glue.md ## @@ -0,0 +1,191 @@ +--- +title: Confluent Avro +weight: 4 +type: docs +aliases: + - /dev/table/connectors/formats/avro-confluent.html +--- + + +# Confluent Avro Format Review comment: fixed ## File path: flink-formats/flink-avro-glue-schema-registry/src/main/java/org/apache/flink/formats/avro/glue/schema/registry/GlueSchemaRegistryAvroFormatFactory.java ## @@ -0,0 +1,162 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + + package org.apache.flink.formats.avro.glue.schema.registry; + +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.AUTO_REGISTRATION; +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.AWS_REGION; +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.CACHE_SIZE; +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.CACHE_TTL_MS; +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.COMPATIBILITY; +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.COMPRESSION_TYPE; +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.ENDPOINT; +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.RECORD_TYPE; +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.REGISTRY_NAME; +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.SCHEMA_REGISTRY_SUBJECT; + +import java.util.HashMap; +import java.util.HashSet; +import java.util.Map; +import java.util.Set; + +import com.amazonaws.services.schemaregistry.utils.AWSSchemaRegistryConstants; + +import org.apache.flink.annotation.Internal; +import org.apache.flink.api.common.serialization.DeserializationSchema; +import org.apache.flink.api.common.serialization.SerializationSchema; +import org.apache.flink.api.common.typeinfo.TypeInformation; +import org.apache.flink.configuration.ConfigOption; +import org.apache.flink.configuration.ReadableConfig; +import org.apache.flink.formats.avro.AvroRowDataDeserializationSchema; +import org.apache.flink.formats.avro.AvroRowDataSerializationSchema; +import org.apache.flink.formats.avro.AvroToRowDataConverters; +import org.apache.flink.formats.avro.RowDataToAvroConverters; +import org.apache.flink.formats.avro.typeutils.AvroSchemaConverter; +import 
org.apache.flink.table.connector.ChangelogMode; +import org.apache.flink.table.connector.format.DecodingFormat; +import org.apache.flink.table.connector.format.EncodingFormat; +import org.apache.flink.table.connector.sink.DynamicTableSink; +import org.apache.flink.table.connector.source.DynamicTableSource; +import org.apache.flink.table.data.RowData; +import org.apache.flink.table.factories.DeserializationFormatFactory; +import org.apache.flink.table.factories.DynamicTableFactory; +import org.apache.flink.table.factories.FactoryUtil; +import org.apache.flink.table.factories.SerializationFormatFactory; +import org.apache.flink.table.types.DataType; +import org.apache.flink.table.types.logical.RowType; + +/** + * Table format factory for providing configured instances of AWS Glue Schema + * Registry Avro to RowData {@link SerializationSchema} and + * {@link DeserializationSchema}. + */ +@Internal +public class GlueSchemaRegistryAvroFormatFactory implements DeserializationFormatFactory, SerializationFormatFactory { +
[GitHub] [flink] matriv commented on a change in pull request #17343: [hotfix][docs] Various doc enhancements around filesystem table connector and json
matriv commented on a change in pull request #17343: URL: https://github.com/apache/flink/pull/17343#discussion_r718258195 ## File path: docs/content/docs/connectors/table/filesystem.md ## @@ -102,6 +102,12 @@ The file system connector supports multiple formats: - Canal-JSON: [canal-json]({{< ref "docs/connectors/table/formats/canal" >}}). - Raw: [raw]({{< ref "docs/connectors/table/formats/raw" >}}). +## Source + +The file system connector can be used to read single files or entire directories into a single table. + +When using a directory as the source path, there is no defined order of ingestion for the files inside the directory. Review comment: Looks good, thank you!!
[GitHub] [flink] flinkbot edited a comment on pull request #17360: [FLINK-24379][Formats] Add support for Glue schema registry in Table API
flinkbot edited a comment on pull request #17360: URL: https://github.com/apache/flink/pull/17360#issuecomment-927544424
[GitHub] [flink] flinkbot edited a comment on pull request #17371: [BP-1.13][FLINK-24380][k8s] Terminate the pod if it failed
flinkbot edited a comment on pull request #17371: URL: https://github.com/apache/flink/pull/17371#issuecomment-929030407
[GitHub] [flink] MartijnVisser commented on a change in pull request #17360: [FLINK-24379][Formats] Add support for Glue schema registry in Table API
MartijnVisser commented on a change in pull request #17360: URL: https://github.com/apache/flink/pull/17360#discussion_r718315208 ## File path: docs/content/docs/connectors/table/formats/avro-glue.md ## @@ -0,0 +1,191 @@ +--- +title: AWS Glue Avro +weight: 4 +type: docs +aliases: + - /dev/table/connectors/formats/avro-glue.html +--- + + +# AWS Glue Avro Format + +{{< label "Format: Serialization Schema" >}} +{{< label "Format: Deserialization Schema" >}} + +The Glue Schema Registry (``avro-glue``) format allows you to read records that were serialized by the ``com.amazonaws.services.schemaregistry.serializers.avro.AWSKafkaAvroSerializer`` and to write records that can in turn be read by the ``com.amazonaws.services.schemaregistry.deserializers.avro.AWSKafkaAvroDeserializer``. These records have their schemas stored out-of-band in a configured registry provided by the AWS Glue Schema Registry [service](https://docs.aws.amazon.com/glue/latest/dg/schema-registry.html#schema-registry-schemas). + +When reading (deserializing) a record with this format the Avro writer schema is fetched from the configured AWS Glue Schema Registry based on the schema version id encoded in the record while the reader schema is inferred from table schema. + +When writing (serializing) a record with this format the Avro schema is inferred from the table schema and used to retrieve a schema id to be encoded with the data. The lookup is performed against the configured AWS Glue Schema Registry under the [value](https://docs.aws.amazon.com/glue/latest/dg/schema-registry.html#schema-registry-schemas) given in `avro-glue.schema-name`. + +The Avro Glue Schema Registry format can only be used in conjunction with the [Apache Kafka SQL connector]({{< ref "docs/connectors/table/kafka" >}}) or the [Upsert Kafka SQL Connector]({{< ref "docs/connectors/table/upsert-kafka" >}}). 
+ +Dependencies + + +{{< sql_download_table "avro-glue" >}} + +How to create tables with Avro-Glue format +-- + +Example of a table using raw UTF-8 string as Kafka key and Avro records registered in the Schema Registry as Kafka values: + +```sql +CREATE TABLE user_created ( + + -- one column mapped to the Kafka raw UTF-8 key + the_kafka_key STRING, + + -- a few columns mapped to the Avro fields of the Kafka value + id STRING, + name STRING, + email STRING + +) WITH ( + + 'connector' = 'kafka', + 'topic' = 'user_events_example1', + 'properties.bootstrap.servers' = 'localhost:9092', + + -- UTF-8 string as Kafka keys, using the 'the_kafka_key' table column + 'key.format' = 'raw', + 'key.fields' = 'the_kafka_key', + + 'value.format' = 'avro-glue', + 'value.avro-glue.region' = 'us-east-1', + 'value.avro-glue.registry.name' = 'my-schema-registry', + 'value.avro-glue.schema-name' = 'my-schema-name', + 'value.fields-include' = 'EXCEPT_KEY' +) +``` + +Format Options + + +Yes, these options have inconsistent naming conventions. No, I can't fix it. This is for consistency with the existing [AWS Glue client code](https://github.com/awslabs/aws-glue-schema-registry/blob/master/common/src/main/java/com/amazonaws/services/schemaregistry/utils/AWSSchemaRegistryConstants.java#L20). Review comment: I don't think this sentence should be in the docs?
[GitHub] [flink] xintongsong closed pull request #17372: [BP-1.13][FLINK-24377][runtime] Fix TM potentially not released after heartbeat timeout.
xintongsong closed pull request #17372: URL: https://github.com/apache/flink/pull/17372 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot commented on pull request #17386: Flink 16504 dynamodb connector rebased batchsize
flinkbot commented on pull request #17386: URL: https://github.com/apache/flink/pull/17386#issuecomment-930172897 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 217798078b47bbee1e89a06376b057750e7796dc (Wed Sep 29 13:24:45 UTC 2021) **Warnings:** * **5 pom.xml files were touched**: Check for build and licensing issues. * No documentation files were touched! Remember to keep the Flink docs up to date! * **Invalid pull request title: No valid Jira ID provided** Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. ## Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] sjwiesman commented on a change in pull request #17377: [hotfix][docs] Move out the time zone page from streaming concepts section
sjwiesman commented on a change in pull request #17377: URL: https://github.com/apache/flink/pull/17377#discussion_r717994901 ## File path: docs/content.zh/docs/dev/table/timezone.md ## @@ -1,6 +1,6 @@ --- title: "时区" -weight: 4 +weight: 86 Review comment: Can you make this 22 so it appears after the data types page in the sidebar?
[GitHub] [flink] flinkbot commented on pull request #17380: [hotfix][docs] Fix typo in sources doc
flinkbot commented on pull request #17380: URL: https://github.com/apache/flink/pull/17380#issuecomment-929965247
[GitHub] [flink-web] dawidwys closed pull request #468: Release blog post 1.14 draft iteration
dawidwys closed pull request #468: URL: https://github.com/apache/flink-web/pull/468
[GitHub] [flink] flinkbot edited a comment on pull request #17370: [BP-1.14][FLINK-24380][k8s] Terminate the pod if it failed
flinkbot edited a comment on pull request #17370: URL: https://github.com/apache/flink/pull/17370#issuecomment-929030266
[GitHub] [flink-docker] dawidwys merged pull request #93: Update Dockerfiles for 1.14.0 release
dawidwys merged pull request #93: URL: https://github.com/apache/flink-docker/pull/93
[GitHub] [flink] YuriGusev closed pull request #17382: Flink 16504 dynamodb connector backpressure
YuriGusev closed pull request #17382: URL: https://github.com/apache/flink/pull/17382
[GitHub] [flink] sjwiesman commented on pull request #17378: [hotfix][docs] Move out the time zone page from streaming concepts section #17377
sjwiesman commented on pull request #17378: URL: https://github.com/apache/flink/pull/17378#issuecomment-930558167 manually merged
[GitHub] [flink] flinkbot edited a comment on pull request #17384: [FLINK-24388][table] Modules can provide a table source/sink factory
flinkbot edited a comment on pull request #17384: URL: https://github.com/apache/flink/pull/17384#issuecomment-930147821
[GitHub] [flink] sjwiesman closed pull request #17378: [hotfix][docs] Move out the time zone page from streaming concepts section #17377
sjwiesman closed pull request #17378: URL: https://github.com/apache/flink/pull/17378
[GitHub] [flink] flinkbot edited a comment on pull request #17343: [hotfix][docs] Various doc enhancements around filesystem table connector and json
flinkbot edited a comment on pull request #17343: URL: https://github.com/apache/flink/pull/17343#issuecomment-925923764
[GitHub] [flink] hhkkxxx133 commented on a change in pull request #17360: [FLINK-24379][Formats] Add support for Glue schema registry in Table API
hhkkxxx133 commented on a change in pull request #17360: URL: https://github.com/apache/flink/pull/17360#discussion_r718011735 ## File path: docs/data/sql_connectors.yml ## @@ -48,6 +48,12 @@ avro-confluent: category: format sql_url: https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-avro-confluent-registry/$version/flink-sql-avro-confluent-registry-$version.jar +avro-glue: +name: Avro Schema Registry Review comment: Can the name be distinguished from Confluent? For example Avro Glue Schema Registry? ## File path: flink-formats/flink-avro-glue-schema-registry/src/main/java/org/apache/flink/formats/avro/glue/schema/registry/GlueSchemaRegistryAvroFormatFactory.java ## @@ -0,0 +1,162 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + + package org.apache.flink.formats.avro.glue.schema.registry; + +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.AUTO_REGISTRATION; +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.AWS_REGION; +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.CACHE_SIZE; +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.CACHE_TTL_MS; +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.COMPATIBILITY; +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.COMPRESSION_TYPE; +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.ENDPOINT; +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.RECORD_TYPE; +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.REGISTRY_NAME; +import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.SCHEMA_REGISTRY_SUBJECT; + +import java.util.HashMap; +import java.util.HashSet; +import java.util.Map; +import java.util.Set; + +import com.amazonaws.services.schemaregistry.utils.AWSSchemaRegistryConstants; + +import org.apache.flink.annotation.Internal; +import org.apache.flink.api.common.serialization.DeserializationSchema; +import org.apache.flink.api.common.serialization.SerializationSchema; +import org.apache.flink.api.common.typeinfo.TypeInformation; +import org.apache.flink.configuration.ConfigOption; +import org.apache.flink.configuration.ReadableConfig; +import org.apache.flink.formats.avro.AvroRowDataDeserializationSchema; +import org.apache.flink.formats.avro.AvroRowDataSerializationSchema; +import org.apache.flink.formats.avro.AvroToRowDataConverters; +import org.apache.flink.formats.avro.RowDataToAvroConverters; +import org.apache.flink.formats.avro.typeutils.AvroSchemaConverter; +import 
org.apache.flink.table.connector.ChangelogMode; +import org.apache.flink.table.connector.format.DecodingFormat; +import org.apache.flink.table.connector.format.EncodingFormat; +import org.apache.flink.table.connector.sink.DynamicTableSink; +import org.apache.flink.table.connector.source.DynamicTableSource; +import org.apache.flink.table.data.RowData; +import org.apache.flink.table.factories.DeserializationFormatFactory; +import org.apache.flink.table.factories.DynamicTableFactory; +import org.apache.flink.table.factories.FactoryUtil; +import org.apache.flink.table.factories.SerializationFormatFactory; +import org.apache.flink.table.types.DataType; +import org.apache.flink.table.types.logical.RowType; + +/** + * Table format factory for providing configured instances of AWS Glue Schema + * Registry Avro to RowData {@link SerializationSchema} and + * {@link DeserializationSchema}. + */ +@Internal +public class GlueSchemaRegistryAvroFormatFactory implements DeserializationFormatFactory, SerializationFormatFactory { +public static final String IDENTIFIER = "avro-glue"; + +@Override +public DecodingFormat<DeserializationSchema<RowData>> createDecodingFormat(DynamicTableFactory.Context context, +ReadableConfig formatOptions) { +FactoryUtil.validateFactoryOptions(this, formatOptions); +final Map<String, Object> configMap = buildConfigMap(formatOptions); + +return new DecodingFormat<DeserializationSchema<RowData>>() { +
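The factory quoted above validates its format options and then flattens them into a plain property map for the Glue Schema Registry client (the real code takes its key names from `AWSSchemaRegistryConstants`). A self-contained sketch of that option-flattening step, with hypothetical key names and no Flink or AWS dependency:

```java
import java.util.HashMap;
import java.util.Map;

public class GlueConfigSketch {

    /**
     * Flattens a few typed format options into the String-keyed property map a
     * registry client consumes. The key names here are illustrative only.
     */
    static Map<String, Object> buildConfigMap(
            String region, String registryName, boolean autoRegistration) {
        Map<String, Object> properties = new HashMap<>();
        properties.put("region", region);
        properties.put("registry.name", registryName);
        properties.put("schemaAutoRegistrationEnabled", autoRegistration);
        return properties;
    }

    public static void main(String[] args) {
        Map<String, Object> properties =
                buildConfigMap("us-east-1", "my-schema-registry", true);
        System.out.println(properties.get("registry.name")); // prints "my-schema-registry"
    }
}
```

The point of the pattern is that table options stay typed (`ReadableConfig`) up to the factory boundary and only become untyped properties where the third-party client requires them.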
[GitHub] [flink] twalthr commented on a change in pull request #17381: [FLINK-24399][table-common] Make handling of DataType less verbose for connector developers
twalthr commented on a change in pull request #17381: URL: https://github.com/apache/flink/pull/17381#discussion_r718413128 ## File path: flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/DataTypes.java ## @@ -96,6 +97,15 @@ @PublicEvolving public final class DataTypes { +/** + * Create {@link DataType} from a {@link LogicalType}. + * + * @return the {@link LogicalType} converted to a {@link DataType}. + */ +public static DataType of(LogicalType logicalType) { Review comment: please split every API change into a separate commit ## File path: flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/FieldsDataType.java ## @@ -57,24 +63,105 @@ public FieldsDataType(LogicalType logicalType, List<DataType> fieldDataTypes) { } @Override -public DataType notNull() { +public FieldsDataType notNull() { Review comment: this is API breaking, we should not do this, as we have seen recently on the dev@ ML ## File path: flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/utils/DataTypeUtils.java ## @@ -334,12 +334,15 @@ public static ResolvedSchema expandCompositeTypeToSchema(DataType dataType) { return Collections.singletonList(dataType); } -/** Returns the names of the flat representation in the first level of the given data type. */ +/** + * Returns the names of the flat representation of the given data type. In case of {@link + * StructuredType}, returns the deep list of field names of the structure. Review comment: the original JavaDoc was more correct: `in the first level`; we don't do it deep in the sense of going through children as well ## File path: flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/DataType.java ## @@ -86,10 +87,29 @@ public LogicalType getLogicalType() { return conversionClass; } +/** + * Returns the children of this data type, if any. Returns an empty list if this data type is + * atomic.
+ * + * @return the children data types + */ public abstract List<DataType> getChildren(); +/** + * Visit this data type. + * + * @return the result of the visit + */ public abstract <R> R accept(DataTypeVisitor<R> visitor); +/** + * Creates a {@link DataType} from this instance with internal data structures conversion + * classes. Review comment: give an example, e.g. `RowData` instead of `Row`, and mention that it also updates the nested fields ## File path: flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/DynamicTableFactory.java ## @@ -85,5 +88,23 @@ /** Whether the table is temporary. */ boolean isTemporary(); + +/** + * Shortcut for {@code getCatalogTable().getResolvedSchema().toPhysicalRowDataType()}. + * + * @see ResolvedSchema#toPhysicalRowDataType() + */ +default FieldsDataType getPhysicalRowDataType() { +return getCatalogTable().getResolvedSchema().toPhysicalRowDataType(); +} + +/** + * Shortcut for {@code getCatalogTable().getResolvedSchema().getPrimaryKeyFields()}. + * + * @see ResolvedSchema#getPrimaryKeyFields() + */ +default List<String> getPrimaryKeyFields() { Review comment: as mentioned in the JIRA issue, an `int[]` would be more useful. In general, we don't handle this nicely at the moment. Actually, according to SQL a column need not have a unique name. Positions might be more useful for connector developers. ## File path: flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/DataType.java ## @@ -86,10 +87,29 @@ public LogicalType getLogicalType() { return conversionClass; } +/** + * Returns the children of this data type, if any. Returns an empty list if this data type is + * atomic. + * + * @return the children data types + */ public abstract List<DataType> getChildren(); +/** + * Visit this data type. + * + * @return the result of the visit + */ public abstract <R> R accept(DataTypeVisitor<R> visitor); Review comment: we should avoid meaningless JavaDocs that confuse more than they help.
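The "API breaking" objection in the review above concerns narrowing the declared return type of an already-published method. A toy, self-contained illustration (class names are made up): the covariant override itself is legal Java and spares callers a cast, but retrofitting it onto a released class changes a method signature that external code was compiled against.

```java
public class CovariantReturnSketch {

    static class BaseType {
        public BaseType notNull() { return this; }
    }

    // Legal covariant override: the return type is narrowed to the subclass.
    static class FieldsType extends BaseType {
        @Override
        public FieldsType notNull() { return this; }
    }

    public static void main(String[] args) {
        // New callers no longer need a cast...
        FieldsType narrowed = new FieldsType().notNull();
        // ...but changing the result type of an already-released method is
        // binary-incompatible: code compiled against the old `BaseType notNull()`
        // signature fails to link, which is why the reviewer rejects the change.
        System.out.println(narrowed.getClass().getSimpleName()); // prints "FieldsType"
    }
}
```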
[GitHub] [flink] flinkbot edited a comment on pull request #17345: [FLINK-24227][connectors] FLIP-171: Added Kinesis Data Streams Sink i…
flinkbot edited a comment on pull request #17345: URL: https://github.com/apache/flink/pull/17345#issuecomment-926109717
[GitHub] [flink] flinkbot edited a comment on pull request #17387: [WIP][FLINK-23486][state] Changelog Backend metrics
flinkbot edited a comment on pull request #17387: URL: https://github.com/apache/flink/pull/17387#issuecomment-930245327
[GitHub] [flink] flinkbot commented on pull request #17391: emphasised the changes of dependency names for the 1.14 release
flinkbot commented on pull request #17391: URL: https://github.com/apache/flink/pull/17391#issuecomment-930531520
[GitHub] [flink] sjwiesman closed pull request #17376: [FLINK-23313][docs] Reintroduce temporal table function documentation
sjwiesman closed pull request #17376: URL: https://github.com/apache/flink/pull/17376
[GitHub] [flink] twalthr commented on pull request #17341: [FLINK-24394][test] Refactor BuiltInFunctions IT Tests
twalthr commented on pull request #17341: URL: https://github.com/apache/flink/pull/17341#issuecomment-930299563 @matriv I haven't merged this to 1.14 yet. We can do this on demand.
[GitHub] [flink] flinkbot edited a comment on pull request #17380: [hotfix][docs] Fix typo in sources doc
flinkbot edited a comment on pull request #17380: URL: https://github.com/apache/flink/pull/17380#issuecomment-929996988
[GitHub] [flink] slinkydeveloper commented on a change in pull request #17381: [FLINK-24399][table-common] Make handling of DataType less verbose for connector developers
slinkydeveloper commented on a change in pull request #17381: URL: https://github.com/apache/flink/pull/17381#discussion_r718426788 ## File path: flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/utils/DataTypeUtils.java ## @@ -334,12 +334,15 @@ public static ResolvedSchema expandCompositeTypeToSchema(DataType dataType) { return Collections.singletonList(dataType); } -/** Returns the names of the flat representation in the first level of the given data type. */ +/** + * Returns the names of the flat representation of the given data type. In case of {@link + * StructuredType}, returns the deep list of field names of the structure. Review comment: This particular method goes deep into a structured type: https://github.com/apache/flink/blob/276e847efbfe39a567a8033dd52ff9245aee9de1/flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/logical/utils/LogicalTypeChecks.java#L483 ## File path: flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/DynamicTableFactory.java ## @@ -85,5 +88,23 @@ /** Whether the table is temporary. */ boolean isTemporary(); + +/** + * Shortcut for {@code getCatalogTable().getResolvedSchema().toPhysicalRowDataType()}. + * + * @see ResolvedSchema#toPhysicalRowDataType() + */ +default FieldsDataType getPhysicalRowDataType() { +return getCatalogTable().getResolvedSchema().toPhysicalRowDataType(); +} + +/** + * Shortcut for {@code getCatalogTable().getResolvedSchema().getPrimaryKeyFields()}. + * + * @see ResolvedSchema#getPrimaryKeyFields() + */ +default List<String> getPrimaryKeyFields() { Review comment: Gotcha, what about naming it `getPrimaryKeyIndexes` or `getPrimaryKeyFieldsIndexes`?
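The shortcut under discussion, whether named `getPrimaryKeyFields` or `getPrimaryKeyIndexes`, boils down to resolving key column names to positions in the schema. A minimal self-contained sketch of that resolution (toy types, no Flink dependency):

```java
import java.util.Arrays;
import java.util.List;

public class PrimaryKeyIndexSketch {

    /** Resolves each primary-key column name to its position in the column list. */
    static int[] primaryKeyIndexes(List<String> columns, List<String> primaryKey) {
        return primaryKey.stream().mapToInt(columns::indexOf).toArray();
    }

    public static void main(String[] args) {
        List<String> columns = Arrays.asList("id", "name", "email");
        int[] indexes = primaryKeyIndexes(columns, Arrays.asList("id", "email"));
        System.out.println(Arrays.toString(indexes)); // prints "[0, 2]"
    }
}
```

Returning `int[]` sidesteps the name-uniqueness caveat raised in the thread, since positions stay unambiguous even when names are not.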
[GitHub] [flink] flinkbot commented on pull request #17382: Flink 16504 dynamodb connector backpressure
flinkbot commented on pull request #17382: URL: https://github.com/apache/flink/pull/17382#issuecomment-930081537 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 4791f1e5a25dc6a0f635d3a6e1e5965db7c8ded2 (Wed Sep 29 11:15:56 UTC 2021) **Warnings:** * **5 pom.xml files were touched**: Check for build and licensing issues. * No documentation files were touched! Remember to keep the Flink docs up to date! * **Invalid pull request title: No valid Jira ID provided** Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required. ## Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier
[GitHub] [flink] sjwiesman closed pull request #17377: [hotfix][docs] Move out the time zone page from streaming concepts section
sjwiesman closed pull request #17377: URL: https://github.com/apache/flink/pull/17377
[GitHub] [flink] nirtsruya closed pull request #17386: Flink 16504 dynamodb connector rebased batchsize
nirtsruya closed pull request #17386: URL: https://github.com/apache/flink/pull/17386
[GitHub] [flink] flinkbot commented on pull request #17384: [FLINK-24388][table] Modules can provide a table source/sink factory
flinkbot commented on pull request #17384: URL: https://github.com/apache/flink/pull/17384#issuecomment-930125873
[GitHub] [flink] iyupeng commented on pull request #17344: [FLINK-20895] [flink-table-planner] support local aggregate push down in table planner
iyupeng commented on pull request #17344: URL: https://github.com/apache/flink/pull/17344#issuecomment-929774236 @wuchong @godfreyhe Please take a look, thanks a lot.
[GitHub] [flink] leonardBang commented on pull request #17356: [FLINK-23313][docs] Reintroduce temporal table function documentation
leonardBang commented on pull request #17356: URL: https://github.com/apache/flink/pull/17356#issuecomment-929697815 That's great if you can help merge @sjwiesman, thanks.
[GitHub] [flink] flinkbot edited a comment on pull request #17385: [FLINK-24367][tests] Add FallbackAkkaRpcSystemLoader
flinkbot edited a comment on pull request #17385: URL: https://github.com/apache/flink/pull/17385#issuecomment-930178250
[GitHub] [flink] flinkbot edited a comment on pull request #17377: [hotfix][docs] Move out the time zone page from streaming concepts section
flinkbot edited a comment on pull request #17377: URL: https://github.com/apache/flink/pull/17377#issuecomment-929395259
[GitHub] [flink] brachi-wernick commented on pull request #16596: [FLINK-23495] [Connectors / Google Cloud PubSub] Make checkpoint optional for preview/staging mode
brachi-wernick commented on pull request #16596: URL: https://github.com/apache/flink/pull/16596#issuecomment-930464305 Any chance I could get a review on this? We discussed this direction in ticket FLINK-23495. I would also like to add this to https://github.com/apache/flink/pull/16598
[GitHub] [flink] Airblader commented on a change in pull request #17360: [FLINK-24379][Formats] Add support for Glue schema registry in Table API
Airblader commented on a change in pull request #17360: URL: https://github.com/apache/flink/pull/17360#discussion_r718326427

## File path: flink-formats/flink-avro-glue-schema-registry/src/main/java/org/apache/flink/formats/avro/glue/schema/registry/GlueSchemaRegistryAvroFormatFactory.java
## @@ -0,0 +1,162 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.avro.glue.schema.registry;
+
+import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.AUTO_REGISTRATION;
+import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.AWS_REGION;
+import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.CACHE_SIZE;
+import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.CACHE_TTL_MS;
+import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.COMPATIBILITY;
+import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.COMPRESSION_TYPE;
+import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.ENDPOINT;
+import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.RECORD_TYPE;
+import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.REGISTRY_NAME;
+import static org.apache.flink.formats.avro.glue.schema.registry.AvroGlueFormatOptions.SCHEMA_REGISTRY_SUBJECT;
+
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+import com.amazonaws.services.schemaregistry.utils.AWSSchemaRegistryConstants;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.serialization.DeserializationSchema;
+import org.apache.flink.api.common.serialization.SerializationSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ReadableConfig;
+import org.apache.flink.formats.avro.AvroRowDataDeserializationSchema;
+import org.apache.flink.formats.avro.AvroRowDataSerializationSchema;
+import org.apache.flink.formats.avro.AvroToRowDataConverters;
+import org.apache.flink.formats.avro.RowDataToAvroConverters;
+import org.apache.flink.formats.avro.typeutils.AvroSchemaConverter;
+import org.apache.flink.table.connector.ChangelogMode;
+import org.apache.flink.table.connector.format.DecodingFormat;
+import org.apache.flink.table.connector.format.EncodingFormat;
+import org.apache.flink.table.connector.sink.DynamicTableSink;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.factories.DeserializationFormatFactory;
+import org.apache.flink.table.factories.DynamicTableFactory;
+import org.apache.flink.table.factories.FactoryUtil;
+import org.apache.flink.table.factories.SerializationFormatFactory;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.logical.RowType;
+
+/**
+ * Table format factory for providing configured instances of AWS Glue Schema
+ * Registry Avro to RowData {@link SerializationSchema} and
+ * {@link DeserializationSchema}.
+ */
+@Internal
+public class GlueSchemaRegistryAvroFormatFactory
+        implements DeserializationFormatFactory, SerializationFormatFactory {
+
+    public static final String IDENTIFIER = "avro-glue";
+
+    @Override
+    public DecodingFormat<DeserializationSchema<RowData>> createDecodingFormat(
+            DynamicTableFactory.Context context, ReadableConfig formatOptions) {
+        FactoryUtil.validateFactoryOptions(this, formatOptions);
+        final Map<String, Object> configMap = buildConfigMap(formatOptions);
+
+        return new DecodingFormat<DeserializationSchema<RowData>>() {
+            @Override
+            public DeserializationSchema<RowData> createRuntimeDecoder(
+                    DynamicTableSource.Context context, DataType producedDataType) {
+                final RowType rowType = (RowType) producedDataType.getLogicalType();
+                final TypeInformation<RowData> rowDataTypeInfo =
+                        context.createTypeInformation(producedDataType);
+                return new AvroRowDataDeserializationSchema(
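Given the `IDENTIFIER` of `"avro-glue"` in the quoted factory, a table would presumably declare this format in its DDL once the PR lands. The following is only a sketch: the connector settings, topic, and the exact option keys (guessed from the `AvroGlueFormatOptions` constants imported above) are assumptions, not confirmed by the PR.

```sql
CREATE TABLE user_events (
    user_id STRING,
    event_type STRING
) WITH (
    'connector' = 'kafka',                      -- hypothetical source connector
    'topic' = 'user-events',                    -- hypothetical topic
    'format' = 'avro-glue',
    'avro-glue.aws.region' = 'us-east-1',       -- assumed key for AWS_REGION
    'avro-glue.registry.name' = 'my-registry'   -- assumed key for REGISTRY_NAME
);
```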
[GitHub] [flink-docker] dawidwys commented on pull request #92: Add GPG key for 1.14.0 release
dawidwys commented on pull request #92: URL: https://github.com/apache/flink-docker/pull/92#issuecomment-929931539 Sorry for pushing to dev-14 directly, I did it by mistake.
[GitHub] [flink] RocMarshal edited a comment on pull request #16962: [FLINK-15352][connector-jdbc] Develop MySQLCatalog to connect Flink with MySQL tables and ecosystem.
RocMarshal edited a comment on pull request #16962: URL: https://github.com/apache/flink/pull/16962#issuecomment-925446816 Hi @MartijnVisser @Airblader @twalthr @zhuzhurk @JingsongLi, I made some changes based on your suggestions. Could you help review it? Thank you so much for your attention.
[GitHub] [flink] flinkbot edited a comment on pull request #14916: [FLINK-21345][Table SQL / Planner] Fix BUG of Union All join Temporal…
flinkbot edited a comment on pull request #14916: URL: https://github.com/apache/flink/pull/14916#issuecomment-776614443
[GitHub] [flink] flinkbot edited a comment on pull request #16606: [FLINK-21357][runtime/statebackend]Periodic materialization for generalized incremental checkpoints
flinkbot edited a comment on pull request #16606: URL: https://github.com/apache/flink/pull/16606#issuecomment-887431748
[GitHub] [flink] lirui-apache closed pull request #17245: [FLINK-23316][Table SQL/Ecosystem] Add tests for custom PartitionCommitPolicy
lirui-apache closed pull request #17245: URL: https://github.com/apache/flink/pull/17245
[GitHub] [flink] flinkbot edited a comment on pull request #14544: [FLINK-20845] Drop Scala 2.11 support
flinkbot edited a comment on pull request #14544: URL: https://github.com/apache/flink/pull/14544#issuecomment-753633967
[GitHub] [flink] fapaul commented on a change in pull request #17363: [FLINK-24324][connectors/elasticsearch] Add Elasticsearch 7 sink based on FLIP-143
fapaul commented on a change in pull request #17363: URL: https://github.com/apache/flink/pull/17363#discussion_r718199624

## File path: flink-connectors/flink-connector-elasticsearch7/src/main/java/org/apache/flink/connector/elasticsearch/sink/ElasticsearchSinkBuilder.java
## @@ -0,0 +1,276 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.elasticsearch.sink;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase;
+import org.apache.flink.util.InstantiationUtil;
+
+import org.apache.http.HttpHost;
+
+import java.util.Arrays;
+import java.util.List;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Builder to construct {@link ElasticsearchSink}.
+ *
+ * The following example shows the minimum setup to create a ElasticsearchSink that submits
+ * actions on checkpoint.
+ *
+ * {@code
+ * Elasticsearch sink = Elasticsearch
+ *     .builder()
+ *     .setHosts(MY_ELASTICSEARCH_HOSTS)
+ *     .setProcessor(MY_ELASTICSEARCH_PROCESSOR)
+ *     .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
+ *     .build();
+ * }
+ *
+ * @param <IN> type of the records converted to Elasticsearch actions
+ */
+@PublicEvolving
+public class ElasticsearchSinkBuilder<IN> {
+
+    private int bulkFlushMaxActions = -1;
+    private int bulkFlushMaxMb = -1;
+    private long bulkFlushInterval = -1;
+    private ElasticsearchSinkBase.FlushBackoffType bulkFlushBackoffType;
+    private int bulkFlushBackoffRetries = -1;
+    private long bulkFlushBackOffDelay = -1;
+    private DeliveryGuarantee deliveryGuarantee = DeliveryGuarantee.NONE;
+    private List<HttpHost> hosts;
+    private ElasticsearchProcessor<? super IN> processor;
+    private String username;
+    private String password;
+    private String connectionPathPrefix;
+
+    ElasticsearchSinkBuilder() {}
+
+    /**
+     * Sets the hosts where the Elasticsearch cluster nodes are reachable.
+     *
+     * @param hosts http addresses describing the node locations
+     */
+    public ElasticsearchSinkBuilder<IN> setHosts(HttpHost... hosts) {
+        checkNotNull(hosts);
+        checkState(hosts.length > 0, "Hosts cannot be empty.");
+        this.hosts = Arrays.asList(hosts);
+        return this;
+    }
+
+    /**
+     * Sets the processor which is invoked on every record to convert it to Elasticsearch actions.
+     *
+     * @param processor to process records into Elasticsearch actions.
+     * @return {@link ElasticsearchSinkBuilder}
+     */
+    public <T extends IN> ElasticsearchSinkBuilder<T> setProcessor(
+            ElasticsearchProcessor<? super T> processor) {
+        checkNotNull(processor);
+        checkState(
+                InstantiationUtil.isSerializable(processor),
+                "The elasticsearch processor must be serializable.");
+        final ElasticsearchSinkBuilder<T> self = self();
+        self.processor = processor;
+        return self;
+    }
+
+    /**
+     * Sets the wanted the {@link DeliveryGuarantee}. The default delivery guarantee is {@link
+     * #deliveryGuarantee}.

Review comment: Good that we have done the same thing in the Kafka docs :)

## File path: flink-connectors/flink-connector-elasticsearch7/pom.xml
## @@ -92,6 +92,12 @@ under the License.
 	test

+

Review comment: Isn't it already part of the root pom by specifying the bom https://github.com/apache/flink/blob/4fe9f525a92319acc1e3434bebed601306f7a16f/pom.xml#L780 ?

## File path: flink-connectors/flink-connector-elasticsearch7/src/main/java/org/apache/flink/connector/elasticsearch/sink/BulkProcessorConfig.java
## @@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this
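For readers following the builder discussion, a hedged usage sketch of the API quoted in the diff: the record type, host address, and processor instance are assumptions (the diff does not show the `ElasticsearchProcessor` interface or the builder's public entry point, which the class javadoc calls `builder()`), so treat this as illustrative only and not runnable without the flink-connector-elasticsearch7 module.

```java
// Illustrative only: entry point per the class javadoc of the proposed builder;
// the actual factory method and generics may differ in the merged PR.
ElasticsearchSink<String> sink =
        ElasticsearchSink.<String>builder()
                .setHosts(new HttpHost("localhost", 9200))   // hypothetical cluster address
                .setProcessor(myProcessor)                   // hypothetical ElasticsearchProcessor instance
                .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
                .build();
```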
[GitHub] [flink-docker] dawidwys merged pull request #92: Add GPG key for 1.14.0 release
dawidwys merged pull request #92: URL: https://github.com/apache/flink-docker/pull/92
[GitHub] [flink] twalthr closed pull request #17341: [FLINK-24394][test] Refactor BuiltInFunctions IT Tests
twalthr closed pull request #17341: URL: https://github.com/apache/flink/pull/17341
[GitHub] [flink] flinkbot edited a comment on pull request #17378: [hotfix][docs] Move out the time zone page from streaming concepts section #17377
flinkbot edited a comment on pull request #17378: URL: https://github.com/apache/flink/pull/17378#issuecomment-929395351
[GitHub] [flink] flinkbot edited a comment on pull request #17379: Update kafka.md
flinkbot edited a comment on pull request #17379: URL: https://github.com/apache/flink/pull/17379#issuecomment-929802829
[GitHub] [flink] ljdavns edited a comment on pull request #17325: [FLINK-24337] fix WEB UI build failure on windows
ljdavns edited a comment on pull request #17325: URL: https://github.com/apache/flink/pull/17325#issuecomment-929887095

> The PR title / commit message seem to reference the wrong JIRA issue?

Sorry, it's [FLINK-24337]
[GitHub] [flink] NickBurkard commented on pull request #14544: [FLINK-20845] Drop Scala 2.11 support
NickBurkard commented on pull request #14544: URL: https://github.com/apache/flink/pull/14544#issuecomment-930154870 I made the remaining changes @zentol, let me know if there's anything else.
[GitHub] [flink] zentol commented on a change in pull request #14544: [FLINK-20845] Drop Scala 2.11 support
zentol commented on a change in pull request #14544: URL: https://github.com/apache/flink/pull/14544#discussion_r718431519

## File path: flink-dist/src/main/resources/META-INF/NOTICE
## @@ -29,11 +29,11 @@ See bundled license files for details.
 The following dependencies all share the same BSD license which you find under licenses/LICENSE.scala.

-- org.scala-lang:scala-compiler:2.11.12
-- org.scala-lang:scala-library:2.11.12
-- org.scala-lang:scala-reflect:2.11.12
-- org.scala-lang.modules:scala-parser-combinators_2.11:1.1.1
-- org.scala-lang.modules:scala-xml_2.11:1.0.5
+- org.scala-lang:scala-compiler:2.12.7
+- org.scala-lang:scala-library:2.12.7
+- org.scala-lang:scala-reflect:2.12.7
+- org.scala-lang.modules:scala-parser-combinators_2.12:1.1.1
+- org.scala-lang.modules:scala-xml_2.12:1.0.5

Review comment:
```suggestion
- org.scala-lang.modules:scala-xml_2.12:1.0.6
```
combinators is no longer a transitive dependency, and the correct scala-xml version is 1.0.6.

## File path: flink-rpc/flink-rpc-akka/pom.xml
## @@ -69,6 +69,11 @@ under the License.
 	scala-compiler
 	compile

+	org.scala-lang.modules
+	scala-parser-combinators_${scala.binary.version}

Review comment: please move this down into the `<dependencyManagement>` section; that way we don't change the structure of the dependency tree, but just modify the version. (i.e., the dependency tree still shows where this dependency came from)
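For context on zentol's second comment: in Maven, an entry under `<dependencyManagement>` only pins the version of a dependency that already arrives transitively, without adding a new edge to the dependency tree, whereas a direct `<dependencies>` entry makes the module a first-class dependent. A sketch of the suggested shape for the flink-rpc-akka pom; the version value shown is illustrative, not taken from the PR.

```xml
<!-- Sketch: pin the transitive scala-parser-combinators version without
     declaring it as a direct dependency of flink-rpc-akka. -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.scala-lang.modules</groupId>
            <artifactId>scala-parser-combinators_${scala.binary.version}</artifactId>
            <version>1.1.2</version> <!-- illustrative version -->
        </dependency>
    </dependencies>
</dependencyManagement>
```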