[GitHub] [flink] JingsongLi closed pull request #11953: [FLINK-16975][doc] Add docs for FileSystem connector
JingsongLi closed pull request #11953: URL: https://github.com/apache/flink/pull/11953 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] Jiayi-Liao commented on a change in pull request #12261: [FLINK-17823][network] Resolve the race condition while releasing RemoteInputChannel
Jiayi-Liao commented on a change in pull request #12261: URL: https://github.com/apache/flink/pull/12261#discussion_r427749670

## File path: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java

@@ -181,6 +181,14 @@ void retriggerSubpartitionRequest(int subpartitionIndex) throws IOException { moreAvailable = !receivedBuffers.isEmpty(); } + if (next == null) {

Review comment: I guess it's theoretically impossible that we get a null buffer here with your changes in `releaseAllResources`, which seem to solve the two cases you mentioned in the description. So... is this check just for other, unknown bad cases?
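The pattern under review can be sketched in isolation. This is a hedged, simplified analogue, not Flink's actual `RemoteInputChannel` code: a consumer polls a queue under a lock while a concurrent release path may drain the same queue, and a defensive null check turns an unexpectedly empty poll into a clear failure instead of a `NullPointerException` further downstream. All class and method names here are illustrative.

```java
import java.util.ArrayDeque;

// Toy stand-in for a channel whose buffer queue may be drained concurrently
// by a release path. The null check mirrors the one discussed in the review:
// with a correct release path it should be unreachable, but it guards
// against unknown races with an explicit, diagnosable error.
public class PollGuardDemo {
    private final ArrayDeque<String> receivedBuffers = new ArrayDeque<>();

    public synchronized void enqueue(String buffer) {
        receivedBuffers.add(buffer);
    }

    /** Returns the next buffer, or throws if the queue was drained concurrently. */
    public synchronized String getNextBuffer() {
        String next = receivedBuffers.poll();
        if (next == null) {
            // Defensive branch: fail loudly rather than return null to callers.
            throw new IllegalStateException(
                "Buffer queue drained concurrently; channel was likely released");
        }
        return next;
    }
}
```

The reviewer's question boils down to whether this branch is pure belt-and-braces once the release path is fixed; keeping it costs little and makes any remaining race immediately visible.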
[GitHub] [flink] gyfora commented on pull request #12252: [FLINK-17802][kafka] Set offset commit only if group id is configured for new Kafka Table source
gyfora commented on pull request #12252: URL: https://github.com/apache/flink/pull/12252#issuecomment-631246336 Looks good +1
[GitHub] [flink] gyfora commented on pull request #12254: [FLINK-17802][kafka] Set offset commit only if group id is configured for new Kafka Table source
gyfora commented on pull request #12254: URL: https://github.com/apache/flink/pull/12254#issuecomment-631245945 Looks good +1
[GitHub] [flink] JingsongLi commented on a change in pull request #12262: [FLINK-17817][hotfix] Fix serializer thread safe problem in CollectSinkFunction
JingsongLi commented on a change in pull request #12262: URL: https://github.com/apache/flink/pull/12262#discussion_r427749438

## File path: flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectSinkFunction.java

@@ -330,6 +331,8 @@ public void setOperatorEventGateway(OperatorEventGateway eventGateway) { private DataOutputViewStreamWrapper outStream; private ServerThread() throws Exception { + // serializers are not thread safe + this.serializer = CollectSinkFunction.this.serializer.duplicate(); 

Review comment: It's too obscure. Can be `private ServerThread(TypeSerializer serializer)`.
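The refactor the reviewer suggests can be sketched as follows. This is a hedged illustration, not Flink's actual `CollectSinkFunction`: `MySerializer` stands in for Flink's `TypeSerializer`, and the point is purely API shape — duplicating at the call site and passing the copy through the constructor makes ownership explicit, instead of the thread reaching back into the enclosing instance.

```java
// Illustrative sketch of constructor injection for a per-thread serializer
// duplicate. All names are ours, not Flink's.
public class ServerThreadDemo {

    /** Minimal stand-in for a non-thread-safe serializer with duplicate(). */
    public static class MySerializer {
        public MySerializer duplicate() {
            return new MySerializer();
        }
    }

    public static class ServerThread extends Thread {
        private final MySerializer serializer;

        // The explicit parameter documents that this thread owns its own copy.
        public ServerThread(MySerializer serializer) {
            this.serializer = serializer;
        }

        public boolean sharesWith(MySerializer other) {
            return serializer == other;
        }
    }

    public static ServerThread spawn(MySerializer shared) {
        // Duplicate at the call site, so the thread never shares mutable state.
        return new ServerThread(shared.duplicate());
    }
}
```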
[GitHub] [flink] flinkbot edited a comment on pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog
flinkbot edited a comment on pull request #12260: URL: https://github.com/apache/flink/pull/12260#issuecomment-631229314

## CI report:

* 7820729185644e576dc8d9c9204f2879a193cba0 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1904)

Bot commands

The @flinkbot bot supports the following commands:
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
[jira] [Updated] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in Java 11
[ https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Metzger updated FLINK-17822: --- Fix Version/s: 1.11.0 > Nightly Flink CLI end-to-end test failed with > "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class > jdk.internal.misc.SharedSecrets" in Java 11 > -- > > Key: FLINK-17822 > URL: https://issues.apache.org/jira/browse/FLINK-17822 > Project: Flink > Issue Type: Bug > Components: Runtime / Task, Tests >Affects Versions: 1.11.0 >Reporter: Dian Fu >Priority: Blocker > Labels: test-stability > Fix For: 1.11.0 > > > Instance: > https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600 > {code} > 2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR > org.apache.flink.util.JavaGcCleanerWrapper [] - FATAL > UNEXPECTED - Failed to invoke waitForReferenceProcessing > 2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class > org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot > access class jdk.internal.misc.SharedSecrets (in module java.base) because > module java.base does not export jdk.internal.misc to unnamed module @54e3658c > 2020-05-19T21:59:39.8830707Z at > jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361) > ~[?:?] > 2020-05-19T21:59:39.8831166Z at > java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) > ~[?:?] > 2020-05-19T21:59:39.8831744Z at > java.lang.reflect.Method.invoke(Method.java:558) ~[?:?] 
> 2020-05-19T21:59:39.8832596Z at > org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess(JavaGcCleanerWrapper.java:362) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8833667Z at > org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.tryRunPendingCleaners(JavaGcCleanerWrapper.java:351) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8834712Z at > org.apache.flink.util.JavaGcCleanerWrapper$CleanerManager.tryRunPendingCleaners(JavaGcCleanerWrapper.java:207) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8835686Z at > org.apache.flink.util.JavaGcCleanerWrapper.tryRunPendingCleaners(JavaGcCleanerWrapper.java:158) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8836652Z at > org.apache.flink.runtime.memory.UnsafeMemoryBudget.reserveMemory(UnsafeMemoryBudget.java:94) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8838033Z at > org.apache.flink.runtime.memory.UnsafeMemoryBudget.verifyEmpty(UnsafeMemoryBudget.java:64) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8839259Z at > org.apache.flink.runtime.memory.MemoryManager.verifyEmpty(MemoryManager.java:172) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8840148Z at > org.apache.flink.runtime.taskexecutor.slot.TaskSlot.verifyMemoryFreed(TaskSlot.java:311) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8841035Z at > org.apache.flink.runtime.taskexecutor.slot.TaskSlot.lambda$closeAsync$1(TaskSlot.java:301) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8841603Z at > java.util.concurrent.CompletableFuture.uniRunNow(CompletableFuture.java:815) > ~[?:?] > 2020-05-19T21:59:39.8842069Z at > java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:799) > ~[?:?] 
> 2020-05-19T21:59:39.8842844Z at > java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2121) > ~[?:?] > 2020-05-19T21:59:39.8843828Z at > org.apache.flink.runtime.taskexecutor.slot.TaskSlot.closeAsync(TaskSlot.java:300) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8844790Z at > org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlotInternal(TaskSlotTableImpl.java:404) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8845754Z at > org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlot(TaskSlotTableImpl.java:365) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8846842Z at > org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlotInternal(TaskExecutor.java:1589) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8847711Z at > org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlot(TaskExecutor.java:967) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8848295Z at >
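The failure above comes from the JDK 9+ module system: `java.base` does not export `jdk.internal.misc` to the unnamed module, so the reflective access attempted by `JavaGcCleanerWrapper` is denied (a common out-of-band workaround is a JVM flag along the lines of `--add-exports java.base/jdk.internal.misc=ALL-UNNAMED`, though the proper fix is in the code). A hedged sketch of the defensive pattern such code should follow — probe the internal API reflectively and degrade gracefully when access is denied, rather than failing the whole code path — using an illustrative helper, not Flink's actual implementation:

```java
import java.lang.reflect.Method;

// Probe whether a class and one of its declared methods are reflectively
// accessible. On Java 9+ the module system can deny access to JDK internals;
// callers should treat "false" as "use a fallback", not as a fatal error.
public class ReflectiveProbe {

    public static boolean canAccessInternal(String className) {
        try {
            Class<?> clazz = Class.forName(className);
            Method m = clazz.getDeclaredMethod("toString");
            m.setAccessible(true); // may throw InaccessibleObjectException on 9+
            return true;
        } catch (ReflectiveOperationException | RuntimeException e) {
            // ClassNotFoundException, NoSuchMethodException,
            // IllegalAccessException, InaccessibleObjectException:
            // degrade gracefully instead of crashing.
            return false;
        }
    }
}
```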
[jira] [Updated] (FLINK-17814) Translate native kubernetes document to Chinese
[ https://issues.apache.org/jira/browse/FLINK-17814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu updated FLINK-17814: Component/s: Documentation chinese-translation > Translate native kubernetes document to Chinese > --- > > Key: FLINK-17814 > URL: https://issues.apache.org/jira/browse/FLINK-17814 > Project: Flink > Issue Type: Task > Components: chinese-translation, Documentation >Reporter: Yang Wang >Priority: Major > > [https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/native_kubernetes.html] > > Translate the native kubernetes document to Chinese. > English updated in 7723774a0402e10bc914b1fa6128e3c80678dafe -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] flinkbot commented on pull request #12262: [FLINK-17817][hotfix] Fix serializer thread safe problem in CollectSinkFunction
flinkbot commented on pull request #12262: URL: https://github.com/apache/flink/pull/12262#issuecomment-631243945

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

## Automated Checks

Last check on commit 62091aabab937a2a802259aface1629fdac676b1 (Wed May 20 05:23:32 UTC 2020)

**Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up to date!
* **This pull request references an unassigned [Jira ticket](https://issues.apache.org/jira/browse/FLINK-17817).** According to the [code contribution guide](https://flink.apache.org/contributing/contribute-code.html), tickets need to be assigned before starting with the implementation work.

Mention the bot in a comment to re-run the automated checks.

## Review Progress

* ❓ 1. The [description] looks good.
* ❓ 2. There is [consensus] that the contribution should go into Flink.
* ❓ 3. Needs [attention] from.
* ❓ 4. The change fits into the overall [architecture].
* ❓ 5. Overall code [quality] is good.

Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items.
For consensus, approval by a Flink committer or PMC member is required.

Bot commands

The @flinkbot bot supports the following commands:
- `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until `architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
[jira] [Updated] (FLINK-17819) Yarn error unhelpful when forgetting HADOOP_CLASSPATH
[ https://issues.apache.org/jira/browse/FLINK-17819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Metzger updated FLINK-17819: --- Labels: usability (was: ) > Yarn error unhelpful when forgetting HADOOP_CLASSPATH > - > > Key: FLINK-17819 > URL: https://issues.apache.org/jira/browse/FLINK-17819 > Project: Flink > Issue Type: Improvement > Components: Deployment / YARN >Reporter: Arvid Heise >Priority: Major > Labels: usability > > When running > {code:bash} > flink run -m yarn-cluster -yjm 1768 -ytm 50072 -ys 32 ... > {code} > without some export HADOOP_CLASSPATH, we get the unhelpful message > {noformat} > Could not build the program from JAR file: JAR file does not exist: -yjm > {noformat} > I'd expect something like > {noformat} > yarn-cluster can only be used with exported HADOOP_CLASSPATH, see for > more information{noformat} > > I suggest to load a stub for YarnCluster deployment if the actual > implementation fails to load, which prints this error when used.
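The improvement the ticket proposes — fail fast with an actionable message when Hadoop is missing, instead of the confusing "JAR file does not exist: -yjm" — can be sketched as a preflight check. This is a hedged illustration with names of our own invention, not Flink's actual client code:

```java
// Illustrative preflight check for yarn-cluster mode: return null when the
// environment looks usable, otherwise an actionable error message. The real
// fix suggested in the ticket is a stub deployment target that prints such a
// message when the YARN implementation fails to load.
public class YarnPreflight {

    public static String checkHadoopEnv(String hadoopClasspath, boolean hadoopOnClasspath) {
        if (hadoopOnClasspath) {
            return null; // Hadoop classes are loadable; nothing to report.
        }
        if (hadoopClasspath == null || hadoopClasspath.isEmpty()) {
            return "yarn-cluster mode requires Hadoop on the classpath. "
                 + "Export HADOOP_CLASSPATH (e.g. `export HADOOP_CLASSPATH=$(hadoop classpath)`) "
                 + "and retry.";
        }
        return "HADOOP_CLASSPATH is set but Hadoop classes could not be loaded; "
             + "check that the paths it contains exist.";
    }
}
```

With such a check in place, forgetting `HADOOP_CLASSPATH` produces a message that names the fix rather than a misparsed-argument error.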
[jira] [Commented] (FLINK-17730) HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart times out
[ https://issues.apache.org/jira/browse/FLINK-17730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111788#comment-17111788 ] Robert Metzger commented on FLINK-17730: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1888=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=eb5f4d19-2d2d-5856-a4ce-acf5f904a994 > HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart > times out > > > Key: FLINK-17730 > URL: https://issues.apache.org/jira/browse/FLINK-17730 > Project: Flink > Issue Type: Bug > Components: Build System / Azure Pipelines, FileSystems, Tests >Reporter: Robert Metzger >Assignee: Robert Metzger >Priority: Major > Labels: pull-request-available, test-stability > Fix For: 1.12.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1374=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8 > After 5 minutes > {code} > 2020-05-15T06:56:38.1688341Z "main" #1 prio=5 os_prio=0 > tid=0x7fa10800b800 nid=0x1161 runnable [0x7fa110959000] > 2020-05-15T06:56:38.1688709Zjava.lang.Thread.State: RUNNABLE > 2020-05-15T06:56:38.1689028Z at > java.net.SocketInputStream.socketRead0(Native Method) > 2020-05-15T06:56:38.1689496Z at > java.net.SocketInputStream.socketRead(SocketInputStream.java:116) > 2020-05-15T06:56:38.1689921Z at > java.net.SocketInputStream.read(SocketInputStream.java:171) > 2020-05-15T06:56:38.1690316Z at > java.net.SocketInputStream.read(SocketInputStream.java:141) > 2020-05-15T06:56:38.1690723Z at > sun.security.ssl.InputRecord.readFully(InputRecord.java:465) > 2020-05-15T06:56:38.1691196Z at > sun.security.ssl.InputRecord.readV3Record(InputRecord.java:593) > 2020-05-15T06:56:38.1691608Z at > sun.security.ssl.InputRecord.read(InputRecord.java:532) > 2020-05-15T06:56:38.1692023Z at > sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975) > 2020-05-15T06:56:38.1692558Z - locked <0xb94644f8> (a > java.lang.Object) > 
2020-05-15T06:56:38.1692946Z at > sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933) > 2020-05-15T06:56:38.1693371Z at > sun.security.ssl.AppInputStream.read(AppInputStream.java:105) > 2020-05-15T06:56:38.1694151Z - locked <0xb9464d20> (a > sun.security.ssl.AppInputStream) > 2020-05-15T06:56:38.1694908Z at > org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) > 2020-05-15T06:56:38.1695475Z at > org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198) > 2020-05-15T06:56:38.1696007Z at > org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176) > 2020-05-15T06:56:38.1696509Z at > org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135) > 2020-05-15T06:56:38.1696993Z at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90) > 2020-05-15T06:56:38.1697466Z at > com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180) > 2020-05-15T06:56:38.1698069Z at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90) > 2020-05-15T06:56:38.1698567Z at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90) > 2020-05-15T06:56:38.1699041Z at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90) > 2020-05-15T06:56:38.1699624Z at > com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180) > 2020-05-15T06:56:38.1700090Z at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90) > 2020-05-15T06:56:38.1700584Z at > com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107) > 2020-05-15T06:56:38.1701282Z at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90) > 2020-05-15T06:56:38.1701800Z at > com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125) > 2020-05-15T06:56:38.1702328Z at > 
com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90) > 2020-05-15T06:56:38.1702804Z at > org.apache.hadoop.fs.s3a.S3AInputStream.lambda$read$3(S3AInputStream.java:445) > 2020-05-15T06:56:38.1703270Z at > org.apache.hadoop.fs.s3a.S3AInputStream$$Lambda$42/1204178174.execute(Unknown > Source) > 2020-05-15T06:56:38.1703677Z at > org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109) > 2020-05-15T06:56:38.1704090Z at > org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260) > 2020-05-15T06:56:38.1704607Z at > org.apache.hadoop.fs.s3a.Invoker$$Lambda$23/1991724700.execute(Unknown Source) > 2020-05-15T06:56:38.1705115Z at >
[jira] [Reopened] (FLINK-17730) HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart times out
[ https://issues.apache.org/jira/browse/FLINK-17730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Metzger reopened FLINK-17730: > HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart > times out > > > Key: FLINK-17730 > URL: https://issues.apache.org/jira/browse/FLINK-17730 > Project: Flink > Issue Type: Bug > Components: Build System / Azure Pipelines, FileSystems, Tests >Reporter: Robert Metzger >Assignee: Robert Metzger >Priority: Major > Labels: pull-request-available, test-stability > Fix For: 1.12.0 > > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1374=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8 > After 5 minutes > {code} > 2020-05-15T06:56:38.1688341Z "main" #1 prio=5 os_prio=0 > tid=0x7fa10800b800 nid=0x1161 runnable [0x7fa110959000] > 2020-05-15T06:56:38.1688709Zjava.lang.Thread.State: RUNNABLE > 2020-05-15T06:56:38.1689028Z at > java.net.SocketInputStream.socketRead0(Native Method) > 2020-05-15T06:56:38.1689496Z at > java.net.SocketInputStream.socketRead(SocketInputStream.java:116) > 2020-05-15T06:56:38.1689921Z at > java.net.SocketInputStream.read(SocketInputStream.java:171) > 2020-05-15T06:56:38.1690316Z at > java.net.SocketInputStream.read(SocketInputStream.java:141) > 2020-05-15T06:56:38.1690723Z at > sun.security.ssl.InputRecord.readFully(InputRecord.java:465) > 2020-05-15T06:56:38.1691196Z at > sun.security.ssl.InputRecord.readV3Record(InputRecord.java:593) > 2020-05-15T06:56:38.1691608Z at > sun.security.ssl.InputRecord.read(InputRecord.java:532) > 2020-05-15T06:56:38.1692023Z at > sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975) > 2020-05-15T06:56:38.1692558Z - locked <0xb94644f8> (a > java.lang.Object) > 2020-05-15T06:56:38.1692946Z at > sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933) > 2020-05-15T06:56:38.1693371Z at > sun.security.ssl.AppInputStream.read(AppInputStream.java:105) > 
2020-05-15T06:56:38.1694151Z - locked <0xb9464d20> (a > sun.security.ssl.AppInputStream) > 2020-05-15T06:56:38.1694908Z at > org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) > 2020-05-15T06:56:38.1695475Z at > org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198) > 2020-05-15T06:56:38.1696007Z at > org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176) > 2020-05-15T06:56:38.1696509Z at > org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135) > 2020-05-15T06:56:38.1696993Z at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90) > 2020-05-15T06:56:38.1697466Z at > com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180) > 2020-05-15T06:56:38.1698069Z at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90) > 2020-05-15T06:56:38.1698567Z at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90) > 2020-05-15T06:56:38.1699041Z at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90) > 2020-05-15T06:56:38.1699624Z at > com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180) > 2020-05-15T06:56:38.1700090Z at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90) > 2020-05-15T06:56:38.1700584Z at > com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107) > 2020-05-15T06:56:38.1701282Z at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90) > 2020-05-15T06:56:38.1701800Z at > com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125) > 2020-05-15T06:56:38.1702328Z at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90) > 2020-05-15T06:56:38.1702804Z at > org.apache.hadoop.fs.s3a.S3AInputStream.lambda$read$3(S3AInputStream.java:445) > 2020-05-15T06:56:38.1703270Z at 
> org.apache.hadoop.fs.s3a.S3AInputStream$$Lambda$42/1204178174.execute(Unknown > Source) > 2020-05-15T06:56:38.1703677Z at > org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109) > 2020-05-15T06:56:38.1704090Z at > org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260) > 2020-05-15T06:56:38.1704607Z at > org.apache.hadoop.fs.s3a.Invoker$$Lambda$23/1991724700.execute(Unknown Source) > 2020-05-15T06:56:38.1705115Z at > org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317) > 2020-05-15T06:56:38.1705551Z at > org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:256) > 2020-05-15T06:56:38.1705937Z at >
[jira] [Updated] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in Java 11
[ https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Metzger updated FLINK-17822: --- Priority: Blocker (was: Major) > Nightly Flink CLI end-to-end test failed with > "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class > jdk.internal.misc.SharedSecrets" in Java 11 > -- > > Key: FLINK-17822 > URL: https://issues.apache.org/jira/browse/FLINK-17822 > Project: Flink > Issue Type: Bug > Components: Runtime / Task, Tests >Affects Versions: 1.11.0 >Reporter: Dian Fu >Priority: Blocker > Labels: test-stability > > Instance: > https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600 > {code} > 2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR > org.apache.flink.util.JavaGcCleanerWrapper [] - FATAL > UNEXPECTED - Failed to invoke waitForReferenceProcessing > 2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class > org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot > access class jdk.internal.misc.SharedSecrets (in module java.base) because > module java.base does not export jdk.internal.misc to unnamed module @54e3658c > 2020-05-19T21:59:39.8830707Z at > jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361) > ~[?:?] > 2020-05-19T21:59:39.8831166Z at > java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) > ~[?:?] > 2020-05-19T21:59:39.8831744Z at > java.lang.reflect.Method.invoke(Method.java:558) ~[?:?] 
> 2020-05-19T21:59:39.8832596Z at > org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess(JavaGcCleanerWrapper.java:362) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8833667Z at > org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.tryRunPendingCleaners(JavaGcCleanerWrapper.java:351) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8834712Z at > org.apache.flink.util.JavaGcCleanerWrapper$CleanerManager.tryRunPendingCleaners(JavaGcCleanerWrapper.java:207) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8835686Z at > org.apache.flink.util.JavaGcCleanerWrapper.tryRunPendingCleaners(JavaGcCleanerWrapper.java:158) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8836652Z at > org.apache.flink.runtime.memory.UnsafeMemoryBudget.reserveMemory(UnsafeMemoryBudget.java:94) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8838033Z at > org.apache.flink.runtime.memory.UnsafeMemoryBudget.verifyEmpty(UnsafeMemoryBudget.java:64) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8839259Z at > org.apache.flink.runtime.memory.MemoryManager.verifyEmpty(MemoryManager.java:172) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8840148Z at > org.apache.flink.runtime.taskexecutor.slot.TaskSlot.verifyMemoryFreed(TaskSlot.java:311) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8841035Z at > org.apache.flink.runtime.taskexecutor.slot.TaskSlot.lambda$closeAsync$1(TaskSlot.java:301) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8841603Z at > java.util.concurrent.CompletableFuture.uniRunNow(CompletableFuture.java:815) > ~[?:?] > 2020-05-19T21:59:39.8842069Z at > java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:799) > ~[?:?] 
> 2020-05-19T21:59:39.8842844Z at > java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2121) > ~[?:?] > 2020-05-19T21:59:39.8843828Z at > org.apache.flink.runtime.taskexecutor.slot.TaskSlot.closeAsync(TaskSlot.java:300) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8844790Z at > org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlotInternal(TaskSlotTableImpl.java:404) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8845754Z at > org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlot(TaskSlotTableImpl.java:365) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8846842Z at > org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlotInternal(TaskExecutor.java:1589) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8847711Z at > org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlot(TaskExecutor.java:967) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8848295Z at > jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[jira] [Updated] (FLINK-17817) CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase
[ https://issues.apache.org/jira/browse/FLINK-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-17817: --- Labels: pull-request-available test-stability (was: test-stability) > CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase > - > > Key: FLINK-17817 > URL: https://issues.apache.org/jira/browse/FLINK-17817 > Project: Flink > Issue Type: Bug > Components: API / DataStream, Tests >Affects Versions: 1.11.0 >Reporter: Robert Metzger >Priority: Blocker > Labels: pull-request-available, test-stability > Fix For: 1.11.0 > > > CI: > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1826=logs=e25d5e7e-2a9c-5589-4940-0b638d75a414=f83cd372-208c-5ec4-12a8-337462457129 > {code} > 2020-05-19T10:34:18.3224679Z [ERROR] > testSingleAggOnTable_SortAgg(org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase) > Time elapsed: 7.537 s <<< ERROR! > 2020-05-19T10:34:18.3225273Z java.lang.RuntimeException: Failed to fetch next > result > 2020-05-19T10:34:18.3227634Z at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:92) > 2020-05-19T10:34:18.3228518Z at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:63) > 2020-05-19T10:34:18.3229170Z at > org.apache.flink.shaded.guava18.com.google.common.collect.Iterators.addAll(Iterators.java:361) > 2020-05-19T10:34:18.3229863Z at > org.apache.flink.shaded.guava18.com.google.common.collect.Lists.newArrayList(Lists.java:160) > 2020-05-19T10:34:18.3230586Z at > org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300) > 2020-05-19T10:34:18.3231303Z at > org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:141) > 2020-05-19T10:34:18.3231996Z at > 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:107) > 2020-05-19T10:34:18.3232847Z at > org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable(AggregateReduceGroupingITCase.scala:176) > 2020-05-19T10:34:18.3233694Z at > org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable_SortAgg(AggregateReduceGroupingITCase.scala:122) > 2020-05-19T10:34:18.3234461Z at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2020-05-19T10:34:18.3234983Z at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2020-05-19T10:34:18.3235632Z at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2020-05-19T10:34:18.3236615Z at > java.lang.reflect.Method.invoke(Method.java:498) > 2020-05-19T10:34:18.3237256Z at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2020-05-19T10:34:18.3237965Z at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2020-05-19T10:34:18.3238750Z at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2020-05-19T10:34:18.3239314Z at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2020-05-19T10:34:18.3239838Z at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > 2020-05-19T10:34:18.3240362Z at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > 2020-05-19T10:34:18.3240803Z at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > 2020-05-19T10:34:18.3243624Z at > org.junit.rules.RunRules.evaluate(RunRules.java:20) > 2020-05-19T10:34:18.3244531Z at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > 2020-05-19T10:34:18.3245325Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > 
2020-05-19T10:34:18.3246086Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > 2020-05-19T10:34:18.3246765Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-19T10:34:18.3247390Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-19T10:34:18.3248012Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-19T10:34:18.3248779Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-19T10:34:18.3249417Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-19T10:34:18.3250357Z at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > 2020-05-19T10:34:18.3251021Z at >
[GitHub] [flink] TsReaper opened a new pull request #12262: [FLINK-17817][hotfix] Fix serializer thread safe problem in CollectSinkFunction
TsReaper opened a new pull request #12262: URL: https://github.com/apache/flink/pull/12262 ## What is the purpose of the change This is a hotfix for `CollectSinkFunction`. `TypeSerializer`s are not thread-safe, but `CollectSinkFunction` currently reuses them across two threads. This PR fixes that problem. ## Brief change log - Fix the serializer thread-safety problem in CollectSinkFunction ## Verifying this change This change is already covered by existing tests. ## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): no - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no - The serializers: no - The runtime per-record code paths (performance sensitive): no - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no - The S3 file system connector: no ## Documentation - Does this pull request introduce a new feature? no - If yes, how is the feature documented? not applicable This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
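The problem the PR describes, a stateful serializer shared between two threads, is commonly fixed by giving each thread its own copy of the serializer (Flink's `TypeSerializer` exposes `duplicate()` for this purpose). The sketch below illustrates the pattern with a hypothetical stand-alone `BufferingSerializer`; the class and its fields are inventions for illustration, not Flink code or the actual fix.

```java
// Illustrative stand-in for a stateful, non-thread-safe serializer.
// The reused scratch buffer is what makes concurrent use unsafe.
final class BufferingSerializer {
    private final StringBuilder scratch = new StringBuilder();

    byte[] serialize(String value) {
        scratch.setLength(0); // reset the reused buffer: racy if shared across threads
        scratch.append(value.length()).append(':').append(value);
        return scratch.toString().getBytes(java.nio.charset.StandardCharsets.UTF_8);
    }

    // Each thread must work on its own copy of the serializer.
    BufferingSerializer duplicate() {
        return new BufferingSerializer();
    }
}

public class SerializerPerThread {
    // Duplicate before using the serializer on another thread, so the
    // mutable scratch state is never shared.
    public static byte[] safeSerialize(BufferingSerializer shared, String value) {
        BufferingSerializer mine = shared.duplicate();
        return mine.serialize(value);
    }

    public static void main(String[] args) {
        BufferingSerializer shared = new BufferingSerializer();
        byte[] out = safeSerialize(shared, "hello");
        System.out.println(new String(out, java.nio.charset.StandardCharsets.UTF_8));
    }
}
```

The key design point is that duplication is cheap relative to the cost of synchronizing every serialization call, which is why per-thread copies are the idiomatic remedy.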
[jira] [Commented] (FLINK-17745) PackagedProgram' extractedTempLibraries and jarfiles may be duplicate
[ https://issues.apache.org/jira/browse/FLINK-17745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111784#comment-17111784 ] Yang Wang commented on FLINK-17745: --- [~Echo Lee] So this is not a problem and we could directly close this ticket. Right? As a follow-up, maybe we should supplement the documentation on the structure of the fat jar. > PackagedProgram' extractedTempLibraries and jarfiles may be duplicate > - > > Key: FLINK-17745 > URL: https://issues.apache.org/jira/browse/FLINK-17745 > Project: Flink > Issue Type: Improvement > Components: Client / Job Submission >Reporter: Echo Lee >Assignee: Kostas Kloudas >Priority: Major > Labels: pull-request-available > > When I submit a Flink app with a fat jar, PackagedProgram will extract temp > libraries from the fat jar and add them to pipeline.jars, so pipeline.jars > contains both the fat jar and the temp libraries. I don't think we should add the fat jar to > pipeline.jars if extractedTempLibraries is not empty. -- This message was sent by Atlassian Jira (v8.3.4#803005)
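The reporter's proposal, which the comment above questions and which may not have been adopted, amounts to a small conditional: if nested libraries were already extracted from the fat jar, ship only those libraries instead of the fat jar. This is a hypothetical sketch of that check; `buildPipelineJars` and its signature are invented for illustration and are not actual PackagedProgram code.

```java
import java.util.ArrayList;
import java.util.List;

public class PipelineJars {

    // Sketch of the proposed behavior: when libraries have already been
    // extracted from the fat jar, ship only the extracted libraries so that
    // pipeline.jars does not carry the same classes twice.
    public static List<String> buildPipelineJars(String fatJar, List<String> extractedTempLibraries) {
        List<String> jars = new ArrayList<>();
        if (extractedTempLibraries.isEmpty()) {
            // Nothing was extracted: the fat jar itself must be shipped.
            jars.add(fatJar);
        } else {
            // Libraries were extracted: ship them and skip the fat jar.
            jars.addAll(extractedTempLibraries);
        }
        return jars;
    }

    public static void main(String[] args) {
        List<String> extracted = new ArrayList<>();
        extracted.add("lib-a.jar");
        extracted.add("lib-b.jar");
        // The fat jar is omitted because its libraries were extracted.
        System.out.println(buildPipelineJars("app-fat.jar", extracted));
    }
}
```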
[jira] [Created] (FLINK-17824) "Resuming Savepoint" e2e stalls indefinitely
Robert Metzger created FLINK-17824: -- Summary: "Resuming Savepoint" e2e stalls indefinitely Key: FLINK-17824 URL: https://issues.apache.org/jira/browse/FLINK-17824 Project: Flink Issue Type: Bug Components: Runtime / Checkpointing, Tests Reporter: Robert Metzger CI; https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1887=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a=94459a52-42b6-5bfc-5d74-690b5d3c6de8 {code} 2020-05-19T21:05:52.9696236Z == 2020-05-19T21:05:52.9696860Z Running 'Resuming Savepoint (file, async, scale down) end-to-end test' 2020-05-19T21:05:52.9697243Z == 2020-05-19T21:05:52.9713094Z TEST_DATA_DIR: /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-52970362751 2020-05-19T21:05:53.1194478Z Flink dist directory: /home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT 2020-05-19T21:05:53.2180375Z Starting cluster. 2020-05-19T21:05:53.9986167Z Starting standalonesession daemon on host fv-az558. 2020-05-19T21:05:55.5997224Z Starting taskexecutor daemon on host fv-az558. 2020-05-19T21:05:55.6223837Z Waiting for Dispatcher REST endpoint to come up... 2020-05-19T21:05:57.0552482Z Waiting for Dispatcher REST endpoint to come up... 2020-05-19T21:05:57.9446865Z Waiting for Dispatcher REST endpoint to come up... 2020-05-19T21:05:59.0098434Z Waiting for Dispatcher REST endpoint to come up... 2020-05-19T21:06:00.0569710Z Dispatcher REST endpoint is up. 2020-05-19T21:06:07.7099937Z Job (a92a74de8446a80403798bb4806b73f3) is running. 2020-05-19T21:06:07.7855906Z Waiting for job to process up to 200 records, current progress: 114 records ... 2020-05-19T21:06:55.5755111Z 2020-05-19T21:06:55.5756550Z 2020-05-19T21:06:55.5757225Z The program finished with the following exception: 2020-05-19T21:06:55.5757566Z 2020-05-19T21:06:55.5765453Z org.apache.flink.util.FlinkException: Could not stop with a savepoint job "a92a74de8446a80403798bb4806b73f3". 
2020-05-19T21:06:55.5766873Zat org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:485) 2020-05-19T21:06:55.5767980Zat org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:854) 2020-05-19T21:06:55.5769014Zat org.apache.flink.client.cli.CliFrontend.stop(CliFrontend.java:477) 2020-05-19T21:06:55.5770052Zat org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:921) 2020-05-19T21:06:55.5771107Zat org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:982) 2020-05-19T21:06:55.5772223Zat org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30) 2020-05-19T21:06:55.5773325Zat org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:982) 2020-05-19T21:06:55.5774871Z Caused by: java.util.concurrent.ExecutionException: java.util.concurrent.CompletionException: java.util.concurrent.CompletionException: org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint Coordinator is suspending. 2020-05-19T21:06:55.5777183Zat java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) 2020-05-19T21:06:55.5778884Zat java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928) 2020-05-19T21:06:55.5779920Zat org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:483) 2020-05-19T21:06:55.5781175Z... 6 more 2020-05-19T21:06:55.5782391Z Caused by: java.util.concurrent.CompletionException: java.util.concurrent.CompletionException: org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint Coordinator is suspending. 
2020-05-19T21:06:55.5783885Zat org.apache.flink.runtime.scheduler.SchedulerBase.lambda$stopWithSavepoint$9(SchedulerBase.java:890) 2020-05-19T21:06:55.5784992Zat java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836) 2020-05-19T21:06:55.5786492Zat java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811) 2020-05-19T21:06:55.5787601Zat java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456) 2020-05-19T21:06:55.5788682Zat org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:402) 2020-05-19T21:06:55.5790308Zat org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:195) 2020-05-19T21:06:55.5791664Zat org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74) 2020-05-19T21:06:55.5792767Zat org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152) 2020-05-19T21:06:55.5793756Zat
[jira] [Assigned] (FLINK-17824) "Resuming Savepoint" e2e stalls indefinitely
[ https://issues.apache.org/jira/browse/FLINK-17824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Metzger reassigned FLINK-17824: -- Assignee: Robert Metzger > "Resuming Savepoint" e2e stalls indefinitely > - > > Key: FLINK-17824 > URL: https://issues.apache.org/jira/browse/FLINK-17824 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing, Tests >Reporter: Robert Metzger >Assignee: Robert Metzger >Priority: Major > Labels: test-stability > > CI; > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1887=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a=94459a52-42b6-5bfc-5d74-690b5d3c6de8 > {code} > 2020-05-19T21:05:52.9696236Z > == > 2020-05-19T21:05:52.9696860Z Running 'Resuming Savepoint (file, async, scale > down) end-to-end test' > 2020-05-19T21:05:52.9697243Z > == > 2020-05-19T21:05:52.9713094Z TEST_DATA_DIR: > /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-52970362751 > 2020-05-19T21:05:53.1194478Z Flink dist directory: > /home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT > 2020-05-19T21:05:53.2180375Z Starting cluster. > 2020-05-19T21:05:53.9986167Z Starting standalonesession daemon on host > fv-az558. > 2020-05-19T21:05:55.5997224Z Starting taskexecutor daemon on host fv-az558. > 2020-05-19T21:05:55.6223837Z Waiting for Dispatcher REST endpoint to come > up... > 2020-05-19T21:05:57.0552482Z Waiting for Dispatcher REST endpoint to come > up... > 2020-05-19T21:05:57.9446865Z Waiting for Dispatcher REST endpoint to come > up... > 2020-05-19T21:05:59.0098434Z Waiting for Dispatcher REST endpoint to come > up... > 2020-05-19T21:06:00.0569710Z Dispatcher REST endpoint is up. > 2020-05-19T21:06:07.7099937Z Job (a92a74de8446a80403798bb4806b73f3) is > running. > 2020-05-19T21:06:07.7855906Z Waiting for job to process up to 200 records, > current progress: 114 records ... 
> 2020-05-19T21:06:55.5755111Z > 2020-05-19T21:06:55.5756550Z > > 2020-05-19T21:06:55.5757225Z The program finished with the following > exception: > 2020-05-19T21:06:55.5757566Z > 2020-05-19T21:06:55.5765453Z org.apache.flink.util.FlinkException: Could not > stop with a savepoint job "a92a74de8446a80403798bb4806b73f3". > 2020-05-19T21:06:55.5766873Z at > org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:485) > 2020-05-19T21:06:55.5767980Z at > org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:854) > 2020-05-19T21:06:55.5769014Z at > org.apache.flink.client.cli.CliFrontend.stop(CliFrontend.java:477) > 2020-05-19T21:06:55.5770052Z at > org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:921) > 2020-05-19T21:06:55.5771107Z at > org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:982) > 2020-05-19T21:06:55.5772223Z at > org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30) > 2020-05-19T21:06:55.5773325Z at > org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:982) > 2020-05-19T21:06:55.5774871Z Caused by: > java.util.concurrent.ExecutionException: > java.util.concurrent.CompletionException: > java.util.concurrent.CompletionException: > org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint > Coordinator is suspending. > 2020-05-19T21:06:55.5777183Z at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > 2020-05-19T21:06:55.5778884Z at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928) > 2020-05-19T21:06:55.5779920Z at > org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:483) > 2020-05-19T21:06:55.5781175Z ... 6 more > 2020-05-19T21:06:55.5782391Z Caused by: > java.util.concurrent.CompletionException: > java.util.concurrent.CompletionException: > org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint > Coordinator is suspending. 
> 2020-05-19T21:06:55.5783885Z at > org.apache.flink.runtime.scheduler.SchedulerBase.lambda$stopWithSavepoint$9(SchedulerBase.java:890) > 2020-05-19T21:06:55.5784992Z at > java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836) > 2020-05-19T21:06:55.5786492Z at > java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811) > 2020-05-19T21:06:55.5787601Z at > java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456) > 2020-05-19T21:06:55.5788682Z at >
[jira] [Closed] (FLINK-17821) Kafka010TableITCase>KafkaTableTestBase.testKafkaSourceSink failed on AZP
[ https://issues.apache.org/jira/browse/FLINK-17821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhu Zhu closed FLINK-17821. --- Resolution: Duplicate > Kafka010TableITCase>KafkaTableTestBase.testKafkaSourceSink failed on AZP > > > Key: FLINK-17821 > URL: https://issues.apache.org/jira/browse/FLINK-17821 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.12.0 >Reporter: Zhu Zhu >Priority: Critical > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1871=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8=12032 > 2020-05-19T16:29:40.7239430Z Test testKafkaSourceSink[legacy = false, topicId > = 1](org.apache.flink.streaming.connectors.kafka.table.Kafka010TableITCase) > failed with: > 2020-05-19T16:29:40.7240291Z java.util.concurrent.ExecutionException: > org.apache.flink.runtime.client.JobExecutionException: Job execution failed. > 2020-05-19T16:29:40.7241033Z at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > 2020-05-19T16:29:40.7241542Z at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) > 2020-05-19T16:29:40.7242127Z at > org.apache.flink.table.planner.runtime.utils.TableEnvUtil$.execInsertSqlAndWaitResult(TableEnvUtil.scala:31) > 2020-05-19T16:29:40.7242729Z at > org.apache.flink.table.planner.runtime.utils.TableEnvUtil.execInsertSqlAndWaitResult(TableEnvUtil.scala) > 2020-05-19T16:29:40.7243239Z at > org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestBase.testKafkaSourceSink(KafkaTableTestBase.java:145) > 2020-05-19T16:29:40.7243691Z at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2020-05-19T16:29:40.7244273Z at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2020-05-19T16:29:40.7244729Z at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2020-05-19T16:29:40.7245117Z at > 
java.lang.reflect.Method.invoke(Method.java:498) > 2020-05-19T16:29:40.7245515Z at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2020-05-19T16:29:40.7245956Z at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2020-05-19T16:29:40.7246419Z at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2020-05-19T16:29:40.7246870Z at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2020-05-19T16:29:40.7247287Z at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > 2020-05-19T16:29:40.7251320Z at > org.junit.rules.RunRules.evaluate(RunRules.java:20) > 2020-05-19T16:29:40.7251833Z at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > 2020-05-19T16:29:40.7252251Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > 2020-05-19T16:29:40.7252716Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > 2020-05-19T16:29:40.7253117Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-19T16:29:40.7253502Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-19T16:29:40.7254041Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-19T16:29:40.7254528Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-19T16:29:40.7255500Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-19T16:29:40.7256064Z at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > 2020-05-19T16:29:40.7256438Z at > org.junit.runners.Suite.runChild(Suite.java:128) > 2020-05-19T16:29:40.7256758Z at > org.junit.runners.Suite.runChild(Suite.java:27) > 2020-05-19T16:29:40.7257118Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-19T16:29:40.7257486Z at > 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-19T16:29:40.7257885Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-19T16:29:40.7258389Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-19T16:29:40.7258821Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-19T16:29:40.7259219Z at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > 2020-05-19T16:29:40.7259664Z at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > 2020-05-19T16:29:40.7260098Z at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > 2020-05-19T16:29:40.7260635Z at > org.junit.rules.RunRules.evaluate(RunRules.java:20) >
[jira] [Commented] (FLINK-17821) Kafka010TableITCase>KafkaTableTestBase.testKafkaSourceSink failed on AZP
[ https://issues.apache.org/jira/browse/FLINK-17821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111779#comment-17111779 ] Zhu Zhu commented on FLINK-17821: - [~wanglijie95] yes, it's the same root cause. Thanks for the information! > Kafka010TableITCase>KafkaTableTestBase.testKafkaSourceSink failed on AZP > > > Key: FLINK-17821 > URL: https://issues.apache.org/jira/browse/FLINK-17821 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.12.0 >Reporter: Zhu Zhu >Priority: Critical > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1871=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8=12032 > 2020-05-19T16:29:40.7239430Z Test testKafkaSourceSink[legacy = false, topicId > = 1](org.apache.flink.streaming.connectors.kafka.table.Kafka010TableITCase) > failed with: > 2020-05-19T16:29:40.7240291Z java.util.concurrent.ExecutionException: > org.apache.flink.runtime.client.JobExecutionException: Job execution failed. 
> 2020-05-19T16:29:40.7241033Z at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > 2020-05-19T16:29:40.7241542Z at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) > 2020-05-19T16:29:40.7242127Z at > org.apache.flink.table.planner.runtime.utils.TableEnvUtil$.execInsertSqlAndWaitResult(TableEnvUtil.scala:31) > 2020-05-19T16:29:40.7242729Z at > org.apache.flink.table.planner.runtime.utils.TableEnvUtil.execInsertSqlAndWaitResult(TableEnvUtil.scala) > 2020-05-19T16:29:40.7243239Z at > org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestBase.testKafkaSourceSink(KafkaTableTestBase.java:145) > 2020-05-19T16:29:40.7243691Z at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2020-05-19T16:29:40.7244273Z at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2020-05-19T16:29:40.7244729Z at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2020-05-19T16:29:40.7245117Z at > java.lang.reflect.Method.invoke(Method.java:498) > 2020-05-19T16:29:40.7245515Z at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2020-05-19T16:29:40.7245956Z at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2020-05-19T16:29:40.7246419Z at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2020-05-19T16:29:40.7246870Z at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2020-05-19T16:29:40.7247287Z at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > 2020-05-19T16:29:40.7251320Z at > org.junit.rules.RunRules.evaluate(RunRules.java:20) > 2020-05-19T16:29:40.7251833Z at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > 2020-05-19T16:29:40.7252251Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > 2020-05-19T16:29:40.7252716Z at > 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > 2020-05-19T16:29:40.7253117Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-19T16:29:40.7253502Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-19T16:29:40.7254041Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-19T16:29:40.7254528Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-19T16:29:40.7255500Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-19T16:29:40.7256064Z at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > 2020-05-19T16:29:40.7256438Z at > org.junit.runners.Suite.runChild(Suite.java:128) > 2020-05-19T16:29:40.7256758Z at > org.junit.runners.Suite.runChild(Suite.java:27) > 2020-05-19T16:29:40.7257118Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-19T16:29:40.7257486Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-19T16:29:40.7257885Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-19T16:29:40.7258389Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-19T16:29:40.7258821Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-19T16:29:40.7259219Z at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > 2020-05-19T16:29:40.7259664Z at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > 2020-05-19T16:29:40.7260098Z at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) >
[GitHub] [flink] flinkbot edited a comment on pull request #12261: [FLINK-17823][network] Resolve the race condition while releasing RemoteInputChannel
flinkbot edited a comment on pull request #12261: URL: https://github.com/apache/flink/pull/12261#issuecomment-631229356 ## CI report: * 26afeb03aa30f84994a8aa85ca2d223d44672067 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1905) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #12181: [FLINK-17645][runtime] Fix SafetyNetCloseableRegistry constructor bug.
flinkbot edited a comment on pull request #12181: URL: https://github.com/apache/flink/pull/12181#issuecomment-629344595 ## CI report: * 0bf2aa2f54e22e76fed071e3c614139d4d187fc4 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1860) * bd9add8e480455265ca95b863601f6608918b334 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog
flinkbot edited a comment on pull request #12260: URL: https://github.com/apache/flink/pull/12260#issuecomment-631229314 ## CI report: * 7820729185644e576dc8d9c9204f2879a193cba0 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1904)
[GitHub] [flink] twentyworld commented on pull request #12237: [FLINK-17290] [chinese-translation, Documentation / Training] Transla…
twentyworld commented on pull request #12237: URL: https://github.com/apache/flink/pull/12237#issuecomment-631234411 Thank you. Many of the pointers here are things I did not know before translating. I will go through everyone's comments again, rework the translation following the contribution guide, and verify it by building the docs. If I have any questions, I will raise them and ask for help.
[GitHub] [flink] flinkbot commented on pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog
flinkbot commented on pull request #12260: URL: https://github.com/apache/flink/pull/12260#issuecomment-631229314 ## CI report: * 7820729185644e576dc8d9c9204f2879a193cba0 UNKNOWN
[GitHub] [flink] flinkbot commented on pull request #12261: [FLINK-17823][network] Resolve the race condition while releasing RemoteInputChannel
flinkbot commented on pull request #12261: URL: https://github.com/apache/flink/pull/12261#issuecomment-631229356 ## CI report: * 26afeb03aa30f84994a8aa85ca2d223d44672067 UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #12240: [FLINK-15792][k8s] Make Flink logs accessible via kubectl logs per default
flinkbot edited a comment on pull request #12240: URL: https://github.com/apache/flink/pull/12240#issuecomment-630661048 ## CI report: * fc462938ff28feca6fd689f6e51e1fca79efe975 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1901)
[GitHub] [flink] flinkbot edited a comment on pull request #11175: [FLINK-16197][hive] Failed to query partitioned table when partition …
flinkbot edited a comment on pull request #11175: URL: https://github.com/apache/flink/pull/11175#issuecomment-589671100 ## CI report: * 7cf8bc2371f60ce02daec08bda96b30e8ab94a32 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1900)
[jira] [Commented] (FLINK-17817) CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase
[ https://issues.apache.org/jira/browse/FLINK-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111754#comment-17111754 ] Caizhi Weng commented on FLINK-17817: - Thanks for the report. This is because type serializers are not thread safe but I didn't duplicate it in the sink function. I'll fix this immediately. > CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase > - > > Key: FLINK-17817 > URL: https://issues.apache.org/jira/browse/FLINK-17817 > Project: Flink > Issue Type: Bug > Components: API / DataStream, Tests >Affects Versions: 1.11.0 >Reporter: Robert Metzger >Priority: Blocker > Labels: test-stability > Fix For: 1.11.0 > > > CI: > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1826=logs=e25d5e7e-2a9c-5589-4940-0b638d75a414=f83cd372-208c-5ec4-12a8-337462457129 > {code} > 2020-05-19T10:34:18.3224679Z [ERROR] > testSingleAggOnTable_SortAgg(org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase) > Time elapsed: 7.537 s <<< ERROR! 
> 2020-05-19T10:34:18.3225273Z java.lang.RuntimeException: Failed to fetch next > result > 2020-05-19T10:34:18.3227634Z at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:92) > 2020-05-19T10:34:18.3228518Z at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:63) > 2020-05-19T10:34:18.3229170Z at > org.apache.flink.shaded.guava18.com.google.common.collect.Iterators.addAll(Iterators.java:361) > 2020-05-19T10:34:18.3229863Z at > org.apache.flink.shaded.guava18.com.google.common.collect.Lists.newArrayList(Lists.java:160) > 2020-05-19T10:34:18.3230586Z at > org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300) > 2020-05-19T10:34:18.3231303Z at > org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:141) > 2020-05-19T10:34:18.3231996Z at > org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:107) > 2020-05-19T10:34:18.3232847Z at > org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable(AggregateReduceGroupingITCase.scala:176) > 2020-05-19T10:34:18.3233694Z at > org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable_SortAgg(AggregateReduceGroupingITCase.scala:122) > 2020-05-19T10:34:18.3234461Z at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2020-05-19T10:34:18.3234983Z at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2020-05-19T10:34:18.3235632Z at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2020-05-19T10:34:18.3236615Z at > java.lang.reflect.Method.invoke(Method.java:498) > 2020-05-19T10:34:18.3237256Z at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2020-05-19T10:34:18.3237965Z at > 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2020-05-19T10:34:18.3238750Z at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2020-05-19T10:34:18.3239314Z at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2020-05-19T10:34:18.3239838Z at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > 2020-05-19T10:34:18.3240362Z at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > 2020-05-19T10:34:18.3240803Z at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > 2020-05-19T10:34:18.3243624Z at > org.junit.rules.RunRules.evaluate(RunRules.java:20) > 2020-05-19T10:34:18.3244531Z at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > 2020-05-19T10:34:18.3245325Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > 2020-05-19T10:34:18.3246086Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > 2020-05-19T10:34:18.3246765Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-19T10:34:18.3247390Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-19T10:34:18.3248012Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-19T10:34:18.3248779Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-19T10:34:18.3249417Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-19T10:34:18.3250357Z at >
[GitHub] [flink] flinkbot commented on pull request #12261: [FLINK-17823][network] Resolve the race condition while releasing RemoteInputChannel
flinkbot commented on pull request #12261: URL: https://github.com/apache/flink/pull/12261#issuecomment-631228177

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

## Automated Checks
Last check on commit 26afeb03aa30f84994a8aa85ca2d223d44672067 (Wed May 20 04:25:40 UTC 2020)

**Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

## Review Progress
* ❓ 1. The [description] looks good.
* ❓ 2. There is [consensus] that the contribution should go into Flink.
* ❓ 3. Needs [attention] from.
* ❓ 4. The change fits into the overall [architecture].
* ❓ 5. Overall code [quality] is good.

Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:
- `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until `architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-17823) Resolve the race condition while releasing RemoteInputChannel
[ https://issues.apache.org/jira/browse/FLINK-17823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-17823:
-----------------------------------
Labels: pull-request-available (was: )

> Resolve the race condition while releasing RemoteInputChannel
> -------------------------------------------------------------
>
> Key: FLINK-17823
> URL: https://issues.apache.org/jira/browse/FLINK-17823
> Project: Flink
> Issue Type: Bug
> Components: Runtime / Network
> Affects Versions: 1.11.0
> Reporter: Zhijiang
> Assignee: Zhijiang
> Priority: Blocker
> Labels: pull-request-available
> Fix For: 1.11.0
>
> RemoteInputChannel#releaseAllResources might be called by the canceler thread. Meanwhile, the task thread can also call RemoteInputChannel#getNextBuffer. This can cause two potential problems:
> * The task thread might get a null buffer after the canceler thread has already released all the buffers, which can cause a misleading NPE in getNextBuffer.
> * The task thread and the canceler thread might pull the same buffer concurrently, which causes an unexpected exception when the same buffer is recycled twice.
>
> The solution is to properly synchronize the buffer queue in the release method so that the same buffer cannot be pulled by both the canceler thread and the task thread. In the getNextBuffer method, we add explicit checks to avoid the misleading NPE and raise informative exceptions.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [flink] zhijiangW opened a new pull request #12261: [FLINK-17823][network] Resolve the race condition while releasing RemoteInputChannel
zhijiangW opened a new pull request #12261: URL: https://github.com/apache/flink/pull/12261

## What is the purpose of the change

`RemoteInputChannel#releaseAllResources` might be called by the canceler thread. Meanwhile, the task thread can also call `RemoteInputChannel#getNextBuffer`. This can cause two potential problems:

1. The task thread might get a null buffer after the canceler thread has already released all the buffers, which can cause a misleading NPE in `getNextBuffer`.
2. The task thread and the canceler thread might pull the same buffer concurrently, which causes an unexpected exception when the same buffer is recycled twice.

The solution is to properly synchronize the buffer queue in the release method so that the same buffer cannot be pulled by both the canceler thread and the task thread. In the `getNextBuffer` method, we add explicit checks to avoid the misleading NPE and raise informative exceptions.

## Brief change log

- Synchronize access to `receivedBuffers` in `RemoteInputChannel#releaseAllResources`
- Check the released state and throw proper exceptions in `RemoteInputChannel#getNextBuffer`

## Verifying this change

New unit test in `RemoteInputChannelTest#testConcurrentGetNextBufferAndRelease`

## Does this pull request potentially affect one of the following parts:

- Dependencies (does it add or upgrade a dependency): (yes / **no**)
- The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**)
- The serializers: (yes / **no** / don't know)
- The runtime per-record code paths (performance sensitive): (yes / **no** / don't know)
- Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
- The S3 file system connector: (yes / **no** / don't know)

## Documentation

- Does this pull request introduce a new feature? (yes / **no**)
- If yes, how is the feature documented? (**not applicable** / docs / JavaDocs / not documented)
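The synchronization pattern this PR describes can be sketched as follows. This is a minimal, hypothetical illustration — `Channel` and its `String` "buffers" are simplified stand-ins, not the actual `RemoteInputChannel` implementation:

```java
import java.util.ArrayDeque;

// Simplified sketch: release and getNext lock the same queue, so a buffer can
// never be handed out by the task thread and recycled by the canceler thread
// at the same time, and a drained queue fails with an explicit message
// instead of a misleading NPE.
class Channel {
    private final ArrayDeque<String> receivedBuffers = new ArrayDeque<>();
    private boolean released;

    void add(String buffer) {
        synchronized (receivedBuffers) {
            receivedBuffers.add(buffer);
        }
    }

    // Called by the canceler thread: drain and "recycle" under the lock.
    void releaseAllResources() {
        synchronized (receivedBuffers) {
            receivedBuffers.clear();
            released = true;
        }
    }

    // Called by the task thread: explicit state checks instead of returning
    // null and letting the caller hit an NPE later.
    String getNextBuffer() {
        synchronized (receivedBuffers) {
            if (released) {
                throw new IllegalStateException("Channel already released.");
            }
            String next = receivedBuffers.poll();
            if (next == null) {
                throw new IllegalStateException("Queue is empty in an unexpected state.");
            }
            return next;
        }
    }
}
```

The key design point is that both threads contend on the same monitor, so the released flag and the queue contents are always observed consistently.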
[jira] [Created] (FLINK-17823) Resolve the race condition while releasing RemoteInputChannel
Zhijiang created FLINK-17823:
-----------------------------
Summary: Resolve the race condition while releasing RemoteInputChannel
Key: FLINK-17823
URL: https://issues.apache.org/jira/browse/FLINK-17823
Project: Flink
Issue Type: Bug
Components: Runtime / Network
Affects Versions: 1.11.0
Reporter: Zhijiang
Assignee: Zhijiang
Fix For: 1.11.0

RemoteInputChannel#releaseAllResources might be called by the canceler thread. Meanwhile, the task thread can also call RemoteInputChannel#getNextBuffer. This can cause two potential problems:
* The task thread might get a null buffer after the canceler thread has already released all the buffers, which can cause a misleading NPE in getNextBuffer.
* The task thread and the canceler thread might pull the same buffer concurrently, which causes an unexpected exception when the same buffer is recycled twice.

The solution is to properly synchronize the buffer queue in the release method so that the same buffer cannot be pulled by both the canceler thread and the task thread. In the getNextBuffer method, we add explicit checks to avoid the misleading NPE and raise informative exceptions.
[GitHub] [flink] wuchong commented on pull request #11906: [FLINK-17356][jdbc][postgres] Support PK and Unique constraints
wuchong commented on pull request #11906: URL: https://github.com/apache/flink/pull/11906#issuecomment-631225241

Thanks @fpompermaier, this looks good to me in general. I added an IT case to verify that a GROUP BY query can be inserted into a Postgres catalog table with a primary key (this is the purpose of FLINK-17762). Besides, I slightly updated `getPrimaryKey` to make it return an optional constraint instead of a nullable constraint. I hope that's OK.
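The change from a nullable return to an optional one can be illustrated with a small, hypothetical sketch — `CatalogSketch` and its string-keyed map are illustrative stand-ins, not the actual JDBC catalog API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch: returning Optional instead of null makes the
// "table has no primary key" case explicit in the method signature.
class CatalogSketch {
    private final Map<String, String> primaryKeys = new HashMap<>();

    void register(String table, String pkColumn) {
        primaryKeys.put(table, pkColumn);
    }

    // Before: a nullable return the caller can silently forget to check.
    String getPrimaryKeyNullable(String table) {
        return primaryKeys.get(table);
    }

    // After: absence of a constraint is explicit, and the caller must
    // decide how to handle it (orElse, orElseThrow, ifPresent, ...).
    Optional<String> getPrimaryKey(String table) {
        return Optional.ofNullable(primaryKeys.get(table));
    }
}
```

The `Optional` variant pushes the "no constraint" decision to the call site, which is the usual motivation for this kind of API change.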
[GitHub] [flink] flinkbot edited a comment on pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…
flinkbot edited a comment on pull request #12230: URL: https://github.com/apache/flink/pull/12230#issuecomment-630205457

## CI report:
* 5b8eb4accc4478106f3e842ba18a1abc11194a43 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1845)
* 2f0ca570ff878cd12f999570590a08fa75efcc6b Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1903)

Bot commands
The @flinkbot bot supports the following commands:
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
[GitHub] [flink] flinkbot edited a comment on pull request #12240: [FLINK-15792][k8s] Make Flink logs accessible via kubectl logs per default
flinkbot edited a comment on pull request #12240: URL: https://github.com/apache/flink/pull/12240#issuecomment-630661048

## CI report:
* 7ae117dbf4d94f345f70d6f1e8cec97f71086a36 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1820)
* fc462938ff28feca6fd689f6e51e1fca79efe975 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1901)
[GitHub] [flink] flinkbot edited a comment on pull request #12246: [FLINK-17303][python] Return TableResult for Python TableEnvironment
flinkbot edited a comment on pull request #12246: URL: https://github.com/apache/flink/pull/12246#issuecomment-630803193

## CI report:
* 911e459fe53b61aa74ce3bc3d0761651eb7f61fb Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1893)
[GitHub] [flink] flinkbot commented on pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog
flinkbot commented on pull request #12260: URL: https://github.com/apache/flink/pull/12260#issuecomment-631223911

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

## Automated Checks
Last check on commit 7820729185644e576dc8d9c9204f2879a193cba0 (Wed May 20 04:08:12 UTC 2020)

**Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.
[jira] [Updated] (FLINK-17189) Table with processing time attribute can not be read from Hive catalog
[ https://issues.apache.org/jira/browse/FLINK-17189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-17189:
-----------------------------------
Labels: pull-request-available (was: )

> Table with processing time attribute can not be read from Hive catalog
> ----------------------------------------------------------------------
>
> Key: FLINK-17189
> URL: https://issues.apache.org/jira/browse/FLINK-17189
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / Ecosystem, Table SQL / Planner
> Affects Versions: 1.10.1
> Reporter: Timo Walther
> Assignee: Jingsong Lee
> Priority: Blocker
> Labels: pull-request-available
> Fix For: 1.11.0, 1.10.2
>
> DDL:
> {code}
> CREATE TABLE PROD_LINEITEM (
>   L_ORDERKEY INTEGER,
>   L_PARTKEY INTEGER,
>   L_SUPPKEY INTEGER,
>   L_LINENUMBER INTEGER,
>   L_QUANTITY DOUBLE,
>   L_EXTENDEDPRICE DOUBLE,
>   L_DISCOUNT DOUBLE,
>   L_TAX DOUBLE,
>   L_CURRENCY STRING,
>   L_RETURNFLAG STRING,
>   L_LINESTATUS STRING,
>   L_ORDERTIME TIMESTAMP(3),
>   L_SHIPINSTRUCT STRING,
>   L_SHIPMODE STRING,
>   L_COMMENT STRING,
>   WATERMARK FOR L_ORDERTIME AS L_ORDERTIME - INTERVAL '5' MINUTE,
>   L_PROCTIME AS PROCTIME()
> ) WITH (
>   'connector.type' = 'kafka',
>   'connector.version' = 'universal',
>   'connector.topic' = 'Lineitem',
>   'connector.properties.zookeeper.connect' = 'not-needed',
>   'connector.properties.bootstrap.servers' = 'kafka:9092',
>   'connector.startup-mode' = 'earliest-offset',
>   'format.type' = 'csv',
>   'format.field-delimiter' = '|'
> );
> {code}
> Query:
> {code}
> SELECT * FROM prod_lineitem;
> {code}
> Result:
> {code}
> [ERROR] Could not execute SQL statement.
Reason: > java.lang.AssertionError: Conversion to relational algebra failed to preserve > datatypes: > validated type: > RecordType(INTEGER L_ORDERKEY, INTEGER L_PARTKEY, INTEGER L_SUPPKEY, INTEGER > L_LINENUMBER, DOUBLE L_QUANTITY, DOUBLE L_EXTENDEDPRICE, DOUBLE L_DISCOUNT, > DOUBLE L_TAX, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_CURRENCY, > VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_RETURNFLAG, > VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_LINESTATUS, TIME > ATTRIBUTE(ROWTIME) L_ORDERTIME, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" > L_SHIPINSTRUCT, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_SHIPMODE, > VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_COMMENT, TIMESTAMP(3) NOT NULL > L_PROCTIME) NOT NULL > converted type: > RecordType(INTEGER L_ORDERKEY, INTEGER L_PARTKEY, INTEGER L_SUPPKEY, INTEGER > L_LINENUMBER, DOUBLE L_QUANTITY, DOUBLE L_EXTENDEDPRICE, DOUBLE L_DISCOUNT, > DOUBLE L_TAX, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_CURRENCY, > VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_RETURNFLAG, > VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_LINESTATUS, TIME > ATTRIBUTE(ROWTIME) L_ORDERTIME, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" > L_SHIPINSTRUCT, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_SHIPMODE, > VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_COMMENT, TIME > ATTRIBUTE(PROCTIME) NOT NULL L_PROCTIME) NOT NULL > rel: > LogicalProject(L_ORDERKEY=[$0], L_PARTKEY=[$1], L_SUPPKEY=[$2], > L_LINENUMBER=[$3], L_QUANTITY=[$4], L_EXTENDEDPRICE=[$5], L_DISCOUNT=[$6], > L_TAX=[$7], L_CURRENCY=[$8], L_RETURNFLAG=[$9], L_LINESTATUS=[$10], > L_ORDERTIME=[$11], L_SHIPINSTRUCT=[$12], L_SHIPMODE=[$13], L_COMMENT=[$14], > L_PROCTIME=[$15]) > LogicalWatermarkAssigner(rowtime=[L_ORDERTIME], watermark=[-($11, > 30:INTERVAL MINUTE)]) > LogicalProject(L_ORDERKEY=[$0], L_PARTKEY=[$1], L_SUPPKEY=[$2], > L_LINENUMBER=[$3], L_QUANTITY=[$4], L_EXTENDEDPRICE=[$5], L_DISCOUNT=[$6], > L_TAX=[$7], L_CURRENCY=[$8], L_RETURNFLAG=[$9], L_LINESTATUS=[$10], 
> L_ORDERTIME=[$11], L_SHIPINSTRUCT=[$12], L_SHIPMODE=[$13], L_COMMENT=[$14], > L_PROCTIME=[PROCTIME()]) > LogicalTableScan(table=[[hcat, default, prod_lineitem, source: > [KafkaTableSource(L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY, > L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_CURRENCY, L_RETURNFLAG, L_LINESTATUS, > L_ORDERTIME, L_SHIPINSTRUCT, L_SHIPMODE, L_COMMENT)]]]) > {code}
[GitHub] [flink] JingsongLi opened a new pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog
JingsongLi opened a new pull request #12260: URL: https://github.com/apache/flink/pull/12260

## What is the purpose of the change

```
CREATE TABLE PROD_LINEITEM (
  ...
  L_ORDERTIME TIMESTAMP(3),
  WATERMARK FOR L_ORDERTIME AS L_ORDERTIME - INTERVAL '5' MINUTE,
  L_PROCTIME AS PROCTIME()
) WITH (...)

SELECT * FROM prod_lineitem;
```

This fails with `AssertionError: Conversion to relational algebra failed to preserve datatypes`.

## Brief change log

`TableSourceUtil.getSourceRowType` should not only adjust the rowtime field from the watermark spec, but also adjust proctime fields from computed columns.

## Verifying this change

`HiveCatalogITCase.testReadWriteCsvWithProctime`

## Does this pull request potentially affect one of the following parts:

- Dependencies (does it add or upgrade a dependency): no
- The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no
- The serializers: no
- The runtime per-record code paths (performance sensitive): no
- Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
- The S3 file system connector: no

## Documentation

- Does this pull request introduce a new feature? no
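The idea behind the fix can be sketched with a hypothetical, heavily simplified example — `SourceRowTypeSketch` and the string-encoded types are illustrative only, not the actual `TableSourceUtil` code:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: when deriving the source row type, a computed
// PROCTIME() column must be given the proctime attribute type rather than
// plain TIMESTAMP(3), just as the watermark column is given the rowtime
// attribute type. Adjusting only the rowtime field (the bug) would leave the
// validated and converted row types inconsistent.
class SourceRowTypeSketch {
    static Map<String, String> adjust(Map<String, String> fields,
                                      String rowtimeField,
                                      Set<String> proctimeFields) {
        Map<String, String> adjusted = new LinkedHashMap<>();
        for (Map.Entry<String, String> field : fields.entrySet()) {
            if (field.getKey().equals(rowtimeField)) {
                adjusted.put(field.getKey(), "TIME ATTRIBUTE(ROWTIME)");
            } else if (proctimeFields.contains(field.getKey())) {
                adjusted.put(field.getKey(), "TIME ATTRIBUTE(PROCTIME)");
            } else {
                adjusted.put(field.getKey(), field.getValue());
            }
        }
        return adjusted;
    }
}
```

With both adjustments in place, `L_ORDERTIME` and `L_PROCTIME` from the example DDL would both carry time-attribute types, matching the validated type in the error message above.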
[jira] [Resolved] (FLINK-12030) KafkaITCase.testMultipleSourcesOnePartition is unstable: This server does not host this topic-partition
[ https://issues.apache.org/jira/browse/FLINK-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jiangjie Qin resolved FLINK-12030.
----------------------------------
Resolution: Fixed

Patch merged.
master: 51a0d42ade8ee3789036ac1ee7c121133b58212a
release-1.11: 0f072234d5cd30879b4e4845e69bee1a03cf1817

> KafkaITCase.testMultipleSourcesOnePartition is unstable: This server does not host this topic-partition
> -------------------------------------------------------------------------------------------------------
>
> Key: FLINK-12030
> URL: https://issues.apache.org/jira/browse/FLINK-12030
> Project: Flink
> Issue Type: Bug
> Components: Connectors / Kafka, Tests
> Affects Versions: 1.11.0
> Reporter: Aljoscha Krettek
> Assignee: Jiangjie Qin
> Priority: Critical
> Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
> This is a relevant part from the log:
> {code}
> 14:11:45,305 INFO  org.apache.flink.streaming.connectors.kafka.KafkaITCase - Test testMetricsAndEndOfStream(org.apache.flink.streaming.connectors.kafka.KafkaITCase) is running.
> 14:11:45,310 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase - === Writing sequence of 300 into testEndOfStream with p=1 ===
> 14:11:45,311 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase - Writing attempt #1
> 14:11:45,316 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl - Creating topic testEndOfStream-1
> 14:11:45,863 WARN  org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer - Property [transaction.timeout.ms] not specified. Setting it to 360 ms
> 14:11:45,910 WARN  org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer - Using AT_LEAST_ONCE semantic, but checkpointing is not enabled. Switching to NONE semantic.
> 14:11:45,921 INFO  org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer - Starting FlinkKafkaInternalProducer (1/1) to produce into default topic testEndOfStream-1
> 14:11:46,006 ERROR org.apache.flink.streaming.connectors.kafka.KafkaTestBase - Write attempt failed, trying again
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>     at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146)
>     at org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:638)
>     at org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:79)
>     at org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.writeSequence(KafkaConsumerTestBase.java:1918)
>     at org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runEndOfStreamTest(KafkaConsumerTestBase.java:1537)
>     at org.apache.flink.streaming.connectors.kafka.KafkaITCase.testMetricsAndEndOfStream(KafkaITCase.java:136)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>     at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.streaming.connectors.kafka.FlinkKafkaException: Failed to send data to Kafka: This server does not host this topic-partition.
>     at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.checkErroneous(FlinkKafkaProducer.java:1002)
>     at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.flush(FlinkKafkaProducer.java:787)
>     at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.close(FlinkKafkaProducer.java:658)
>     at org.apache.flink.api.common.functions.util.FunctionUtils.closeFunction(FunctionUtils.java:43)
[jira] [Updated] (FLINK-15303) support predicate pushdown for sources in hive connector
[ https://issues.apache.org/jira/browse/FLINK-15303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Danny Chen updated FLINK-15303:
-------------------------------
Fix Version/s: (was: 1.11.0)

> support predicate pushdown for sources in hive connector
> ---------------------------------------------------------
>
> Key: FLINK-15303
> URL: https://issues.apache.org/jira/browse/FLINK-15303
> Project: Flink
> Issue Type: New Feature
> Components: Connectors / Hive
> Reporter: Bowen Li
> Assignee: Jingsong Lee
> Priority: Major
[GitHub] [flink] becketqin commented on pull request #12255: [FLINK-12030][connector/kafka] Check the topic existence after topic creation using KafkaConsumer
becketqin commented on pull request #12255: URL: https://github.com/apache/flink/pull/12255#issuecomment-631223120

Patch merged.
master: 51a0d42ade8ee3789036ac1ee7c121133b58212a
release-1.11: 0f072234d5cd30879b4e4845e69bee1a03cf1817
[GitHub] [flink] becketqin closed pull request #12255: [FLINK-12030][connector/kafka] Check the topic existence after topic creation using KafkaConsumer
becketqin closed pull request #12255: URL: https://github.com/apache/flink/pull/12255
[jira] [Commented] (FLINK-17189) Table with processing time attribute can not be read from Hive catalog
[ https://issues.apache.org/jira/browse/FLINK-17189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111731#comment-17111731 ]

Jingsong Lee commented on FLINK-17189:
--------------------------------------
{{TableSourceUtil.getSourceRowType}} should not only adjust rowtime, but also adjust proctime fields. I will create a PR for fixing.
[jira] [Updated] (FLINK-17189) Table with processing time attribute can not be read from Hive catalog
[ https://issues.apache.org/jira/browse/FLINK-17189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jingsong Lee updated FLINK-17189:
---------------------------------
Affects Version/s: 1.10.1
[jira] [Assigned] (FLINK-17189) Table with processing time attribute can not be read from Hive catalog
[ https://issues.apache.org/jira/browse/FLINK-17189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jingsong Lee reassigned FLINK-17189:

    Assignee: Jingsong Lee

> Table with processing time attribute can not be read from Hive catalog
> ----------------------------------------------------------------------
>
>                 Key: FLINK-17189
>                 URL: https://issues.apache.org/jira/browse/FLINK-17189
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Ecosystem, Table SQL / Planner
>            Reporter: Timo Walther
>            Assignee: Jingsong Lee
>            Priority: Blocker
>             Fix For: 1.11.0, 1.10.2
>
> DDL:
> {code}
> CREATE TABLE PROD_LINEITEM (
>   L_ORDERKEY      INTEGER,
>   L_PARTKEY       INTEGER,
>   L_SUPPKEY       INTEGER,
>   L_LINENUMBER    INTEGER,
>   L_QUANTITY      DOUBLE,
>   L_EXTENDEDPRICE DOUBLE,
>   L_DISCOUNT      DOUBLE,
>   L_TAX           DOUBLE,
>   L_CURRENCY      STRING,
>   L_RETURNFLAG    STRING,
>   L_LINESTATUS    STRING,
>   L_ORDERTIME     TIMESTAMP(3),
>   L_SHIPINSTRUCT  STRING,
>   L_SHIPMODE      STRING,
>   L_COMMENT       STRING,
>   WATERMARK FOR L_ORDERTIME AS L_ORDERTIME - INTERVAL '5' MINUTE,
>   L_PROCTIME AS PROCTIME()
> ) WITH (
>   'connector.type' = 'kafka',
>   'connector.version' = 'universal',
>   'connector.topic' = 'Lineitem',
>   'connector.properties.zookeeper.connect' = 'not-needed',
>   'connector.properties.bootstrap.servers' = 'kafka:9092',
>   'connector.startup-mode' = 'earliest-offset',
>   'format.type' = 'csv',
>   'format.field-delimiter' = '|'
> );
> {code}
> Query:
> {code}
> SELECT * FROM prod_lineitem;
> {code}
> Result:
> {code}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.AssertionError: Conversion to relational algebra failed to preserve datatypes:
> validated type:
> RecordType(INTEGER L_ORDERKEY, INTEGER L_PARTKEY, INTEGER L_SUPPKEY, INTEGER L_LINENUMBER, DOUBLE L_QUANTITY, DOUBLE L_EXTENDEDPRICE, DOUBLE L_DISCOUNT, DOUBLE L_TAX, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_CURRENCY, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_RETURNFLAG, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_LINESTATUS, TIME ATTRIBUTE(ROWTIME) L_ORDERTIME, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_SHIPINSTRUCT, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_SHIPMODE, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_COMMENT, TIMESTAMP(3) NOT NULL L_PROCTIME) NOT NULL
> converted type:
> RecordType(INTEGER L_ORDERKEY, INTEGER L_PARTKEY, INTEGER L_SUPPKEY, INTEGER L_LINENUMBER, DOUBLE L_QUANTITY, DOUBLE L_EXTENDEDPRICE, DOUBLE L_DISCOUNT, DOUBLE L_TAX, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_CURRENCY, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_RETURNFLAG, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_LINESTATUS, TIME ATTRIBUTE(ROWTIME) L_ORDERTIME, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_SHIPINSTRUCT, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_SHIPMODE, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_COMMENT, TIME ATTRIBUTE(PROCTIME) NOT NULL L_PROCTIME) NOT NULL
> rel:
> LogicalProject(L_ORDERKEY=[$0], L_PARTKEY=[$1], L_SUPPKEY=[$2], L_LINENUMBER=[$3], L_QUANTITY=[$4], L_EXTENDEDPRICE=[$5], L_DISCOUNT=[$6], L_TAX=[$7], L_CURRENCY=[$8], L_RETURNFLAG=[$9], L_LINESTATUS=[$10], L_ORDERTIME=[$11], L_SHIPINSTRUCT=[$12], L_SHIPMODE=[$13], L_COMMENT=[$14], L_PROCTIME=[$15])
>   LogicalWatermarkAssigner(rowtime=[L_ORDERTIME], watermark=[-($11, 30:INTERVAL MINUTE)])
>     LogicalProject(L_ORDERKEY=[$0], L_PARTKEY=[$1], L_SUPPKEY=[$2], L_LINENUMBER=[$3], L_QUANTITY=[$4], L_EXTENDEDPRICE=[$5], L_DISCOUNT=[$6], L_TAX=[$7], L_CURRENCY=[$8], L_RETURNFLAG=[$9], L_LINESTATUS=[$10], L_ORDERTIME=[$11], L_SHIPINSTRUCT=[$12], L_SHIPMODE=[$13], L_COMMENT=[$14], L_PROCTIME=[PROCTIME()])
>       LogicalTableScan(table=[[hcat, default, prod_lineitem, source: [KafkaTableSource(L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY, L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_CURRENCY, L_RETURNFLAG, L_LINESTATUS, L_ORDERTIME, L_SHIPINSTRUCT, L_SHIPMODE, L_COMMENT)]]])
> {code}

-- This message was sent by Atlassian Jira (v8.3.4#803005)
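The stack trace above boils down to a schema round trip that drops the planner's PROCTIME time attribute: the catalog stores the computed column `L_PROCTIME AS PROCTIME()` as a plain `TIMESTAMP(3)`, while the planner re-derives a `TIME ATTRIBUTE(PROCTIME)` for the same field, and the post-conversion sanity check rejects the mismatch. The following is a minimal, hypothetical Python sketch of that failure mode — the function names and type strings are illustrative and are not Flink's or Calcite's actual API:

```python
# Hypothetical sketch of the FLINK-17189 failure mode -- NOT Flink/Calcite code.

def strip_time_attributes(row_type):
    """Simulate persisting a schema in a catalog that only knows plain SQL
    types: planner-internal time attributes collapse to their physical type."""
    physical = {
        "TIME ATTRIBUTE(PROCTIME) NOT NULL": "TIMESTAMP(3) NOT NULL",
        "TIME ATTRIBUTE(ROWTIME)": "TIMESTAMP(3)",
    }
    return {col: physical.get(t, t) for col, t in row_type.items()}

def check_preserved(validated, converted):
    """Simulate the post-conversion sanity check: every field of the validated
    row type must equal the converted one, otherwise raise."""
    for col in validated:
        if validated[col] != converted[col]:
            raise AssertionError(
                "Conversion to relational algebra failed to preserve "
                f"datatypes: {col}: {validated[col]} vs {converted[col]}"
            )

# What the planner derives for the scan (PROCTIME() re-applied):
converted = {
    "L_ORDERTIME": "TIME ATTRIBUTE(ROWTIME)",
    "L_PROCTIME": "TIME ATTRIBUTE(PROCTIME) NOT NULL",
}

# What the validator sees after the catalog round trip: the rowtime attribute
# survives (the WATERMARK clause re-creates it on read), but the proctime
# attribute was flattened to its storage type.
validated = dict(converted)
validated.update(strip_time_attributes({"L_PROCTIME": converted["L_PROCTIME"]}))

try:
    check_preserved(validated, converted)
except AssertionError as e:
    print(e)  # mirrors the assertion in the stack trace above
```

Anything that makes the two row types disagree field by field trips this assertion, which is why the single mismatched `L_PROCTIME` field fails the whole `SELECT *` query.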
[GitHub] [flink] flinkbot edited a comment on pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…
flinkbot edited a comment on pull request #12230: URL: https://github.com/apache/flink/pull/12230#issuecomment-630205457 ## CI report: * 5b8eb4accc4478106f3e842ba18a1abc11194a43 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1845) * 2f0ca570ff878cd12f999570590a08fa75efcc6b UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #12215: [FLINK-17565][k8s] Bump fabric8 version from 4.5.2 to 4.9.2
flinkbot edited a comment on pull request #12215: URL: https://github.com/apache/flink/pull/12215#issuecomment-630047332 ## CI report: * 906be78b0943a61b70d4624b95bad5479c9f3d92 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1896) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] yangyichao-mango commented on a change in pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…
yangyichao-mango commented on a change in pull request #12230: URL: https://github.com/apache/flink/pull/12230#discussion_r427725970 ## File path: docs/getting-started/index.zh.md ## @@ -27,54 +27,37 @@ specific language governing permissions and limitations under the License. --> -There are many ways to get started with Apache Flink. Which one is the best for -you depends on your goals and prior experience: +上手使用 Apache Flink 有很多方式,哪一个最适合你取决于你的目标和以前的经验。 -* take a look at the **Docker Playgrounds** if you want to see what Flink can do, via a hands-on, - docker-based introduction to specific Flink concepts -* explore one of the **Code Walkthroughs** if you want a quick, end-to-end - introduction to one of Flink's APIs -* work your way through the **Hands-on Training** for a comprehensive, - step-by-step introduction to Flink -* use **Project Setup** if you already know the basics of Flink and want a - project template for Java or Scala, or need help setting up the dependencies +* 通过阅读 **Docker Playgrounds** 小节中基于 Docker 的 Flink 实践来了解 Flink 的基本概念和功能。 +* 可以通过 **Code Walkthroughs** 小节快速了解 Flink API。 +* 可以通过 **Hands-on Training** 章节逐步全面的学习 Flink。 +* 如果你已经了解 Flink 的基本概念并且想构建 Flink 项目,可以通过**项目构建设置**小节获取 Java/Scala 的项目模板或项目依赖。 -### Taking a first look at Flink +### 初识 Flink -The **Docker Playgrounds** provide sandboxed Flink environments that are set up in just a few minutes and which allow you to explore and play with Flink. +通过 **Docker Playgrounds** 提供沙箱的Flink环境,你只需花几分钟做些简单设置,就可以开始探索和使用 Flink。 -* The [**Operations Playground**]({% link getting-started/docker-playgrounds/flink-operations-playground.md %}) shows you how to operate streaming applications with Flink. You can experience how Flink recovers application from failures, upgrade and scale streaming applications up and down, and query application metrics. 
+* [**Flink Operations Playground**](./docker-playgrounds/flink-operations-playground.html) 向你展示如何使用 Flink 编写数据流应用程序。你可以体验 Flink 如何从故障中恢复应用程序,升级、提高并行度、降低并行度和监控运行的状态指标等特性。 -### First steps with one of Flink's APIs +### Flink API 入门 -The **Code Walkthroughs** are a great way to get started quickly with a step-by-step introduction to -one of Flink's APIs. Each walkthrough provides instructions for bootstrapping a small skeleton -project, and then shows how to extend it to a simple application. +**代码练习**是快速入门的最佳方式,通过代码练习可以逐步深入地理解 Flink API。每个示例都演示了如何构建基础的 Flink 代码框架,并如何逐步将其扩展为简单的应用程序。 -* The [**DataStream API** code walkthrough]({% link getting-started/walkthroughs/datastream_api.md %}) shows how - to implement a simple DataStream application and how to extend it to be stateful and use timers. - The DataStream API is Flink's main abstraction for implementing stateful streaming applications - with sophisticated time semantics in Java or Scala. + +* [**DataStream API 示例**](./walkthroughs/datastream_api.html) 展示了如何实现一个基本的 DataStream 应用程序,并把它扩展成有状态的应用程序。DataStream API 是 Flink 的主要抽象,可用于在 Java 或 Scala 语言中实现具有复杂时间语义的有状态数据流处理的应用程序。 -* Flink's **Table API** is a relational API used for writing SQL-like queries in Java, Scala, or - Python, which are then automatically optimized, and can be executed on batch or streaming data - with identical syntax and semantics. The [Table API code walkthrough for Java and Scala]({% link - getting-started/walkthroughs/table_api.md %}) shows how to implement a simple Table API query on a - batch source and how to evolve it into a continuous query on a streaming source. There's also a - similar [code walkthrough for the Python Table API]({% link - getting-started/walkthroughs/python_table_api.md %}). 
+* **Table API** 是 Flink 的语言嵌入式关系 API,用于在 Java,Scala 或 Python 中编写类 SQL 的查询,并且这些查询会自动进行优化。Table API 查询可以使用一致的语法和语义同时在批处理或流数据上运行。[Table API code walkthrough for Java and Scala](./walkthroughs/table_api.html) 演示了如何在批处理中简单的使用 Table API 进行查询,以及如何将其扩展为流处理中的查询。Python Table API 同上 [code walkthrough for the Python Table API](./walkthroughs/python_table_api.html)。 -### Taking a Deep Dive with the Hands-on Training +### 通过实操进一步探索 Flink -The [**Hands-on Training**]({% link training/index.md %}) is a self-paced training course with -a set of lessons and hands-on exercises. This step-by-step introduction to Flink focuses -on learning how to use the DataStream API to meet the needs of common, real-world use cases, -and provides a complete introduction to the fundamental concepts: parallel dataflows, -stateful stream processing, event time and watermarking, and fault tolerance via state snapshots. +[Hands-on Training](/zh/training/index.html) 是一系列可供自主学习的练习课程。这些课程会循序渐进的介绍 Flink,包括如何使用 DataStream API 来满足常见的、真实的需求场景,并提供对 Flink 中并行数据流(parallel dataflows)、有状态流式处理(stateful stream processing)、Event Time、Watermarking、通过状态快照实现容错(fault tolerance via state snapshots)等基本概念的完整介绍。 Review comment: The external links here were kept consistent with the links in the old translation; if needed, I can try changing them to the new links from the original text. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] flinkbot edited a comment on pull request #11175: [FLINK-16197][hive] Failed to query partitioned table when partition …
flinkbot edited a comment on pull request #11175: URL: https://github.com/apache/flink/pull/11175#issuecomment-589671100 ## CI report: * f41f4359a68f8c9b85a33d3414bf346e02c17d6a Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1842) * 7cf8bc2371f60ce02daec08bda96b30e8ab94a32 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1900) Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] yangyichao-mango commented on a change in pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…
yangyichao-mango commented on a change in pull request #12230: URL: https://github.com/apache/flink/pull/12230#discussion_r427726043 ## File path: docs/getting-started/index.zh.md ## +[Hands-on Training](/zh/training/index.html) 是一系列可供自主学习的练习课程。这些课程会循序渐进的介绍 Flink,包括如何使用 DataStream API 来满足常见的、真实的需求场景,并提供对 Flink 中并行数据流(parallel dataflows)、有状态流式处理(stateful stream processing)、Event Time、Watermarking、通过状态快照实现容错(fault tolerance via state snapshots)等基本概念的完整介绍。 Review comment: Thanks for your hard work~ This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] yangyichao-mango commented on a change in pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…
yangyichao-mango commented on a change in pull request #12230: URL: https://github.com/apache/flink/pull/12230#discussion_r427725668 ## File path: docs/getting-started/index.zh.md ## +* **Table API** 是 Flink 的语言嵌入式关系 API,用于在 Java,Scala 或 Python 中编写类 SQL 的查询,并且这些查询会自动进行优化。Table API 查询可以使用一致的语法和语义同时在批处理或流数据上运行。[Table API code walkthrough for Java and Scala](./walkthroughs/table_api.html) 演示了如何在批处理中简单的使用 Table API 进行查询,以及如何将其扩展为流处理中的查询。Python Table API 同上 [code walkthrough for the Python Table API](./walkthroughs/python_table_api.html)。 Review comment: "语言嵌入式关系 API" is also from the old translation; I will re-translate it in the next commit, thanks~ This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] yangyichao-mango commented on a change in pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…
yangyichao-mango commented on a change in pull request #12230: URL: https://github.com/apache/flink/pull/12230#discussion_r427725466 ## File path: docs/getting-started/index.zh.md ##
[GitHub] [flink] yangyichao-mango commented on a change in pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…
yangyichao-mango commented on a change in pull request #12230: URL: https://github.com/apache/flink/pull/12230#discussion_r427725122 ## File path: docs/getting-started/index.zh.md ## +* [**Flink Operations Playground**](./docker-playgrounds/flink-operations-playground.html) 向你展示如何使用 Flink 编写数据流应用程序。你可以体验 Flink 如何从故障中恢复应用程序,升级、提高并行度、降低并行度和监控运行的状态指标等特性。 Review comment: This part is the old Chinese translation. Since the English changes in this issue do not touch it, I left the Chinese translation unchanged; if a re-translation is needed, I can translate it and submit a new commit. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] yangyichao-mango commented on a change in pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…
yangyichao-mango commented on a change in pull request #12230: URL: https://github.com/apache/flink/pull/12230#discussion_r427724564 ## File path: docs/getting-started/index.zh.md ## +* 通过阅读 **Docker Playgrounds** 小节中基于 Docker 的 Flink 实践来了解 Flink 的基本概念和功能。 Review comment: Yes, I think changing it to "阅读 XXX 可以 YYY" reads more smoothly; I will submit a new commit shortly. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-17565) Bump fabric8 version from 4.5.2 to 4.9.2
[ https://issues.apache.org/jira/browse/FLINK-17565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111718#comment-17111718 ] Yang Wang commented on FLINK-17565: --- I have upgraded the priority to critical so that it could be tracked in the 1.11 kanban[1]. It should be fixed before the release RC. [1]. [https://issues.apache.org/jira/secure/RapidBoard.jspa?rapidView=364=FLINK] > Bump fabric8 version from 4.5.2 to 4.9.2 > > > Key: FLINK-17565 > URL: https://issues.apache.org/jira/browse/FLINK-17565 > Project: Flink > Issue Type: Improvement > Components: Deployment / Kubernetes >Reporter: Canbin Zheng >Assignee: Canbin Zheng >Priority: Major > Labels: pull-request-available > Fix For: 1.11.0 > > > Currently, we are using a version of 4.5.2, it's better that we upgrade it to > 4.9.2, some of the reasons are as follows: > # It removed the use of reapers manually doing cascade deletion of > resources, leave it up to Kubernetes APIServer, which solves the issue of > FLINK-17566, more info: > [https://github.com/fabric8io/kubernetes-client/issues/1880] > # It introduced a regression in building Quantity values in 4.7.0, release > note [https://github.com/fabric8io/kubernetes-client/issues/1953]. > # It provided better support for K8s 1.17, release note: > [https://github.com/fabric8io/kubernetes-client/releases/tag/v4.7.0]. > For more release notes, please refer to [fabric8 > releases|https://github.com/fabric8io/kubernetes-client/releases]. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17565) Bump fabric8 version from 4.5.2 to 4.9.2
[ https://issues.apache.org/jira/browse/FLINK-17565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Wang updated FLINK-17565: -- Priority: Critical (was: Major) > Bump fabric8 version from 4.5.2 to 4.9.2 > > > Key: FLINK-17565 > URL: https://issues.apache.org/jira/browse/FLINK-17565 > Project: Flink > Issue Type: Improvement > Components: Deployment / Kubernetes >Reporter: Canbin Zheng >Assignee: Canbin Zheng >Priority: Critical > Labels: pull-request-available > Fix For: 1.11.0 > > > Currently, we are using a version of 4.5.2, it's better that we upgrade it to > 4.9.2, some of the reasons are as follows: > # It removed the use of reapers manually doing cascade deletion of > resources, leave it up to Kubernetes APIServer, which solves the issue of > FLINK-17566, more info: > [https://github.com/fabric8io/kubernetes-client/issues/1880] > # It introduced a regression in building Quantity values in 4.7.0, release > note [https://github.com/fabric8io/kubernetes-client/issues/1953]. > # It provided better support for K8s 1.17, release note: > [https://github.com/fabric8io/kubernetes-client/releases/tag/v4.7.0]. > For more release notes, please refer to [fabric8 > releases|https://github.com/fabric8io/kubernetes-client/releases]. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (FLINK-17565) Bump fabric8 version from 4.5.2 to 4.9.2
[ https://issues.apache.org/jira/browse/FLINK-17565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Wang updated FLINK-17565: -- Description: Currently, we are using a version of 4.5.2, it's better that we upgrade it to 4.9.2, some of the reasons are as follows: # It removed the use of reapers manually doing cascade deletion of resources, leave it up to Kubernetes APIServer, which solves the issue of FLINK-17566, more info: [https://github.com/fabric8io/kubernetes-client/issues/1880] # It introduced a regression in building Quantity values in 4.7.0, release note [https://github.com/fabric8io/kubernetes-client/issues/1953]. # It provided better support for K8s 1.17, release note: [https://github.com/fabric8io/kubernetes-client/releases/tag/v4.7.0]. For more release notes, please refer to [fabric8 releases|https://github.com/fabric8io/kubernetes-client/releases]. was: Currently, we are using a version of 4.5.2, it's better that we upgrade it to 4.9.1, some of the reasons are as follows: # It removed the use of reapers manually doing cascade deletion of resources, leave it up to Kubernetes APIServer, which solves the issue of FLINK-17566, more info: https://github.com/fabric8io/kubernetes-client/issues/1880 # It introduced a regression in building Quantity values in 4.7.0, release note https://github.com/fabric8io/kubernetes-client/issues/1953. # It provided better support for K8s 1.17, release note: https://github.com/fabric8io/kubernetes-client/releases/tag/v4.7.0. For more release notes, please refer to [fabric8 releases|https://github.com/fabric8io/kubernetes-client/releases]. 
> Bump fabric8 version from 4.5.2 to 4.9.2 > > > Key: FLINK-17565 > URL: https://issues.apache.org/jira/browse/FLINK-17565 > Project: Flink > Issue Type: Improvement > Components: Deployment / Kubernetes >Reporter: Canbin Zheng >Assignee: Canbin Zheng >Priority: Major > Labels: pull-request-available > Fix For: 1.11.0 > > > Currently, we are using a version of 4.5.2, it's better that we upgrade it to > 4.9.2, some of the reasons are as follows: > # It removed the use of reapers manually doing cascade deletion of > resources, leave it up to Kubernetes APIServer, which solves the issue of > FLINK-17566, more info: > [https://github.com/fabric8io/kubernetes-client/issues/1880] > # It introduced a regression in building Quantity values in 4.7.0, release > note [https://github.com/fabric8io/kubernetes-client/issues/1953]. > # It provided better support for K8s 1.17, release note: > [https://github.com/fabric8io/kubernetes-client/releases/tag/v4.7.0]. > For more release notes, please refer to [fabric8 > releases|https://github.com/fabric8io/kubernetes-client/releases].
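For reference, the bump discussed above amounts to a one-line Maven version change on the fabric8 dependency. A minimal sketch — the `io.fabric8:kubernetes-client` coordinates are real, but the exact pom location inside Flink's `flink-kubernetes` module is an assumption:

```xml
<!-- Sketch of the version bump (pom location assumed, e.g. flink-kubernetes/pom.xml) -->
<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>kubernetes-client</artifactId>
    <!-- was 4.5.2; 4.9.x delegates cascading deletion to the Kubernetes APIServer -->
    <version>4.9.2</version>
</dependency>
```

Because the client is shaded into Flink's distribution, any new transitive dependencies of the bumped version also have to be accounted for in the shading configuration (as the PR discussion below shows).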
[GitHub] [flink] flinkbot edited a comment on pull request #12240: [FLINK-15792][k8s] Make Flink logs accessible via kubectl logs per default
flinkbot edited a comment on pull request #12240: URL: https://github.com/apache/flink/pull/12240#issuecomment-630661048 ## CI report: * 7ae117dbf4d94f345f70d6f1e8cec97f71086a36 Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1820) * fc462938ff28feca6fd689f6e51e1fca79efe975 UNKNOWN Bot commands The @flinkbot bot supports the following commands: - `@flinkbot run travis` re-run the last Travis build - `@flinkbot run azure` re-run the last Azure build This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-17565) Bump fabric8 version from 4.5.2 to 4.9.2
[ https://issues.apache.org/jira/browse/FLINK-17565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Wang updated FLINK-17565: -- Summary: Bump fabric8 version from 4.5.2 to 4.9.2 (was: Bump fabric8 version from 4.5.2 to 4.9.1) > Bump fabric8 version from 4.5.2 to 4.9.2 > > > Key: FLINK-17565 > URL: https://issues.apache.org/jira/browse/FLINK-17565 > Project: Flink > Issue Type: Improvement > Components: Deployment / Kubernetes >Reporter: Canbin Zheng >Assignee: Canbin Zheng >Priority: Major > Labels: pull-request-available > Fix For: 1.11.0 > > > Currently, we are using a version of 4.5.2, it's better that we upgrade it to > 4.9.1, some of the reasons are as follows: > # It removed the use of reapers manually doing cascade deletion of resources, > leave it up to Kubernetes APIServer, which solves the issue of FLINK-17566, > more info: https://github.com/fabric8io/kubernetes-client/issues/1880 > # It introduced a regression in building Quantity values in 4.7.0, release > note https://github.com/fabric8io/kubernetes-client/issues/1953. > # It provided better support for K8s 1.17, release note: > https://github.com/fabric8io/kubernetes-client/releases/tag/v4.7.0. > For more release notes, please refer to [fabric8 > releases|https://github.com/fabric8io/kubernetes-client/releases].
[jira] [Commented] (FLINK-17351) CheckpointCoordinator and CheckpointFailureManager ignores checkpoint timeouts
[ https://issues.apache.org/jira/browse/FLINK-17351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17111716#comment-17111716 ] Yuan Mei commented on FLINK-17351: -- Thanks for the pointers [~roman_khachatryan]. I had quite a nice walk ;) I guess the fix is simple: increase `continuousFailureCounter` for the `CHECKPOINT_EXPIRED` exception as well. However, most of the checkpoint failure reasons in that list are ignored. Hence I am wondering what the criteria are for what should be ignored and what should not. > CheckpointCoordinator and CheckpointFailureManager ignores checkpoint timeouts > -- > > Key: FLINK-17351 > URL: https://issues.apache.org/jira/browse/FLINK-17351 > Project: Flink > Issue Type: Bug > Components: Runtime / Checkpointing >Affects Versions: 1.9.2, 1.10.0 >Reporter: Piotr Nowojski >Priority: Critical > Fix For: 1.11.0 > > > As described in point 2: > https://issues.apache.org/jira/browse/FLINK-17327?focusedCommentId=17090576&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17090576 > (copy of description from above linked comment): > The logic in how {{CheckpointCoordinator}} handles checkpoint timeouts is > broken. In your [~qinjunjerry] examples, your job should have failed after the > first checkpoint failure, but checkpoints were timing out on the > CheckpointCoordinator after 5 seconds, before {{FlinkKafkaProducer}} detected > the Kafka failure after 2 minutes. Those timeouts were not checked > against the {{setTolerableCheckpointFailureNumber(...)}} limit, so the job > kept going with many timed-out checkpoints. Now a funny thing happens: > FlinkKafkaProducer detects the Kafka failure. The funny thing is that it depends > where the failure was detected: > a) on processing a record? no problem, the job will fail over immediately once > the failure is detected (in this example after 2 minutes) > b) on a checkpoint?
heh, the failure is reported to {{CheckpointCoordinator}} > *and gets ignored, as the PendingCheckpoint was already discarded 2 minutes > ago* :) So theoretically the checkpoints can keep failing forever and the job > will not restart automatically, unless something else fails. > Even funnier things can happen if we mix FLINK-17350 or b) with an > intermittent external system failure. The Sink reports an exception, the transaction > is lost/aborted, the Sink is in a failed state, but if by a happy > coincidence it manages to accept further records, this exception can be > lost and all of the records in those failed checkpoints will be lost forever > as well. In none of the examples that [~qinjunjerry] posted did this > happen. {{FlinkKafkaProducer}} was not able to recover after the initial > failure and kept throwing exceptions until the job finally failed (but > much later than it should have). And that's not guaranteed anywhere.
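The fix proposed in the comment above — counting `CHECKPOINT_EXPIRED` toward the tolerable-failure limit instead of silently ignoring it — can be sketched as follows. This is an illustrative, stand-alone sketch: the class, enum values, and method names are assumptions for this example, not Flink's actual `CheckpointFailureManager` API.

```java
// Hypothetical sketch of consecutive-failure counting where checkpoint
// timeouts (CHECKPOINT_EXPIRED) count toward the tolerable limit.
public class CheckpointFailureCounterSketch {

    public enum FailureReason {
        CHECKPOINT_DECLINED,
        CHECKPOINT_EXPIRED,
        JOB_FAILOVER_REGION
    }

    private final int tolerableFailureNumber;
    private int continuousFailureCounter;

    public CheckpointFailureCounterSketch(int tolerableFailureNumber) {
        this.tolerableFailureNumber = tolerableFailureNumber;
    }

    /** Records a failed checkpoint; returns true if the job should now fail. */
    public boolean onCheckpointFailure(FailureReason reason) {
        switch (reason) {
            case CHECKPOINT_DECLINED:
            case CHECKPOINT_EXPIRED: // the proposed change: timeouts count too
                continuousFailureCounter++;
                break;
            default:
                // other reasons (e.g. failures caused by failover) stay ignored
                break;
        }
        return continuousFailureCounter > tolerableFailureNumber;
    }

    /** A completed checkpoint resets the consecutive-failure count. */
    public void onCheckpointSuccess() {
        continuousFailureCounter = 0;
    }
}
```

With this counting, a job whose checkpoints keep timing out eventually exceeds the tolerable limit and fails over, instead of running forever with expired checkpoints.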
[GitHub] [flink] wangyang0918 commented on pull request #12215: [FLINK-17565][k8s] Bump fabric8 version from 4.5.2 to 4.9.2
wangyang0918 commented on pull request #12215: URL: https://github.com/apache/flink/pull/12215#issuecomment-631210861 @zhengcanbin Thanks a lot for creating this PR. I am afraid this PR could not work because the new version introduces some additional dependencies (e.g. `com.fasterxml.jackson.datatype:jackson-datatype-jsr310`). Could you please check that? ``` 2020-05-18 14:22:19,882 INFO org.apache.flink.client.deployment.DefaultClusterClientServiceLoader [] - Could not load factory due to missing dependencies. Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/flink/kubernetes/shaded/com/fasterxml/jackson/datatype/jsr310/JavaTimeModule at io.fabric8.kubernetes.client.internal.KubeConfigUtils.parseConfigFromString(KubeConfigUtils.java:44) at io.fabric8.kubernetes.client.Config.loadFromKubeconfig(Config.java:505) at io.fabric8.kubernetes.client.Config.tryKubeConfig(Config.java:491) at io.fabric8.kubernetes.client.Config.autoConfigure(Config.java:230) at io.fabric8.kubernetes.client.Config.<init>(Config.java:214) at io.fabric8.kubernetes.client.Config.autoConfigure(Config.java:225) at org.apache.flink.kubernetes.kubeclient.KubeClientFactory.fromConfiguration(KubeClientFactory.java:69) at org.apache.flink.kubernetes.KubernetesClusterClientFactory.createClusterDescriptor(KubernetesClusterClientFactory.java:58) at org.apache.flink.kubernetes.KubernetesClusterClientFactory.createClusterDescriptor(KubernetesClusterClientFactory.java:39) at org.apache.flink.kubernetes.cli.KubernetesSessionCli.run(KubernetesSessionCli.java:95) at org.apache.flink.kubernetes.cli.KubernetesSessionCli.lambda$main$0(KubernetesSessionCli.java:185) at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30) at org.apache.flink.kubernetes.cli.KubernetesSessionCli.main(KubernetesSessionCli.java:185) Caused by: java.lang.ClassNotFoundException: org.apache.flink.kubernetes.shaded.com.fasterxml.jackson.datatype.jsr310.JavaTimeModule at
java.net.URLClassLoader.findClass(URLClassLoader.java:382) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ... 13 more ```
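The `NoClassDefFoundError` above shows a class expected under the `org.apache.flink.kubernetes.shaded` relocation that was never bundled. One plausible remedy — a sketch of the general approach, not necessarily the fix this PR ended up using, and with an assumed version number — is to declare the new transitive dependency explicitly so the shade plugin includes and relocates it:

```xml
<!-- Sketch: make the new transitive dependency explicit so it ends up under
     the org.apache.flink.kubernetes.shaded relocation (version assumed) -->
<dependency>
    <groupId>com.fasterxml.jackson.datatype</groupId>
    <artifactId>jackson-datatype-jsr310</artifactId>
    <version>2.10.1</version>
</dependency>
```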
[GitHub] [flink] klion26 commented on a change in pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…
klion26 commented on a change in pull request #12230: URL: https://github.com/apache/flink/pull/12230#discussion_r427718620 ## File path: docs/getting-started/index.zh.md ## @@ -27,54 +27,37 @@ specific language governing permissions and limitations under the License. --> -There are many ways to get started with Apache Flink. Which one is the best for -you depends on your goals and prior experience: +上手使用 Apache Flink 有很多方式,哪一个最适合你取决于你的目标和以前的经验。 -* take a look at the **Docker Playgrounds** if you want to see what Flink can do, via a hands-on, - docker-based introduction to specific Flink concepts -* explore one of the **Code Walkthroughs** if you want a quick, end-to-end - introduction to one of Flink's APIs -* work your way through the **Hands-on Training** for a comprehensive, - step-by-step introduction to Flink -* use **Project Setup** if you already know the basics of Flink and want a - project template for Java or Scala, or need help setting up the dependencies +* 通过阅读 **Docker Playgrounds** 小节中基于 Docker 的 Flink 实践来了解 Flink 的基本概念和功能。 +* 可以通过 **Code Walkthroughs** 小节快速了解 Flink API。 +* 可以通过 **Hands-on Training** 章节逐步全面的学习 Flink。 +* 如果你已经了解 Flink 的基本概念并且想构建 Flink 项目,可以通过**项目构建设置**小节获取 Java/Scala 的项目模板或项目依赖。 -### Taking a first look at Flink +### 初识 Flink -The **Docker Playgrounds** provide sandboxed Flink environments that are set up in just a few minutes and which allow you to explore and play with Flink. +通过 **Docker Playgrounds** 提供沙箱的Flink环境,你只需花几分钟做些简单设置,就可以开始探索和使用 Flink。 -* The [**Operations Playground**]({% link getting-started/docker-playgrounds/flink-operations-playground.md %}) shows you how to operate streaming applications with Flink. You can experience how Flink recovers application from failures, upgrade and scale streaming applications up and down, and query application metrics. 
+* [**Flink Operations Playground**](./docker-playgrounds/flink-operations-playground.html) 向你展示如何使用 Flink 编写数据流应用程序。你可以体验 Flink 如何从故障中恢复应用程序,升级、提高并行度、降低并行度和监控运行的状态指标等特性。 -### First steps with one of Flink's APIs +### Flink API 入门 -The **Code Walkthroughs** are a great way to get started quickly with a step-by-step introduction to -one of Flink's APIs. Each walkthrough provides instructions for bootstrapping a small skeleton -project, and then shows how to extend it to a simple application. +**代码练习**是快速入门的最佳方式,通过代码练习可以逐步深入地理解 Flink API。每个示例都演示了如何构建基础的 Flink 代码框架,并如何逐步将其扩展为简单的应用程序。 -* The [**DataStream API** code walkthrough]({% link getting-started/walkthroughs/datastream_api.md %}) shows how - to implement a simple DataStream application and how to extend it to be stateful and use timers. - The DataStream API is Flink's main abstraction for implementing stateful streaming applications - with sophisticated time semantics in Java or Scala. + +* [**DataStream API 示例**](./walkthroughs/datastream_api.html) 展示了如何实现一个基本的 DataStream 应用程序,并把它扩展成有状态的应用程序。DataStream API 是 Flink 的主要抽象,可用于在 Java 或 Scala 语言中实现具有复杂时间语义的有状态数据流处理的应用程序。 -* Flink's **Table API** is a relational API used for writing SQL-like queries in Java, Scala, or - Python, which are then automatically optimized, and can be executed on batch or streaming data - with identical syntax and semantics. The [Table API code walkthrough for Java and Scala]({% link - getting-started/walkthroughs/table_api.md %}) shows how to implement a simple Table API query on a - batch source and how to evolve it into a continuous query on a streaming source. There's also a - similar [code walkthrough for the Python Table API]({% link - getting-started/walkthroughs/python_table_api.md %}). 
+* **Table API** 是 Flink 的语言嵌入式关系 API,用于在 Java,Scala 或 Python 中编写类 SQL 的查询,并且这些查询会自动进行优化。Table API 查询可以使用一致的语法和语义同时在批处理或流数据上运行。[Table API code walkthrough for Java and Scala](./walkthroughs/table_api.html) 演示了如何在批处理中简单的使用 Table API 进行查询,以及如何将其扩展为流处理中的查询。Python Table API 同上 [code walkthrough for the Python Table API](./walkthroughs/python_table_api.html)。 -### Taking a Deep Dive with the Hands-on Training +### 通过实操进一步探索 Flink -The [**Hands-on Training**]({% link training/index.md %}) is a self-paced training course with -a set of lessons and hands-on exercises. This step-by-step introduction to Flink focuses -on learning how to use the DataStream API to meet the needs of common, real-world use cases, -and provides a complete introduction to the fundamental concepts: parallel dataflows, -stateful stream processing, event time and watermarking, and fault tolerance via state snapshots. +[Hands-on Training](/zh/training/index.html) 是一系列可供自主学习的练习课程。这些课程会循序渐进的介绍 Flink,包括如何使用 DataStream API 来满足常见的、真实的需求场景,并提供对 Flink 中并行数据流(parallel dataflows)、有状态流式处理(stateful stream processing)、Event Time、Watermarking、通过状态快照实现容错(fault tolerance via state snapshots)等基本概念的完整介绍。 Review comment: 为什么要修改这个 外链 的形式呢? ## File path: docs/getting-started/index.zh.md ## @@ -27,54 +27,37 @@ specific language
[jira] [Commented] (FLINK-17821) Kafka010TableITCase>KafkaTableTestBase.testKafkaSourceSink failed on AZP
[ https://issues.apache.org/jira/browse/FLINK-17821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17111711#comment-17111711 ] Lijie Wang commented on FLINK-17821: Does this duplicate https://issues.apache.org/jira/browse/FLINK-12030 ? > Kafka010TableITCase>KafkaTableTestBase.testKafkaSourceSink failed on AZP > > > Key: FLINK-17821 > URL: https://issues.apache.org/jira/browse/FLINK-17821 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.12.0 >Reporter: Zhu Zhu >Priority: Critical > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1871&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8&l=12032 > 2020-05-19T16:29:40.7239430Z Test testKafkaSourceSink[legacy = false, topicId > = 1](org.apache.flink.streaming.connectors.kafka.table.Kafka010TableITCase) > failed with: > 2020-05-19T16:29:40.7240291Z java.util.concurrent.ExecutionException: > org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-05-19T16:29:40.7241033Z at > java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) > 2020-05-19T16:29:40.7241542Z at > java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) > 2020-05-19T16:29:40.7242127Z at > org.apache.flink.table.planner.runtime.utils.TableEnvUtil$.execInsertSqlAndWaitResult(TableEnvUtil.scala:31) > 2020-05-19T16:29:40.7242729Z at > org.apache.flink.table.planner.runtime.utils.TableEnvUtil.execInsertSqlAndWaitResult(TableEnvUtil.scala) > 2020-05-19T16:29:40.7243239Z at > org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestBase.testKafkaSourceSink(KafkaTableTestBase.java:145) > 2020-05-19T16:29:40.7243691Z at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2020-05-19T16:29:40.7244273Z at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2020-05-19T16:29:40.7244729Z at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2020-05-19T16:29:40.7245117Z at > java.lang.reflect.Method.invoke(Method.java:498) > 2020-05-19T16:29:40.7245515Z at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2020-05-19T16:29:40.7245956Z at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2020-05-19T16:29:40.7246419Z at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2020-05-19T16:29:40.7246870Z at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2020-05-19T16:29:40.7247287Z at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > 2020-05-19T16:29:40.7251320Z at > org.junit.rules.RunRules.evaluate(RunRules.java:20) > 2020-05-19T16:29:40.7251833Z at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > 2020-05-19T16:29:40.7252251Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > 2020-05-19T16:29:40.7252716Z at > 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > 2020-05-19T16:29:40.7253117Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-19T16:29:40.7253502Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-19T16:29:40.7254041Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-19T16:29:40.7254528Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-19T16:29:40.7255500Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-19T16:29:40.7256064Z at > org.junit.runners.ParentRunner.run(ParentRunner.java:363) > 2020-05-19T16:29:40.7256438Z at > org.junit.runners.Suite.runChild(Suite.java:128) > 2020-05-19T16:29:40.7256758Z at > org.junit.runners.Suite.runChild(Suite.java:27) > 2020-05-19T16:29:40.7257118Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-19T16:29:40.7257486Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-19T16:29:40.7257885Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-19T16:29:40.7258389Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-19T16:29:40.7258821Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-19T16:29:40.7259219Z at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > 2020-05-19T16:29:40.7259664Z at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > 2020-05-19T16:29:40.7260098Z at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) >
[GitHub] [flink] flinkbot edited a comment on pull request #12215: [FLINK-17565][k8s] Bump fabric8 version from 4.5.2 to 4.9.2
flinkbot edited a comment on pull request #12215: URL: https://github.com/apache/flink/pull/12215#issuecomment-630047332 ## CI report: * 5f357acabcb13d64d8e9a042af14329415db0f87 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1708) * 906be78b0943a61b70d4624b95bad5479c9f3d92 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1896)
[GitHub] [flink] flinkbot edited a comment on pull request #12259: [hotfix][k8s] Remove unused constant variable
flinkbot edited a comment on pull request #12259: URL: https://github.com/apache/flink/pull/12259#issuecomment-631191345 ## CI report: * 1ee1aadd85244dccac74b71c63f21379195b112b Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1897)
[jira] [Comment Edited] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in Java 11
[ https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17111709#comment-17111709 ] Dian Fu edited comment on FLINK-17822 at 5/20/20, 3:08 AM: --- Several Java 11 tests in the same cron job failed with the same exception: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1887&view=logs&j=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f&t=d4549d78-6fab-5c0c-bdb9-abaafb66ea8b was (Author: dian.fu): It seems that all the Java 11 tests in the same cron job failed with this exception: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1887&view=logs&j=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f&t=d4549d78-6fab-5c0c-bdb9-abaafb66ea8b > Nightly Flink CLI end-to-end test failed with > "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class > jdk.internal.misc.SharedSecrets" in Java 11 > -- > > Key: FLINK-17822 > URL: https://issues.apache.org/jira/browse/FLINK-17822 > Project: Flink > Issue Type: Bug > Components: Runtime / Task, Tests >Affects Versions: 1.11.0 >Reporter: Dian Fu >Priority: Major > Labels: test-stability > > Instance: > https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600 > {code} > 2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR > org.apache.flink.util.JavaGcCleanerWrapper [] - FATAL > UNEXPECTED - Failed to invoke waitForReferenceProcessing > 2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class > org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot > access class jdk.internal.misc.SharedSecrets (in module java.base) because > module java.base does not export jdk.internal.misc to unnamed module @54e3658c > 2020-05-19T21:59:39.8830707Z at > jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361) > ~[?:?] > 2020-05-19T21:59:39.8831166Z at > java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) > ~[?:?]
> 2020-05-19T21:59:39.8831744Z at > java.lang.reflect.Method.invoke(Method.java:558) ~[?:?] > 2020-05-19T21:59:39.8832596Z at > org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess(JavaGcCleanerWrapper.java:362) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8833667Z at > org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.tryRunPendingCleaners(JavaGcCleanerWrapper.java:351) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8834712Z at > org.apache.flink.util.JavaGcCleanerWrapper$CleanerManager.tryRunPendingCleaners(JavaGcCleanerWrapper.java:207) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8835686Z at > org.apache.flink.util.JavaGcCleanerWrapper.tryRunPendingCleaners(JavaGcCleanerWrapper.java:158) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8836652Z at > org.apache.flink.runtime.memory.UnsafeMemoryBudget.reserveMemory(UnsafeMemoryBudget.java:94) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8838033Z at > org.apache.flink.runtime.memory.UnsafeMemoryBudget.verifyEmpty(UnsafeMemoryBudget.java:64) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8839259Z at > org.apache.flink.runtime.memory.MemoryManager.verifyEmpty(MemoryManager.java:172) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8840148Z at > org.apache.flink.runtime.taskexecutor.slot.TaskSlot.verifyMemoryFreed(TaskSlot.java:311) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8841035Z at > org.apache.flink.runtime.taskexecutor.slot.TaskSlot.lambda$closeAsync$1(TaskSlot.java:301) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8841603Z at > java.util.concurrent.CompletableFuture.uniRunNow(CompletableFuture.java:815) > ~[?:?] 
> 2020-05-19T21:59:39.8842069Z at > java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:799) > ~[?:?] > 2020-05-19T21:59:39.8842844Z at > java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2121) > ~[?:?] > 2020-05-19T21:59:39.8843828Z at > org.apache.flink.runtime.taskexecutor.slot.TaskSlot.closeAsync(TaskSlot.java:300) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8844790Z at > org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlotInternal(TaskSlotTableImpl.java:404) > ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT] > 2020-05-19T21:59:39.8845754Z at >
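For errors of this shape on Java 11 — reflective access to a JDK-internal package that `java.base` does not export — a commonly used workaround is to export the package to the unnamed module via JVM options. A sketch using Flink's `env.java.opts` configuration option follows; whether this is the right remedy here, as opposed to changing `JavaGcCleanerWrapper` itself, is exactly what the ticket has to decide:

```yaml
# flink-conf.yaml sketch (Java 11+ only): export the internal package that
# JavaGcCleanerWrapper$PendingCleanersRunner reaches via reflection
env.java.opts: "--add-exports=java.base/jdk.internal.misc=ALL-UNNAMED"
```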
[GitHub] [flink] flinkbot edited a comment on pull request #11175: [FLINK-16197][hive] Failed to query partitioned table when partition …
flinkbot edited a comment on pull request #11175: URL: https://github.com/apache/flink/pull/11175#issuecomment-589671100 ## CI report: * f41f4359a68f8c9b85a33d3414bf346e02c17d6a Azure: [CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1842) * 7cf8bc2371f60ce02daec08bda96b30e8ab94a32 UNKNOWN
[jira] [Comment Edited] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in Java 11
[ https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111709#comment-17111709 ] Dian Fu edited comment on FLINK-17822 at 5/20/20, 3:02 AM:
-----------------------------------------------------------

It seems that all the Java 11 tests in the same cron job failed with this exception: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1887=logs=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f=d4549d78-6fab-5c0c-bdb9-abaafb66ea8b

was (Author: dian.fu):
It seems that all the Java 11 tests in the same cron job failed with this error: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1887=logs=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f=d4549d78-6fab-5c0c-bdb9-abaafb66ea8b

> Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in Java 11
> ---------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-17822
>                 URL: https://issues.apache.org/jira/browse/FLINK-17822
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Task, Tests
>    Affects Versions: 1.11.0
>            Reporter: Dian Fu
>            Priority: Major
>              Labels: test-stability
>
> Instance: https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600
> {code}
> 2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR org.apache.flink.util.JavaGcCleanerWrapper [] - FATAL UNEXPECTED - Failed to invoke waitForReferenceProcessing
> 2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets (in module java.base) because module java.base does not export jdk.internal.misc to unnamed module @54e3658c
> 2020-05-19T21:59:39.8830707Z 	at jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361) ~[?:?]
> 2020-05-19T21:59:39.8831166Z 	at java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) ~[?:?]
> 2020-05-19T21:59:39.8831744Z 	at java.lang.reflect.Method.invoke(Method.java:558) ~[?:?]
> 2020-05-19T21:59:39.8832596Z 	at org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess(JavaGcCleanerWrapper.java:362) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8833667Z 	at org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.tryRunPendingCleaners(JavaGcCleanerWrapper.java:351) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8834712Z 	at org.apache.flink.util.JavaGcCleanerWrapper$CleanerManager.tryRunPendingCleaners(JavaGcCleanerWrapper.java:207) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8835686Z 	at org.apache.flink.util.JavaGcCleanerWrapper.tryRunPendingCleaners(JavaGcCleanerWrapper.java:158) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8836652Z 	at org.apache.flink.runtime.memory.UnsafeMemoryBudget.reserveMemory(UnsafeMemoryBudget.java:94) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8838033Z 	at org.apache.flink.runtime.memory.UnsafeMemoryBudget.verifyEmpty(UnsafeMemoryBudget.java:64) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8839259Z 	at org.apache.flink.runtime.memory.MemoryManager.verifyEmpty(MemoryManager.java:172) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8840148Z 	at org.apache.flink.runtime.taskexecutor.slot.TaskSlot.verifyMemoryFreed(TaskSlot.java:311) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841035Z 	at org.apache.flink.runtime.taskexecutor.slot.TaskSlot.lambda$closeAsync$1(TaskSlot.java:301) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841603Z 	at java.util.concurrent.CompletableFuture.uniRunNow(CompletableFuture.java:815) ~[?:?]
> 2020-05-19T21:59:39.8842069Z 	at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:799) ~[?:?]
> 2020-05-19T21:59:39.8842844Z 	at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2121) ~[?:?]
> 2020-05-19T21:59:39.8843828Z 	at org.apache.flink.runtime.taskexecutor.slot.TaskSlot.closeAsync(TaskSlot.java:300) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8844790Z 	at org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlotInternal(TaskSlotTableImpl.java:404) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8845754Z 	at org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlot(TaskSlotTableImpl.java:365) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8846842Z 	at org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlotInternal(TaskExecutor.java:1589) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8847711Z 	at org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlot(TaskExecutor.java:967) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8848295Z 	at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
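The IllegalAccessException in the trace above comes from the JDK 9+ module system: `java.base` does not export `jdk.internal.misc` to the unnamed module, so the reflective call that `JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess` makes via `Method.invoke` is rejected. A minimal standalone sketch of the same failure mode (the probe class name here is hypothetical; behavior assumes a stock JDK 9+ launched without `--add-exports java.base/jdk.internal.misc=ALL-UNNAMED`):

```java
import java.lang.reflect.Method;

public class SharedSecretsProbe {
    // Reflectively calls jdk.internal.misc.SharedSecrets.getJavaLangRefAccess(),
    // mirroring what JavaGcCleanerWrapper does, and reports what happens.
    static String probe() {
        try {
            Class<?> secrets = Class.forName("jdk.internal.misc.SharedSecrets");
            Method m = secrets.getMethod("getJavaLangRefAccess");
            m.invoke(null); // access check happens here; caller is in the unnamed module
            return "ok";
        } catch (ReflectiveOperationException e) {
            // JDK 9-11: IllegalAccessException (jdk.internal.misc is not exported)
            // JDK 12+:  ClassNotFoundException (the class moved to jdk.internal.access)
            return e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(probe());
    }
}
```

Running this on a default-configured modern JDK never prints "ok"; passing the `--add-exports` flag above (or opening the package) is what makes such internal reflective access legal again.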
[jira] [Commented] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in Java 11
[ https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111709#comment-17111709 ] Dian Fu commented on FLINK-17822: It seems that all the Java 11 tests in the same cron job failed with this error: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1887=logs=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f=d4549d78-6fab-5c0c-bdb9-abaafb66ea8b
[jira] [Updated] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in Java 11
[ https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dian Fu updated FLINK-17822: Summary: Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in Java 11 (was: Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in JDK 11)
[jira] [Updated] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in JDK 11
[ https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dian Fu updated FLINK-17822: Component/s: Runtime / Task
[jira] [Updated] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in JDK 11
[ https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dian Fu updated FLINK-17822: Component/s: Tests
[jira] [Updated] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in JDK 11
[ https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dian Fu updated FLINK-17822: Labels: test-stability (was: )
[jira] [Updated] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in JDK 11
[ https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dian Fu updated FLINK-17822: Affects Version/s: 1.11.0
[jira] [Updated] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in JDK 11
[ https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dian Fu updated FLINK-17822: Summary: Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in JDK 11 (was: Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in JDK 11)
[jira] [Created] (FLINK-17822) Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in JDK 11
Dian Fu created FLINK-17822:
---
Summary: Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in JDK 11
Key: FLINK-17822
URL: https://issues.apache.org/jira/browse/FLINK-17822
Project: Flink
Issue Type: Bug
Reporter: Dian Fu

Instance: https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600

{code}
2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR org.apache.flink.util.JavaGcCleanerWrapper [] - FATAL UNEXPECTED - Failed to invoke waitForReferenceProcessing
2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets (in module java.base) because module java.base does not export jdk.internal.misc to unnamed module @54e3658c
2020-05-19T21:59:39.8830707Z    at jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361) ~[?:?]
2020-05-19T21:59:39.8831166Z    at java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) ~[?:?]
2020-05-19T21:59:39.8831744Z    at java.lang.reflect.Method.invoke(Method.java:558) ~[?:?]
2020-05-19T21:59:39.8832596Z    at org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess(JavaGcCleanerWrapper.java:362) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8833667Z    at org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.tryRunPendingCleaners(JavaGcCleanerWrapper.java:351) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8834712Z    at org.apache.flink.util.JavaGcCleanerWrapper$CleanerManager.tryRunPendingCleaners(JavaGcCleanerWrapper.java:207) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8835686Z    at org.apache.flink.util.JavaGcCleanerWrapper.tryRunPendingCleaners(JavaGcCleanerWrapper.java:158) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8836652Z    at org.apache.flink.runtime.memory.UnsafeMemoryBudget.reserveMemory(UnsafeMemoryBudget.java:94) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8838033Z    at org.apache.flink.runtime.memory.UnsafeMemoryBudget.verifyEmpty(UnsafeMemoryBudget.java:64) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8839259Z    at org.apache.flink.runtime.memory.MemoryManager.verifyEmpty(MemoryManager.java:172) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8840148Z    at org.apache.flink.runtime.taskexecutor.slot.TaskSlot.verifyMemoryFreed(TaskSlot.java:311) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8841035Z    at org.apache.flink.runtime.taskexecutor.slot.TaskSlot.lambda$closeAsync$1(TaskSlot.java:301) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8841603Z    at java.util.concurrent.CompletableFuture.uniRunNow(CompletableFuture.java:815) ~[?:?]
2020-05-19T21:59:39.8842069Z    at java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:799) ~[?:?]
2020-05-19T21:59:39.8842844Z    at java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2121) ~[?:?]
2020-05-19T21:59:39.8843828Z    at org.apache.flink.runtime.taskexecutor.slot.TaskSlot.closeAsync(TaskSlot.java:300) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8844790Z    at org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlotInternal(TaskSlotTableImpl.java:404) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8845754Z    at org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlot(TaskSlotTableImpl.java:365) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8846842Z    at org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlotInternal(TaskExecutor.java:1589) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8847711Z    at org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlot(TaskExecutor.java:967) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8848295Z    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
2020-05-19T21:59:39.8848732Z    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
2020-05-19T21:59:39.8849228Z    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
2020-05-19T21:59:39.8849669Z    at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
2020-05-19T21:59:39.8850656Z    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:284) ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8851589Z
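The failure above is plain JDK reflection mechanics: under JDK 11 the module system denies the reflective access to `jdk.internal.misc.SharedSecrets`, which surfaces as `IllegalAccessException`. As an illustration only (this is not Flink code), the same exception type can be provoked on any JDK by reading a private field without `setAccessible(true)`; a commonly suggested workaround for the module-system variant is a JVM flag such as `--add-exports java.base/jdk.internal.misc=ALL-UNNAMED`, though whether the test setup should require that flag is a separate question.

```java
import java.lang.reflect.Field;

public class ReflectiveAccessDemo {

    // Attempt the kind of reflective read that fails in FLINK-17822: accessing a
    // member the caller is not allowed to see. Here we read a private field of
    // Throwable without setAccessible(true); the access check rejects it with
    // IllegalAccessException, the same exception type thrown when java.base
    // refuses to export jdk.internal.misc to the unnamed module.
    static String tryPrivateFieldRead() {
        try {
            Field f = Throwable.class.getDeclaredField("detailMessage");
            f.get(new Throwable("x")); // no setAccessible(true) -> access denied
            return "access allowed";
        } catch (IllegalAccessException e) {
            return "IllegalAccessException";
        } catch (NoSuchFieldException e) {
            return "NoSuchFieldException";
        }
    }

    public static void main(String[] args) {
        System.out.println(tryPrivateFieldRead());
    }
}
```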
[GitHub] [flink] godfreyhe commented on pull request #12199: [FLINK-17774] [table] supports all kinds of changes for select result
godfreyhe commented on pull request #12199: URL: https://github.com/apache/flink/pull/12199#issuecomment-631204000 @flinkbot run azure This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (FLINK-17821) Kafka010TableITCase>KafkaTableTestBase.testKafkaSourceSink failed on AZP
Zhu Zhu created FLINK-17821:
---
Summary: Kafka010TableITCase>KafkaTableTestBase.testKafkaSourceSink failed on AZP
Key: FLINK-17821
URL: https://issues.apache.org/jira/browse/FLINK-17821
Project: Flink
Issue Type: Bug
Components: Connectors / Kafka
Affects Versions: 1.12.0
Reporter: Zhu Zhu

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1871=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8=12032

2020-05-19T16:29:40.7239430Z Test testKafkaSourceSink[legacy = false, topicId = 1](org.apache.flink.streaming.connectors.kafka.table.Kafka010TableITCase) failed with:
2020-05-19T16:29:40.7240291Z java.util.concurrent.ExecutionException: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
2020-05-19T16:29:40.7241033Z    at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
2020-05-19T16:29:40.7241542Z    at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
2020-05-19T16:29:40.7242127Z    at org.apache.flink.table.planner.runtime.utils.TableEnvUtil$.execInsertSqlAndWaitResult(TableEnvUtil.scala:31)
2020-05-19T16:29:40.7242729Z    at org.apache.flink.table.planner.runtime.utils.TableEnvUtil.execInsertSqlAndWaitResult(TableEnvUtil.scala)
2020-05-19T16:29:40.7243239Z    at org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestBase.testKafkaSourceSink(KafkaTableTestBase.java:145)
2020-05-19T16:29:40.7243691Z    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-05-19T16:29:40.7244273Z    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-05-19T16:29:40.7244729Z    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-05-19T16:29:40.7245117Z    at java.lang.reflect.Method.invoke(Method.java:498)
2020-05-19T16:29:40.7245515Z    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-05-19T16:29:40.7245956Z    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-05-19T16:29:40.7246419Z    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-05-19T16:29:40.7246870Z    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-05-19T16:29:40.7247287Z    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
2020-05-19T16:29:40.7251320Z    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
2020-05-19T16:29:40.7251833Z    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
2020-05-19T16:29:40.7252251Z    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
2020-05-19T16:29:40.7252716Z    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
2020-05-19T16:29:40.7253117Z    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2020-05-19T16:29:40.7253502Z    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2020-05-19T16:29:40.7254041Z    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2020-05-19T16:29:40.7254528Z    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2020-05-19T16:29:40.7255500Z    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2020-05-19T16:29:40.7256064Z    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2020-05-19T16:29:40.7256438Z    at org.junit.runners.Suite.runChild(Suite.java:128)
2020-05-19T16:29:40.7256758Z    at org.junit.runners.Suite.runChild(Suite.java:27)
2020-05-19T16:29:40.7257118Z    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2020-05-19T16:29:40.7257486Z    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2020-05-19T16:29:40.7257885Z    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2020-05-19T16:29:40.7258389Z    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2020-05-19T16:29:40.7258821Z    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2020-05-19T16:29:40.7259219Z    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2020-05-19T16:29:40.7259664Z    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
2020-05-19T16:29:40.7260098Z    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2020-05-19T16:29:40.7260635Z    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
2020-05-19T16:29:40.7261065Z    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2020-05-19T16:29:40.7261467Z    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
2020-05-19T16:29:40.7261952Z    at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
[jira] [Commented] (FLINK-17817) CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase
[ https://issues.apache.org/jira/browse/FLINK-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111700#comment-17111700 ] Kurt Young commented on FLINK-17817: cc [~TsReaper] > CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase > - > > Key: FLINK-17817 > URL: https://issues.apache.org/jira/browse/FLINK-17817 > Project: Flink > Issue Type: Bug > Components: API / DataStream, Tests >Affects Versions: 1.11.0 >Reporter: Robert Metzger >Priority: Blocker > Labels: test-stability > Fix For: 1.11.0 > > > CI: > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1826=logs=e25d5e7e-2a9c-5589-4940-0b638d75a414=f83cd372-208c-5ec4-12a8-337462457129 > {code} > 2020-05-19T10:34:18.3224679Z [ERROR] > testSingleAggOnTable_SortAgg(org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase) > Time elapsed: 7.537 s <<< ERROR! > 2020-05-19T10:34:18.3225273Z java.lang.RuntimeException: Failed to fetch next > result > 2020-05-19T10:34:18.3227634Z at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:92) > 2020-05-19T10:34:18.3228518Z at > org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:63) > 2020-05-19T10:34:18.3229170Z at > org.apache.flink.shaded.guava18.com.google.common.collect.Iterators.addAll(Iterators.java:361) > 2020-05-19T10:34:18.3229863Z at > org.apache.flink.shaded.guava18.com.google.common.collect.Lists.newArrayList(Lists.java:160) > 2020-05-19T10:34:18.3230586Z at > org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300) > 2020-05-19T10:34:18.3231303Z at > org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:141) > 2020-05-19T10:34:18.3231996Z at > org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:107) > 2020-05-19T10:34:18.3232847Z at > 
org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable(AggregateReduceGroupingITCase.scala:176) > 2020-05-19T10:34:18.3233694Z at > org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable_SortAgg(AggregateReduceGroupingITCase.scala:122) > 2020-05-19T10:34:18.3234461Z at > sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > 2020-05-19T10:34:18.3234983Z at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > 2020-05-19T10:34:18.3235632Z at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 2020-05-19T10:34:18.3236615Z at > java.lang.reflect.Method.invoke(Method.java:498) > 2020-05-19T10:34:18.3237256Z at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > 2020-05-19T10:34:18.3237965Z at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 2020-05-19T10:34:18.3238750Z at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > 2020-05-19T10:34:18.3239314Z at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > 2020-05-19T10:34:18.3239838Z at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > 2020-05-19T10:34:18.3240362Z at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > 2020-05-19T10:34:18.3240803Z at > org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > 2020-05-19T10:34:18.3243624Z at > org.junit.rules.RunRules.evaluate(RunRules.java:20) > 2020-05-19T10:34:18.3244531Z at > org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > 2020-05-19T10:34:18.3245325Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > 2020-05-19T10:34:18.3246086Z at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > 
2020-05-19T10:34:18.3246765Z at > org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > 2020-05-19T10:34:18.3247390Z at > org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > 2020-05-19T10:34:18.3248012Z at > org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > 2020-05-19T10:34:18.3248779Z at > org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > 2020-05-19T10:34:18.3249417Z at > org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > 2020-05-19T10:34:18.3250357Z at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) > 2020-05-19T10:34:18.3251021Z at > org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) >
[jira] [Updated] (FLINK-17189) Table with processing time attribute can not be read from Hive catalog
[ https://issues.apache.org/jira/browse/FLINK-17189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingsong Lee updated FLINK-17189: - Priority: Blocker (was: Major) > Table with processing time attribute can not be read from Hive catalog > -- > > Key: FLINK-17189 > URL: https://issues.apache.org/jira/browse/FLINK-17189 > Project: Flink > Issue Type: Bug > Components: Table SQL / Ecosystem, Table SQL / Planner >Reporter: Timo Walther >Priority: Blocker > Fix For: 1.11.0, 1.10.2 > > > DDL: > {code} > CREATE TABLE PROD_LINEITEM ( > L_ORDERKEY INTEGER, > L_PARTKEY INTEGER, > L_SUPPKEY INTEGER, > L_LINENUMBER INTEGER, > L_QUANTITY DOUBLE, > L_EXTENDEDPRICE DOUBLE, > L_DISCOUNT DOUBLE, > L_TAX DOUBLE, > L_CURRENCY STRING, > L_RETURNFLAG STRING, > L_LINESTATUS STRING, > L_ORDERTIME TIMESTAMP(3), > L_SHIPINSTRUCT STRING, > L_SHIPMODE STRING, > L_COMMENT STRING, > WATERMARK FOR L_ORDERTIME AS L_ORDERTIME - INTERVAL '5' MINUTE, > L_PROCTIME AS PROCTIME() > ) WITH ( > 'connector.type' = 'kafka', > 'connector.version' = 'universal', > 'connector.topic' = 'Lineitem', > 'connector.properties.zookeeper.connect' = 'not-needed', > 'connector.properties.bootstrap.servers' = 'kafka:9092', > 'connector.startup-mode' = 'earliest-offset', > 'format.type' = 'csv', > 'format.field-delimiter' = '|' > ); > {code} > Query: > {code} > SELECT * FROM prod_lineitem; > {code} > Result: > {code} > [ERROR] Could not execute SQL statement. 
Reason: > java.lang.AssertionError: Conversion to relational algebra failed to preserve > datatypes: > validated type: > RecordType(INTEGER L_ORDERKEY, INTEGER L_PARTKEY, INTEGER L_SUPPKEY, INTEGER > L_LINENUMBER, DOUBLE L_QUANTITY, DOUBLE L_EXTENDEDPRICE, DOUBLE L_DISCOUNT, > DOUBLE L_TAX, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_CURRENCY, > VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_RETURNFLAG, > VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_LINESTATUS, TIME > ATTRIBUTE(ROWTIME) L_ORDERTIME, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" > L_SHIPINSTRUCT, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_SHIPMODE, > VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_COMMENT, TIMESTAMP(3) NOT NULL > L_PROCTIME) NOT NULL > converted type: > RecordType(INTEGER L_ORDERKEY, INTEGER L_PARTKEY, INTEGER L_SUPPKEY, INTEGER > L_LINENUMBER, DOUBLE L_QUANTITY, DOUBLE L_EXTENDEDPRICE, DOUBLE L_DISCOUNT, > DOUBLE L_TAX, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_CURRENCY, > VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_RETURNFLAG, > VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_LINESTATUS, TIME > ATTRIBUTE(ROWTIME) L_ORDERTIME, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" > L_SHIPINSTRUCT, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_SHIPMODE, > VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_COMMENT, TIME > ATTRIBUTE(PROCTIME) NOT NULL L_PROCTIME) NOT NULL > rel: > LogicalProject(L_ORDERKEY=[$0], L_PARTKEY=[$1], L_SUPPKEY=[$2], > L_LINENUMBER=[$3], L_QUANTITY=[$4], L_EXTENDEDPRICE=[$5], L_DISCOUNT=[$6], > L_TAX=[$7], L_CURRENCY=[$8], L_RETURNFLAG=[$9], L_LINESTATUS=[$10], > L_ORDERTIME=[$11], L_SHIPINSTRUCT=[$12], L_SHIPMODE=[$13], L_COMMENT=[$14], > L_PROCTIME=[$15]) > LogicalWatermarkAssigner(rowtime=[L_ORDERTIME], watermark=[-($11, > 30:INTERVAL MINUTE)]) > LogicalProject(L_ORDERKEY=[$0], L_PARTKEY=[$1], L_SUPPKEY=[$2], > L_LINENUMBER=[$3], L_QUANTITY=[$4], L_EXTENDEDPRICE=[$5], L_DISCOUNT=[$6], > L_TAX=[$7], L_CURRENCY=[$8], L_RETURNFLAG=[$9], L_LINESTATUS=[$10], 
> L_ORDERTIME=[$11], L_SHIPINSTRUCT=[$12], L_SHIPMODE=[$13], L_COMMENT=[$14], > L_PROCTIME=[PROCTIME()]) > LogicalTableScan(table=[[hcat, default, prod_lineitem, source: > [KafkaTableSource(L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY, > L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_CURRENCY, L_RETURNFLAG, L_LINESTATUS, > L_ORDERTIME, L_SHIPINSTRUCT, L_SHIPMODE, L_COMMENT)]]]) > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
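The assertion in the error above compares the validated and converted row types field by field; everything matches except the time-attribute flavor of `L_PROCTIME` (`TIMESTAMP(3) NOT NULL` vs `TIME ATTRIBUTE(PROCTIME) NOT NULL`). A toy model (these are hypothetical classes, not Flink's actual type system) of why a proctime flag dropped on one side breaks such an equality check:

```java
import java.util.Objects;

public class TimeAttributeMismatchDemo {

    // Hypothetical stand-in for a logical type that may carry a proctime flag.
    static final class ColumnType {
        final String sqlType;
        final boolean proctime;

        ColumnType(String sqlType, boolean proctime) {
            this.sqlType = sqlType;
            this.proctime = proctime;
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof ColumnType)) {
                return false;
            }
            ColumnType other = (ColumnType) o;
            return sqlType.equals(other.sqlType) && proctime == other.proctime;
        }

        @Override
        public int hashCode() {
            return Objects.hash(sqlType, proctime);
        }
    }

    // Fails as soon as one side lost the time-attribute flag, even though the
    // physical SQL type string is identical on both sides.
    static boolean preservesDatatypes(ColumnType validated, ColumnType converted) {
        return validated.equals(converted);
    }

    public static void main(String[] args) {
        // Mirrors the validated/converted mismatch in the log: same physical
        // type, but the proctime flag survives only on one side.
        ColumnType validated = new ColumnType("TIMESTAMP(3) NOT NULL", false);
        ColumnType converted = new ColumnType("TIMESTAMP(3) NOT NULL", true);
        System.out.println(preservesDatatypes(validated, converted));
    }
}
```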
[GitHub] [flink] flinkbot edited a comment on pull request #11978: [FLINK-16086][chinese-translation]Translate "Temporal Tables" page of "Streaming Concepts" into Chinese
flinkbot edited a comment on pull request #11978: URL: https://github.com/apache/flink/pull/11978#issuecomment-623121072

## CI report:

* 58447ca459535c32ed1fcd040aff23678f92aa0c Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1894)

Bot commands: The @flinkbot bot supports the following commands:
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
[jira] [Commented] (FLINK-17745) PackagedProgram' extractedTempLibraries and jarfiles may be duplicate
[ https://issues.apache.org/jira/browse/FLINK-17745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111697#comment-17111697 ] Echo Lee commented on FLINK-17745: -- [~kkl0u] [~fly_in_gis] So, I think you're right. I didn't think about it. Thank you! > PackagedProgram' extractedTempLibraries and jarfiles may be duplicate > - > > Key: FLINK-17745 > URL: https://issues.apache.org/jira/browse/FLINK-17745 > Project: Flink > Issue Type: Improvement > Components: Client / Job Submission >Reporter: Echo Lee >Assignee: Kostas Kloudas >Priority: Major > Labels: pull-request-available > > When I submit a Flink app with a fat jar, PackagedProgram extracts temp libraries from the fat jar and adds them to pipeline.jars, so pipeline.jars contains both the fat jar and the temp libraries. I don't think we should add the fat jar to the pipeline.jars if extractedTempLibraries is not empty.
[jira] [Closed] (FLINK-17798) Align the behavior between the new and legacy JDBC table source
[ https://issues.apache.org/jira/browse/FLINK-17798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu closed FLINK-17798. --- Resolution: Fixed - master (1.12.0): 73520ca19e76d0895c38ec956250cb588eca740c - 1.11.0: b5bcb22f28ace028f824cef4512aaf90ec18b69a > Align the behavior between the new and legacy JDBC table source > --- > > Key: FLINK-17798 > URL: https://issues.apache.org/jira/browse/FLINK-17798 > Project: Flink > Issue Type: Sub-task > Components: Connectors / JDBC >Reporter: Jark Wu >Assignee: Jark Wu >Priority: Major > Fix For: 1.11.0 > > > The legacy JDBC table source, i.e. {{JdbcTableSource}}, supports projection push down. To keep the user experience consistent, we should align the behavior and add tests.
[jira] [Closed] (FLINK-17797) Align the behavior between the new and legacy HBase table source
[ https://issues.apache.org/jira/browse/FLINK-17797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu closed FLINK-17797. --- Resolution: Fixed Fixed in - master (1.12.0): 57e4748614bdbe06769b147bc264e4a400784379 - 1.11.0: 8fc79e674e4596be16db264b517025c26ccefcb3 > Align the behavior between the new and legacy HBase table source > > > Key: FLINK-17797 > URL: https://issues.apache.org/jira/browse/FLINK-17797 > Project: Flink > Issue Type: Sub-task > Components: Connectors / HBase >Reporter: Jark Wu >Assignee: Jark Wu >Priority: Major > Labels: pull-request-available > Fix For: 1.11.0 > > > The legacy HBase table source, i.e. {{HBaseTableSource}}, supports projection push down. To keep the user experience consistent, we should align the behavior and add tests.
[GitHub] [flink] wuchong closed pull request #12221: [FLINK-17797][FLINK-17798][hbase][jdbc] Align the behavior between the new and legacy HBase/JDBC table source
wuchong closed pull request #12221: URL: https://github.com/apache/flink/pull/12221
[GitHub] [flink] flinkbot commented on pull request #12259: [hotfix][k8s] Remove unused constant variable
flinkbot commented on pull request #12259: URL: https://github.com/apache/flink/pull/12259#issuecomment-631191345

## CI report:

* 1ee1aadd85244dccac74b71c63f21379195b112b UNKNOWN
[GitHub] [flink] flinkbot edited a comment on pull request #12257: [FLINK-17076][docs] Revamp Kafka Connector Documentation
flinkbot edited a comment on pull request #12257: URL: https://github.com/apache/flink/pull/12257#issuecomment-631108690

## CI report:

* 8ad7645dbf1891711dd922b48673fcb7f1797271 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1891)
[GitHub] [flink] flinkbot edited a comment on pull request #12215: [FLINK-17565][k8s] Bump fabric8 version from 4.5.2 to 4.9.2
flinkbot edited a comment on pull request #12215: URL: https://github.com/apache/flink/pull/12215#issuecomment-630047332

## CI report:

* 5f357acabcb13d64d8e9a042af14329415db0f87 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1708)
* 906be78b0943a61b70d4624b95bad5479c9f3d92 UNKNOWN
[jira] [Commented] (FLINK-17745) PackagedProgram' extractedTempLibraries and jarfiles may be duplicate
[ https://issues.apache.org/jira/browse/FLINK-17745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111689#comment-17111689 ] Yang Wang commented on FLINK-17745: --- I second [~kkl0u]'s analysis. If you have some classes in the root of your fat jar, then the fat jar should also be added to the {{pipeline.jars}}. I do not think it is redundant. > PackagedProgram' extractedTempLibraries and jarfiles may be duplicate > - > > Key: FLINK-17745 > URL: https://issues.apache.org/jira/browse/FLINK-17745 > Project: Flink > Issue Type: Improvement > Components: Client / Job Submission >Reporter: Echo Lee >Assignee: Kostas Kloudas >Priority: Major > Labels: pull-request-available > > When I submit a Flink app with a fat jar, PackagedProgram extracts temp libraries from the fat jar and adds them to pipeline.jars, so pipeline.jars contains both the fat jar and the temp libraries. I don't think we should add the fat jar to the pipeline.jars if extractedTempLibraries is not empty.
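Yang Wang's analysis can be sketched as a small decision rule (hypothetical helper names; this is not the actual PackagedProgram code): the fat jar stays on pipeline.jars whenever it still contributes classes of its own, and dropping it is only safe when it was nothing more than a container for the extracted libraries.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class PipelineJarsSketch {

    // Hypothetical rule: keep the fat jar on the list unless it contained
    // only nested jars and no classes at its root.
    static List<String> buildPipelineJars(
            String fatJar, List<String> extractedTempLibraries, boolean fatJarHasRootClasses) {
        // LinkedHashSet keeps insertion order while preventing duplicate entries.
        LinkedHashSet<String> jars = new LinkedHashSet<>(extractedTempLibraries);
        if (extractedTempLibraries.isEmpty() || fatJarHasRootClasses) {
            jars.add(fatJar);
        }
        return new ArrayList<>(jars);
    }

    public static void main(String[] args) {
        List<String> libs = new ArrayList<>();
        libs.add("lib/dep1.jar");
        libs.add("lib/dep2.jar");
        // Classes at the root of the fat jar -> the fat jar must stay on the list.
        System.out.println(buildPipelineJars("app-fat.jar", libs, true));
        // Pure container jar -> only the extracted libraries are needed.
        System.out.println(buildPipelineJars("app-fat.jar", libs, false));
    }
}
```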
[jira] [Assigned] (FLINK-17802) Set offset commit only if group id is configured for new Kafka Table source
[ https://issues.apache.org/jira/browse/FLINK-17802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jark Wu reassigned FLINK-17802: --- Assignee: Leonard Xu > Set offset commit only if group id is configured for new Kafka Table source > --- > > Key: FLINK-17802 > URL: https://issues.apache.org/jira/browse/FLINK-17802 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.11.0 >Reporter: Leonard Xu >Assignee: Leonard Xu >Priority: Major > Labels: pull-request-available > Fix For: 1.11.0 > > > As https://issues.apache.org/jira/browse/FLINK-17619 described, the new Kafka Table source exhibits the same problem and should be fixed too. > Note: this fix applies to both master and release-1.11
[jira] [Commented] (FLINK-17802) Set offset commit only if group id is configured for new Kafka Table source
[ https://issues.apache.org/jira/browse/FLINK-17802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111688#comment-17111688 ] Leonard Xu commented on FLINK-17802: Hi [~gyfora], could you have a look at the two PRs? > Set offset commit only if group id is configured for new Kafka Table source > --- > > Key: FLINK-17802 > URL: https://issues.apache.org/jira/browse/FLINK-17802 > Project: Flink > Issue Type: Bug > Components: Connectors / Kafka >Affects Versions: 1.11.0 >Reporter: Leonard Xu >Priority: Major > Labels: pull-request-available > Fix For: 1.11.0 > > > As https://issues.apache.org/jira/browse/FLINK-17619 described, the new Kafka Table source exhibits the same problem and should be fixed too. > Note: this fix applies to both master and release-1.11
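The fix described by FLINK-17619 and this ticket boils down to a guard of roughly the following shape, sketched here with a hypothetical helper rather than the actual table-source code: offset committing on checkpoints should only be switched on when the consumer properties actually carry a `group.id`, because committing offsets without a configured group fails in the Kafka client.

```java
import java.util.Properties;

public class OffsetCommitGuardSketch {

    // Hypothetical guard mirroring the described fix: enable offset commits
    // on checkpoints only when a consumer group id is configured.
    static boolean shouldCommitOnCheckpoints(Properties consumerProps) {
        String groupId = consumerProps.getProperty("group.id");
        return groupId != null && !groupId.trim().isEmpty();
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        // Without a group.id, committing must stay disabled.
        System.out.println(shouldCommitOnCheckpoints(props));
        props.setProperty("group.id", "my-consumers");
        System.out.println(shouldCommitOnCheckpoints(props));
    }
}
```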
[jira] [Closed] (FLINK-17810) Add document for K8s application mode
[ https://issues.apache.org/jira/browse/FLINK-17810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yang Wang closed FLINK-17810. - Resolution: Fixed > Add document for K8s application mode > - > > Key: FLINK-17810 > URL: https://issues.apache.org/jira/browse/FLINK-17810 > Project: Flink > Issue Type: Sub-task >Affects Versions: 1.11.0, 1.12.0 >Reporter: Yang Wang >Assignee: Yang Wang >Priority: Major > Labels: pull-request-available > > Add document for how to start/stop K8s application cluster.
[GitHub] [flink] leonardBang commented on pull request #12090: [FLINK-17622][connectors / jdbc] Remove useless switch for decimal in PostgresCatalog
leonardBang commented on pull request #12090: URL: https://github.com/apache/flink/pull/12090#issuecomment-631188347 +1 to merge cc @wuchong
[GitHub] [flink] godfreyhe commented on pull request #12199: [FLINK-17774] [table] supports all kinds of changes for select result
godfreyhe commented on pull request #12199: URL: https://github.com/apache/flink/pull/12199#issuecomment-631187973 @flinkbot run azure
[jira] [Closed] (FLINK-17710) StreamSqlTests.test_execute_sql test is not stable
[ https://issues.apache.org/jira/browse/FLINK-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hequn Cheng closed FLINK-17710. --- Fix Version/s: (was: 1.11.0) Resolution: Abandoned > StreamSqlTests.test_execute_sql test is not stable > -- > > Key: FLINK-17710 > URL: https://issues.apache.org/jira/browse/FLINK-17710 > Project: Flink > Issue Type: Bug > Components: API / Python >Reporter: Hequn Cheng >Assignee: Nicholas Jiang >Priority: Major > Labels: pull-request-available, test-stability > > Failure log: > https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1311/logs/144
[GitHub] [flink] flinkbot commented on pull request #12259: [hotfix][k8s] Remove unused constant variable
flinkbot commented on pull request #12259:
URL: https://github.com/apache/flink/pull/12259#issuecomment-631187951

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review.

## Automated Checks
Last check on commit 1ee1aadd85244dccac74b71c63f21379195b112b (Wed May 20 01:59:54 UTC 2020)

**Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

## Review Progress

* ❓ 1. The [description] looks good.
* ❓ 2. There is [consensus] that the contribution should go into Flink.
* ❓ 3. Needs [attention] from.
* ❓ 4. The change fits into the overall [architecture].
* ❓ 5. Overall code [quality] is good.

Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.

The bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

## Bot commands
The @flinkbot bot supports the following commands:

- `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until `architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
[jira] [Reopened] (FLINK-17710) StreamSqlTests.test_execute_sql test is not stable
[ https://issues.apache.org/jira/browse/FLINK-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hequn Cheng reopened FLINK-17710:
---------------------------------

> StreamSqlTests.test_execute_sql test is not stable
> --------------------------------------------------
>
>                 Key: FLINK-17710
>                 URL: https://issues.apache.org/jira/browse/FLINK-17710
>             Project: Flink
>          Issue Type: Bug
>          Components: API / Python
>            Reporter: Hequn Cheng
>            Assignee: Nicholas Jiang
>            Priority: Major
>              Labels: pull-request-available, test-stability
>             Fix For: 1.11.0
>
> Failure log:
> https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1311/logs/144