[GitHub] [flink] flinkbot commented on pull request #13113: [FLINK-18883][python] Support reduce() operation for Python KeyedStream.

2020-08-10 Thread GitBox


flinkbot commented on pull request #13113:
URL: https://github.com/apache/flink/pull/13113#issuecomment-671738552


   
   ## CI report:
   
   * eae680339ac50d4def95156c5c680628e9c16ff6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-15611) KafkaITCase.testOneToOneSources fails on Travis

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-15611.
--
Resolution: Cannot Reproduce

I'm closing this ticket due to inactivity for now.

> KafkaITCase.testOneToOneSources fails on Travis
> ---
>
> Key: FLINK-15611
> URL: https://issues.apache.org/jira/browse/FLINK-15611
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Reporter: Yangze Guo
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> {{The test KafkaITCase.testOneToOneSources failed on Travis.}}
> {code:java}
> 03:15:02,019 INFO  
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl  - 
> Deleting topic scale-down-before-first-checkpoint
> 03:15:02,037 INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase  - 
> 
> Test 
> testScaleDownBeforeFirstCheckpoint(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)
>  successfully run.
> 
> 03:15:02,038 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- -
> 03:15:02,038 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- Shut down KafkaTestBase 
> 03:15:02,038 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- -
> 03:15:25,728 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- -
> 03:15:25,728 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- KafkaTestBase finished
> 03:15:25,728 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- -
> 03:15:25.731 [INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time 
> elapsed: 245.845 s - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
> 03:15:26.099 [INFO] 
> 03:15:26.099 [INFO] Results:
> 03:15:26.099 [INFO] 
> 03:15:26.099 [ERROR] Failures: 
> 03:15:26.099 [ERROR]   
> KafkaITCase.testOneToOneSources:97->KafkaConsumerTestBase.runOneToOneExactlyOnceTest:862
>  Test failed: Job execution failed.
> {code}
> https://api.travis-ci.com/v3/job/276124537/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13098: [FLINK-18866][python] Support filter() operation for Python DataStrea…

2020-08-10 Thread GitBox


flinkbot edited a comment on pull request #13098:
URL: https://github.com/apache/flink/pull/13098#issuecomment-671163278


   
   ## CI report:
   
   * af3d92942065464e267e9715696ec3154ed32ee9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5378)
 
   * 19ff831a80a0ab5a1fbaabfb6d19e91f25d32314 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5382)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Closed] (FLINK-16956) Git fetch failed with exit code 128

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-16956.
--
Fix Version/s: (was: 1.12.0)
   Resolution: Cannot Reproduce

> Git fetch failed with exit code 128
> ---
>
> Key: FLINK-16956
> URL: https://issues.apache.org/jira/browse/FLINK-16956
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Piotr Nowojski
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7003&view=logs&j=16ccbdb7-2a3e-53da-36eb-fb718edc424a&t=5321d2cb-5c30-5320-9c0c-312babc023c8
> {noformat}
> fatal: could not read Username for 'https://github.com': terminal prompts 
> disabled
> ##[warning]Git fetch failed with exit code 128, back off 8.691 seconds before 
> retry.
> git -c http.extraheader="AUTHORIZATION: basic ***" fetch  --tags --prune 
> --progress --no-recurse-submodules origin
> fatal: could not read Username for 'https://github.com': terminal prompts 
> disabled
> ##[warning]Git fetch failed with exit code 128, back off 3.711 seconds before 
> retry.
> git -c http.extraheader="AUTHORIZATION: basic ***" fetch  --tags --prune 
> --progress --no-recurse-submodules origin
> fatal: could not read Username for 'https://github.com': terminal prompts 
> disabled
> ##[error]Git fetch failed with exit code: 128
> Finishing: Checkout flink-ci/flink-mirror@master to s
> {noformat}





[jira] [Closed] (FLINK-16982) scala-maven-plugin JVM runs out of memory in flink-runtime

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-16982.
--
Resolution: Cannot Reproduce

> scala-maven-plugin JVM runs out of memory in flink-runtime
> --
>
> Key: FLINK-16982
> URL: https://issues.apache.org/jira/browse/FLINK-16982
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> CI: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7039&view=logs&j=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5&t=4716f636-db2d-5472-7d55-b6120a857b87
> {code}
> [INFO] Compiling 1720 source files to /__w/3/s/flink-runtime/target/classes 
> at 1585918998676
> [ERROR] Error occurred during initialization of VM
> [ERROR] java.lang.OutOfMemoryError: unable to create new native thread
> [INFO] flink-runtime .. FAILURE [ 51.533 
> s]
> Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.2:compile 
> (scala-compile-first) on project flink-runtime_2.11: wrap: 
> org.apache.commons.exec.ExecuteException: Process exited with an error: 1 
> (Exit value: 1) -> [Help 1]
> {code}





[jira] [Updated] (FLINK-17458) TaskExecutorSubmissionTest#testFailingScheduleOrUpdateConsumers

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-17458:
---
Component/s: Runtime / Coordination

> TaskExecutorSubmissionTest#testFailingScheduleOrUpdateConsumers
> ---
>
> Key: FLINK-17458
> URL: https://issues.apache.org/jira/browse/FLINK-17458
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.10.0
>Reporter: Congxian Qiu(klion26)
>Priority: Major
>  Labels: test-stability
>
> When verifying the RC of release-1.10.1, I found that 
> `TaskExecutorSubmissionTest#testFailingScheduleOrUpdateConsumers` sometimes 
> fails because of a timeout. 
> I ran this test locally in IDEA and found the following exception (locally it 
> occurred only 2 times in 1000 runs):
> {code:java}
> java.lang.InterruptedExceptionjava.lang.InterruptedException at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1039)
>  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
>  at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:212) 
> at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:222) at 
> scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227) at 
> scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190) at 
> scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>  at scala.concurrent.Await$.result(package.scala:190) at 
> akka.event.LoggingBus$class.akka$event$LoggingBus$$addLogger(Logging.scala:182)
>  at 
> akka.event.LoggingBus$$anonfun$4$$anonfun$apply$4.apply(Logging.scala:117) at 
> akka.event.LoggingBus$$anonfun$4$$anonfun$apply$4.apply(Logging.scala:116) at 
> scala.util.Success$$anonfun$map$1.apply(Try.scala:237) at 
> scala.util.Try$.apply(Try.scala:192) at scala.util.Success.map(Try.scala:237) 
> at akka.event.LoggingBus$$anonfun$4.apply(Logging.scala:116) at 
> akka.event.LoggingBus$$anonfun$4.apply(Logging.scala:113) at 
> scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:683)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:682) at 
> akka.event.LoggingBus$class.startDefaultLoggers(Logging.scala:113) at 
> akka.event.EventStream.startDefaultLoggers(EventStream.scala:22) at 
> akka.actor.LocalActorRefProvider.init(ActorRefProvider.scala:662) at 
> akka.actor.ActorSystemImpl.liftedTree2$1(ActorSystem.scala:874) at 
> akka.actor.ActorSystemImpl._start$lzycompute(ActorSystem.scala:870) at 
> akka.actor.ActorSystemImpl._start(ActorSystem.scala:870) at 
> akka.actor.ActorSystemImpl.start(ActorSystem.scala:891) at 
> akka.actor.RobustActorSystem$.internalApply(RobustActorSystem.scala:96) at 
> akka.actor.RobustActorSystem$.apply(RobustActorSystem.scala:70) at 
> akka.actor.RobustActorSystem$.create(RobustActorSystem.scala:55) at 
> org.apache.flink.runtime.akka.AkkaUtils$.createActorSystem(AkkaUtils.scala:125)
>  at 
> org.apache.flink.runtime.akka.AkkaUtils$.createActorSystem(AkkaUtils.scala:113)
>  at 
> org.apache.flink.runtime.akka.AkkaUtils$.createLocalActorSystem(AkkaUtils.scala:68)
>  at 
> org.apache.flink.runtime.akka.AkkaUtils.createLocalActorSystem(AkkaUtils.scala)
>  at 
> org.apache.flink.runtime.rpc.TestingRpcService.(TestingRpcService.java:74)
>  at 
> org.apache.flink.runtime.rpc.TestingRpcService.(TestingRpcService.java:67)
>  at 
> org.apache.flink.runtime.taskexecutor.TaskSubmissionTestEnvironment$Builder.build(TaskSubmissionTestEnvironment.java:349)
>  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutorSubmissionTest.testFailingScheduleOrUpdateConsumers(TaskExecutorSubmissionTest.java:544)
>  at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source) at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> 

[GitHub] [flink] flinkbot commented on pull request #13113: [FLINK-18883][python] Support reduce() operation for Python KeyedStream.

2020-08-10 Thread GitBox


flinkbot commented on pull request #13113:
URL: https://github.com/apache/flink/pull/13113#issuecomment-671735664


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit eae680339ac50d4def95156c5c680628e9c16ff6 (Tue Aug 11 
05:33:22 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-18883) Support reduce() operation for Python KeyedStream.

2020-08-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18883:
---
Labels: pull-request-available  (was: )

> Support reduce() operation for Python KeyedStream.
> --
>
> Key: FLINK-18883
> URL: https://issues.apache.org/jira/browse/FLINK-18883
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] Myracle commented on pull request #13051: [FLINK-18760][runtime] Redundant task managers should be released when there's no job running in session cluster

2020-08-10 Thread GitBox


Myracle commented on pull request #13051:
URL: https://github.com/apache/flink/pull/13051#issuecomment-671735098


   Thank you, @xintongsong 







[GitHub] [flink] hequn8128 opened a new pull request #13113: [FLINK-18883][python] Support reduce() operation for Python KeyedStream.

2020-08-10 Thread GitBox


hequn8128 opened a new pull request #13113:
URL: https://github.com/apache/flink/pull/13113


   
   ## What is the purpose of the change
   
   This pull request adds the reduce operation and reduce functions for the 
Python DataStream API.
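
   A minimal usage sketch (not taken from this PR; it assumes the new 
   `reduce()` accepts a plain Python callable, alongside the existing PyFlink 
   `from_collection`/`key_by` API) of what the operation enables:

   ```python
   from pyflink.datastream import StreamExecutionEnvironment

   env = StreamExecutionEnvironment.get_execution_environment()

   # Elements are (key, value) pairs; key the stream by the first field.
   ds = env.from_collection([('a', 1), ('b', 2), ('a', 3)])

   # reduce() maintains a rolling aggregate per key; here a running sum of
   # the second field, emitting ('a', 1), ('b', 2), ('a', 4).
   ds.key_by(lambda x: x[0]) \
     .reduce(lambda a, b: (a[0], a[1] + b[1])) \
     .print()

   env.execute("keyed_reduce_example")
   ```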
   
   
   ## Brief change log
   
 - Adds reduce operation on Python `KeyedStream`.
 - Adds Python ReduceFunction.
 - Adds tests in `test_data_stream.py`.
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
 - Added integration tests in test_data_stream.py
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (PythonDocs)
   







[jira] [Commented] (FLINK-17949) KafkaShuffleITCase.testSerDeIngestionTime:156->testRecordSerDe:388 expected:<310> but was:<0>

2020-08-10 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175253#comment-17175253
 ] 

Robert Metzger commented on FLINK-17949:


The latest failure Dian reported is on master.

> KafkaShuffleITCase.testSerDeIngestionTime:156->testRecordSerDe:388 
> expected:<310> but was:<0>
> -
>
> Key: FLINK-17949
> URL: https://issues.apache.org/jira/browse/FLINK-17949
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Robert Metzger
>Priority: Critical
>  Labels: test-stability
> Attachments: logs-ci-kafkagelly-1590500380.zip, 
> logs-ci-kafkagelly-1590524911.zip
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2209&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=684b1416-4c17-504e-d5ab-97ee44e08a20
> {code}
> 2020-05-26T13:35:19.4022562Z [ERROR] 
> testSerDeIngestionTime(org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase)
>   Time elapsed: 5.786 s  <<< FAILURE!
> 2020-05-26T13:35:19.4023185Z java.lang.AssertionError: expected:<310> but 
> was:<0>
> 2020-05-26T13:35:19.4023498Z  at org.junit.Assert.fail(Assert.java:88)
> 2020-05-26T13:35:19.4023825Z  at 
> org.junit.Assert.failNotEquals(Assert.java:834)
> 2020-05-26T13:35:19.4024461Z  at 
> org.junit.Assert.assertEquals(Assert.java:645)
> 2020-05-26T13:35:19.4024900Z  at 
> org.junit.Assert.assertEquals(Assert.java:631)
> 2020-05-26T13:35:19.4028546Z  at 
> org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase.testRecordSerDe(KafkaShuffleITCase.java:388)
> 2020-05-26T13:35:19.4029629Z  at 
> org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase.testSerDeIngestionTime(KafkaShuffleITCase.java:156)
> 2020-05-26T13:35:19.4030253Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-26T13:35:19.4030673Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-26T13:35:19.4031332Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-26T13:35:19.4031763Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-26T13:35:19.4032155Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-26T13:35:19.4032630Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-26T13:35:19.4033188Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-26T13:35:19.4033638Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-26T13:35:19.4034103Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-05-26T13:35:19.4034593Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2020-05-26T13:35:19.4035118Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2020-05-26T13:35:19.4035570Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-05-26T13:35:19.4035888Z  at java.lang.Thread.run(Thread.java:748)
> {code}





[GitHub] [flink] flinkbot edited a comment on pull request #13098: [FLINK-18866][python] Support filter() operation for Python DataStrea…

2020-08-10 Thread GitBox


flinkbot edited a comment on pull request #13098:
URL: https://github.com/apache/flink/pull/13098#issuecomment-671163278


   
   ## CI report:
   
   * af3d92942065464e267e9715696ec3154ed32ee9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5378)
 
   * 19ff831a80a0ab5a1fbaabfb6d19e91f25d32314 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Closed] (FLINK-18321) AbstractCloseableRegistryTest.testClose unstable

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-18321.
--
Resolution: Duplicate

Closing. Duplicate of FLINK-18815.

> AbstractCloseableRegistryTest.testClose unstable
> 
>
> Key: FLINK-18321
> URL: https://issues.apache.org/jira/browse/FLINK-18321
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Tests
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3553&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=05b74a19-4ee4-5036-c46f-ada307df6cf0
> {code}
> java.lang.AssertionError: expected:<0> but was:<-1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.flink.core.fs.AbstractCloseableRegistryTest.testClose(AbstractCloseableRegistryTest.java:93)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {code}





[jira] [Closed] (FLINK-18356) Exit code 137 returned from process when testing pyflink

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-18356.
--
Resolution: Cannot Reproduce

> Exit code 137 returned from process when testing pyflink
> 
>
> Key: FLINK-18356
> URL: https://issues.apache.org/jira/browse/FLINK-18356
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Build System / Azure Pipelines
>Affects Versions: 1.12.0
>Reporter: Piotr Nowojski
>Priority: Major
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> {noformat}
> = test session starts 
> ==
> platform linux -- Python 3.7.3, pytest-5.4.3, py-1.8.2, pluggy-0.13.1
> cachedir: .tox/py37-cython/.pytest_cache
> rootdir: /__w/3/s/flink-python
> collected 568 items
> pyflink/common/tests/test_configuration.py ..[  
> 1%]
> pyflink/common/tests/test_execution_config.py ...[  
> 5%]
> pyflink/dataset/tests/test_execution_environment.py .
> ##[error]Exit code 137 returned from process: file name '/bin/docker', 
> arguments 'exec -i -u 1002 
> 97fc4e22522d2ced1f4d23096b8929045d083dd0a99a4233a8b20d0489e9bddb 
> /__a/externals/node/bin/node /__w/_temp/containerHandlerInvoker.js'.
> Finishing: Test - python
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3729&view=logs&j=9cada3cb-c1d3-5621-16da-0f718fb86602&t=8d78fe4f-d658-5c70-12f8-4921589024c3





[jira] [Closed] (FLINK-18817) 'Kerberized YARN per-job on Docker test' failed

2020-08-10 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-18817.
---
Resolution: Duplicate

I have checked the branch 
([https://github.com/shuiqiangchen/flink/commits/FLINK-18763]) of the failed 
tests and confirmed that it fails without the commit 
a0227e20430ee9eaff59464023de2385378f71ea, which was introduced in FLINK-18771. 
So I agree with [~karmagyz] that this problem is a duplicate of FLINK-18771 and 
should already have been fixed. I'm closing this ticket.

> 'Kerberized YARN per-job on Docker test' failed
> ---
>
> Key: FLINK-18817
> URL: https://issues.apache.org/jira/browse/FLINK-18817
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: Hequn Cheng
>Priority: Major
>  Labels: test-stability
>
> The end-to-end test failed due to some AccessControlException:
> https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/5169/logs/125
> {code}
> 2020-08-04T13:13:10.2755424Z Failing this attempt.Diagnostics: Failed on 
> local exception: java.io.IOException: 
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[TOKEN, KERBEROS]; Host Details : local host is: 
> "worker1.docker-hadoop-cluster-network/172.19.0.5"; destination host is: 
> "master.docker-hadoop-cluster-network":9000; 
> 2020-08-04T13:13:10.2757620Z java.io.IOException: Failed on local exception: 
> java.io.IOException: org.apache.hadoop.security.AccessControlException: 
> Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host 
> is: "worker1.docker-hadoop-cluster-network/172.19.0.5"; destination host is: 
> "master.docker-hadoop-cluster-network":9000; 
> 2020-08-04T13:13:10.2758550Z  at 
> org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:782)
> 2020-08-04T13:13:10.2758960Z  at 
> org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1493)
> 2020-08-04T13:13:10.2759321Z  at 
> org.apache.hadoop.ipc.Client.call(Client.java:1435)
> 2020-08-04T13:13:10.2759676Z  at 
> org.apache.hadoop.ipc.Client.call(Client.java:1345)
> 2020-08-04T13:13:10.2760305Z  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
> 2020-08-04T13:13:10.2760743Z  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> 2020-08-04T13:13:10.2761087Z  at com.sun.proxy.$Proxy11.getFileInfo(Unknown 
> Source)
> 2020-08-04T13:13:10.2761521Z  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:796)
> 2020-08-04T13:13:10.2761964Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-08-04T13:13:10.2762310Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-08-04T13:13:10.2762741Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-08-04T13:13:10.2763105Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-08-04T13:13:10.2763503Z  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
> 2020-08-04T13:13:10.2763979Z  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
> 2020-08-04T13:13:10.2764474Z  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
> 2020-08-04T13:13:10.2764944Z  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> 2020-08-04T13:13:10.2765417Z  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
> 2020-08-04T13:13:10.2765770Z  at com.sun.proxy.$Proxy12.getFileInfo(Unknown 
> Source)
> 2020-08-04T13:13:10.2766093Z  at 
> org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1649)
> 2020-08-04T13:13:10.2766489Z  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1440)
> 2020-08-04T13:13:10.2767209Z  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1437)
> 2020-08-04T13:13:10.2767699Z  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 2020-08-04T13:13:10.2768187Z  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1437)
> 2020-08-04T13:13:10.2768646Z  at 
> org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
> 2020-08-04T13:13:10.2769051Z  at 
> org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
> 2020-08-04T13:13:10.2769470Z  at 
> org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
> 2020-08-04T13:13:10.2769988Z  at 
> org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
> 

[GitHub] [flink] klion26 commented on a change in pull request #13089: [FLINK-18813][docs-zh] Translate the 'Local Installation' page of 'Try Flink' into Chinese

2020-08-10 Thread GitBox


klion26 commented on a change in pull request #13089:
URL: https://github.com/apache/flink/pull/13089#discussion_r468330314



##
File path: docs/try-flink/local_installation.zh.md
##
@@ -26,36 +26,35 @@ under the License.
 {% if site.version contains "SNAPSHOT" %}
 
   
-  NOTE: The Apache Flink community only publishes official builds for
-  released versions of Apache Flink.
+  注意:Apache Flink 社区只发布 Apache Flink 的 release 版本。
   
-  Since you are currently looking at the latest SNAPSHOT
-  version of the documentation, all version references below will not work.
-  Please switch the documentation to the latest released version via the 
release picker which you
-  find on the left side below the menu.
+  由于你当前正在查看文档的最新 SNAPSHOT 版本,因此以下将不会显示任何 release 版本的链接。请通过左侧菜单底部的版本选择将文档切换到最新的 
release 版本。

Review comment:
   I still feel this sentence is not quite right. As currently worded, it says 
"no links to release versions will be shown", which reads as if only the 
release links are hidden while everything else is still displayed (and it is 
not only about links). In fact, none of the content below this note will be 
shown. Could we polish this a bit more?









[jira] [Updated] (FLINK-9657) Suspicious output from Bucketing sink E2E test on travis

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-9657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-9657:
--
Labels: test-stability  (was: )

> Suspicious output from Bucketing sink E2E test on travis
> 
>
> Key: FLINK-9657
> URL: https://issues.apache.org/jira/browse/FLINK-9657
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Tests
>Affects Versions: 1.5.0, 1.6.0
>Reporter: Chesnay Schepler
>Assignee: zhangminglei
>Priority: Major
>  Labels: test-stability
>
> {code}
> Truncating buckets
> Truncating  to 
> dd: invalid number ‘’
> rm: missing operand
> Try 'rm --help' for more information.
> mv: missing destination file operand after ‘.truncated’
> Try 'mv --help' for more information.
> {code}





[jira] [Closed] (FLINK-10182) AsynchronousBufferFileWriterTest deadlocks on travis

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-10182.
--
Resolution: Cannot Reproduce

Closing ticket due to inactivity.

> AsynchronousBufferFileWriterTest deadlocks on travis
> 
>
> Key: FLINK-10182
> URL: https://issues.apache.org/jira/browse/FLINK-10182
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network, Tests
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Priority: Major
>  Labels: test-stability
>
> https://travis-ci.org/apache/flink/jobs/415811738





[jira] [Updated] (FLINK-10182) AsynchronousBufferFileWriterTest deadlocks on travis

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-10182:
---
Labels: test-stability  (was: )

> AsynchronousBufferFileWriterTest deadlocks on travis
> 
>
> Key: FLINK-10182
> URL: https://issues.apache.org/jira/browse/FLINK-10182
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network, Tests
>Affects Versions: 1.7.0
>Reporter: Chesnay Schepler
>Priority: Major
>  Labels: test-stability
>
> https://travis-ci.org/apache/flink/jobs/415811738





[jira] [Commented] (FLINK-4295) CoGroupGroupSortTranslationTest#testGroupSortTuplesDefaultCoGroup fails to run

2020-08-10 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175240#comment-17175240
 ] 

Robert Metzger commented on FLINK-4295:
---

The test is ignored; however, it passes when I run it. Shall we re-enable the 
test and close this ticket?

> CoGroupGroupSortTranslationTest#testGroupSortTuplesDefaultCoGroup fails to run
> --
>
> Key: FLINK-4295
> URL: https://issues.apache.org/jira/browse/FLINK-4295
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataSet, Tests
>Affects Versions: 1.1.0
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: test-stability
>
> When running the test, you always get an exception:
> {code}
> 08/01/2016 15:03:34   Job execution switched to status RUNNING.
> 08/01/2016 15:03:34   DataSource (at 
> org.apache.flink.api.scala.ExecutionEnvironment.fromElements(ExecutionEnvironment.scala:566)
>  (org.apache.flink.api.java.io.CollectionInputFormat))(1/1) switched to 
> SCHEDULED 
> 08/01/2016 15:03:34   DataSource (at 
> org.apache.flink.api.scala.ExecutionEnvironment.fromElements(ExecutionEnvironment.scala:566)
>  (org.apache.flink.api.java.io.CollectionInputFormat))(1/1) switched to 
> DEPLOYING 
> 08/01/2016 15:03:34   DataSource (at 
> org.apache.flink.api.scala.ExecutionEnvironment.fromElements(ExecutionEnvironment.scala:566)
>  (org.apache.flink.api.java.io.CollectionInputFormat))(1/1) switched to 
> SCHEDULED 
> 08/01/2016 15:03:34   DataSource (at 
> org.apache.flink.api.scala.ExecutionEnvironment.fromElements(ExecutionEnvironment.scala:566)
>  (org.apache.flink.api.java.io.CollectionInputFormat))(1/1) switched to 
> DEPLOYING 
> 08/01/2016 15:03:34   DataSource (at 
> org.apache.flink.api.scala.ExecutionEnvironment.fromElements(ExecutionEnvironment.scala:566)
>  (org.apache.flink.api.java.io.CollectionInputFormat))(1/1) switched to 
> RUNNING 
> 08/01/2016 15:03:34   DataSource (at 
> org.apache.flink.api.scala.ExecutionEnvironment.fromElements(ExecutionEnvironment.scala:566)
>  (org.apache.flink.api.java.io.CollectionInputFormat))(1/1) switched to 
> RUNNING 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(1/4)
>  switched to SCHEDULED 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(3/4)
>  switched to SCHEDULED 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(3/4)
>  switched to DEPLOYING 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(1/4)
>  switched to DEPLOYING 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(4/4)
>  switched to SCHEDULED 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(4/4)
>  switched to DEPLOYING 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(2/4)
>  switched to SCHEDULED 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(2/4)
>  switched to DEPLOYING 
> 08/01/2016 15:03:35   DataSource (at 
> org.apache.flink.api.scala.ExecutionEnvironment.fromElements(ExecutionEnvironment.scala:566)
>  (org.apache.flink.api.java.io.CollectionInputFormat))(1/1) switched to 
> FINISHED 
> 08/01/2016 15:03:35   DataSource (at 
> org.apache.flink.api.scala.ExecutionEnvironment.fromElements(ExecutionEnvironment.scala:566)
>  (org.apache.flink.api.java.io.CollectionInputFormat))(1/1) switched to 
> FINISHED 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(3/4)
>  switched to RUNNING 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(1/4)
>  switched to RUNNING 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(2/4)
>  switched to RUNNING 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(4/4)
>  switched to RUNNING 
> 08/01/2016 15:03:35   DataSink (collect())(4/4) switched to SCHEDULED 
> 08/01/2016 15:03:35   DataSink 

[GitHub] [flink] flinkbot edited a comment on pull request #13098: [FLINK-18866][python] Support filter() operation for Python DataStrea…

2020-08-10 Thread GitBox


flinkbot edited a comment on pull request #13098:
URL: https://github.com/apache/flink/pull/13098#issuecomment-671163278


   
   ## CI report:
   
   * af3d92942065464e267e9715696ec3154ed32ee9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5378)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13090: [FLINK-18844][json][maxwell] Support maxwell-json format to read Maxwell changelogs

2020-08-10 Thread GitBox


flinkbot edited a comment on pull request #13090:
URL: https://github.com/apache/flink/pull/13090#issuecomment-670818966


   
   ## CI report:
   
   * d2a7bde48d5c4e4106e9f889e0a3729c5a71c716 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5379)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-4295) CoGroupGroupSortTranslationTest#testGroupSortTuplesDefaultCoGroup fails to run

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-4295:
--
Labels: test-stability  (was: )

> CoGroupGroupSortTranslationTest#testGroupSortTuplesDefaultCoGroup fails to run
> --
>
> Key: FLINK-4295
> URL: https://issues.apache.org/jira/browse/FLINK-4295
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataSet, Tests
>Affects Versions: 1.1.0
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: test-stability
>
> When running the test, you always get an exception:
> {code}
> 08/01/2016 15:03:34   Job execution switched to status RUNNING.
> 08/01/2016 15:03:34   DataSource (at 
> org.apache.flink.api.scala.ExecutionEnvironment.fromElements(ExecutionEnvironment.scala:566)
>  (org.apache.flink.api.java.io.CollectionInputFormat))(1/1) switched to 
> SCHEDULED 
> 08/01/2016 15:03:34   DataSource (at 
> org.apache.flink.api.scala.ExecutionEnvironment.fromElements(ExecutionEnvironment.scala:566)
>  (org.apache.flink.api.java.io.CollectionInputFormat))(1/1) switched to 
> DEPLOYING 
> 08/01/2016 15:03:34   DataSource (at 
> org.apache.flink.api.scala.ExecutionEnvironment.fromElements(ExecutionEnvironment.scala:566)
>  (org.apache.flink.api.java.io.CollectionInputFormat))(1/1) switched to 
> SCHEDULED 
> 08/01/2016 15:03:34   DataSource (at 
> org.apache.flink.api.scala.ExecutionEnvironment.fromElements(ExecutionEnvironment.scala:566)
>  (org.apache.flink.api.java.io.CollectionInputFormat))(1/1) switched to 
> DEPLOYING 
> 08/01/2016 15:03:34   DataSource (at 
> org.apache.flink.api.scala.ExecutionEnvironment.fromElements(ExecutionEnvironment.scala:566)
>  (org.apache.flink.api.java.io.CollectionInputFormat))(1/1) switched to 
> RUNNING 
> 08/01/2016 15:03:34   DataSource (at 
> org.apache.flink.api.scala.ExecutionEnvironment.fromElements(ExecutionEnvironment.scala:566)
>  (org.apache.flink.api.java.io.CollectionInputFormat))(1/1) switched to 
> RUNNING 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(1/4)
>  switched to SCHEDULED 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(3/4)
>  switched to SCHEDULED 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(3/4)
>  switched to DEPLOYING 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(1/4)
>  switched to DEPLOYING 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(4/4)
>  switched to SCHEDULED 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(4/4)
>  switched to DEPLOYING 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(2/4)
>  switched to SCHEDULED 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(2/4)
>  switched to DEPLOYING 
> 08/01/2016 15:03:35   DataSource (at 
> org.apache.flink.api.scala.ExecutionEnvironment.fromElements(ExecutionEnvironment.scala:566)
>  (org.apache.flink.api.java.io.CollectionInputFormat))(1/1) switched to 
> FINISHED 
> 08/01/2016 15:03:35   DataSource (at 
> org.apache.flink.api.scala.ExecutionEnvironment.fromElements(ExecutionEnvironment.scala:566)
>  (org.apache.flink.api.java.io.CollectionInputFormat))(1/1) switched to 
> FINISHED 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(3/4)
>  switched to RUNNING 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(1/4)
>  switched to RUNNING 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(2/4)
>  switched to RUNNING 
> 08/01/2016 15:03:35   CoGroup (CoGroup at 
> org.apache.flink.api.scala.UnfinishedCoGroupOperation.finish(UnfinishedCoGroupOperation.scala:46))(4/4)
>  switched to RUNNING 
> 08/01/2016 15:03:35   DataSink (collect())(4/4) switched to SCHEDULED 
> 08/01/2016 15:03:35   DataSink (collect())(1/4) switched to SCHEDULED 
> 08/01/2016 15:03:35   DataSink (collect())(4/4) switched to DEPLOYING 
> 08/01/2016 

[jira] [Closed] (FLINK-12542) Kafka011ITCase#testAutoOffsetRetrievalAndCommitToKafka test failed

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-12542.
--
Resolution: Cannot Reproduce

Closing ticket due to inactivity.

> Kafka011ITCase#testAutoOffsetRetrievalAndCommitToKafka test failed
> --
>
> Key: FLINK-12542
> URL: https://issues.apache.org/jira/browse/FLINK-12542
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Reporter: vinoyang
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> 03:06:51.127 [ERROR] Tests run: 21, Failures: 1, Errors: 0, Skipped: 0, Time 
> elapsed: 132.873 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.Kafka011ITCase
> 03:06:51.128 [ERROR] 
> testAutoOffsetRetrievalAndCommitToKafka(org.apache.flink.streaming.connectors.kafka.Kafka011ITCase)
>   Time elapsed: 30.699 s  <<< FAILURE!
> java.lang.AssertionError: expected:<50> but was:
>   at 
> org.apache.flink.streaming.connectors.kafka.Kafka011ITCase.testAutoOffsetRetrievalAndCommitToKafka(Kafka011ITCase.java:175)
> {code}
> error detail:
> {code:java}
> Test 
> testAutoOffsetRetrievalAndCommitToKafka(org.apache.flink.streaming.connectors.kafka.Kafka011ITCase)
>  failed with:
> java.lang.AssertionError: expected:<50> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runAutoOffsetRetrievalAndCommitToKafka(KafkaConsumerTestBase.java:352)
>   at 
> org.apache.flink.streaming.connectors.kafka.Kafka011ITCase.testAutoOffsetRetrievalAndCommitToKafka(Kafka011ITCase.java:175)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> log detail: [https://api.travis-ci.org/v3/job/533598881/log.txt]





[jira] [Closed] (FLINK-12196) FlinkKafkaProducer011ITCase#testRunOutOfProducersInThePool throws NPE

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-12196.
--
Resolution: Cannot Reproduce

Closing ticket due to inactivity.

> FlinkKafkaProducer011ITCase#testRunOutOfProducersInThePool throws NPE
> -
>
> Key: FLINK-12196
> URL: https://issues.apache.org/jira/browse/FLINK-12196
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.8.0
>Reporter: vinoyang
>Assignee: chauncy
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
> 09:25:28.562 [ERROR] 
> testRunOutOfProducersInThePool(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)
>   Time elapsed: 65.393 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.testRunOutOfProducersInThePool(FlinkKafkaProducerITCase.java:514)
> {code}
> log detail : [https://api.travis-ci.org/v3/job/520162132/log.txt]





[jira] [Updated] (FLINK-12196) FlinkKafkaProducer011ITCase#testRunOutOfProducersInThePool throws NPE

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-12196:
---
Labels: test-stability  (was: )

> FlinkKafkaProducer011ITCase#testRunOutOfProducersInThePool throws NPE
> -
>
> Key: FLINK-12196
> URL: https://issues.apache.org/jira/browse/FLINK-12196
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.8.0
>Reporter: vinoyang
>Assignee: chauncy
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
> 09:25:28.562 [ERROR] 
> testRunOutOfProducersInThePool(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)
>   Time elapsed: 65.393 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase.testRunOutOfProducersInThePool(FlinkKafkaProducerITCase.java:514)
> {code}
> log detail : [https://api.travis-ci.org/v3/job/520162132/log.txt]





[jira] [Closed] (FLINK-12244) SynchronousCheckpointITCase hang up

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-12244.
--
Resolution: Cannot Reproduce

Closing ticket due to inactivity.

> SynchronousCheckpointITCase hang up
> ---
>
> Key: FLINK-12244
> URL: https://issues.apache.org/jira/browse/FLINK-12244
> Project: Flink
>  Issue Type: Test
>  Components: Runtime / Checkpointing, Tests
>Reporter: vinoyang
>Priority: Major
>  Labels: test-stability
>
> log details : [https://api.travis-ci.org/v3/job/521241815/log.txt]





[jira] [Updated] (FLINK-12244) SynchronousCheckpointITCase hang up

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-12244:
---
Labels: test-stability  (was: )

> SynchronousCheckpointITCase hang up
> ---
>
> Key: FLINK-12244
> URL: https://issues.apache.org/jira/browse/FLINK-12244
> Project: Flink
>  Issue Type: Test
>  Components: Runtime / Checkpointing, Tests
>Reporter: vinoyang
>Priority: Major
>  Labels: test-stability
>
> log details : [https://api.travis-ci.org/v3/job/521241815/log.txt]





[jira] [Updated] (FLINK-12542) Kafka011ITCase#testAutoOffsetRetrievalAndCommitToKafka test failed

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-12542:
---
Labels: test-stability  (was: )

> Kafka011ITCase#testAutoOffsetRetrievalAndCommitToKafka test failed
> --
>
> Key: FLINK-12542
> URL: https://issues.apache.org/jira/browse/FLINK-12542
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Reporter: vinoyang
>Priority: Major
>  Labels: test-stability
>
> {code:java}
> 03:06:51.127 [ERROR] Tests run: 21, Failures: 1, Errors: 0, Skipped: 0, Time 
> elapsed: 132.873 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.Kafka011ITCase
> 03:06:51.128 [ERROR] 
> testAutoOffsetRetrievalAndCommitToKafka(org.apache.flink.streaming.connectors.kafka.Kafka011ITCase)
>   Time elapsed: 30.699 s  <<< FAILURE!
> java.lang.AssertionError: expected:<50> but was:
>   at 
> org.apache.flink.streaming.connectors.kafka.Kafka011ITCase.testAutoOffsetRetrievalAndCommitToKafka(Kafka011ITCase.java:175)
> {code}
> error detail:
> {code:java}
> Test 
> testAutoOffsetRetrievalAndCommitToKafka(org.apache.flink.streaming.connectors.kafka.Kafka011ITCase)
>  failed with:
> java.lang.AssertionError: expected:<50> but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runAutoOffsetRetrievalAndCommitToKafka(KafkaConsumerTestBase.java:352)
>   at 
> org.apache.flink.streaming.connectors.kafka.Kafka011ITCase.testAutoOffsetRetrievalAndCommitToKafka(Kafka011ITCase.java:175)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> log detail: [https://api.travis-ci.org/v3/job/533598881/log.txt]





[jira] [Updated] (FLINK-12773) Unstable kafka e2e test

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-12773:
---
Labels: test-stability  (was: )

> Unstable kafka e2e test
> ---
>
> Key: FLINK-12773
> URL: https://issues.apache.org/jira/browse/FLINK-12773
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.9.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: test-stability
>
> 'Kafka 0.10 end-to-end test' occasionally fails on Travis because of a 
> corrupted Kafka archive download.
> https://api.travis-ci.org/v3/job/542507472/log.txt





[jira] [Closed] (FLINK-12773) Unstable kafka e2e test

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-12773.
--
Resolution: Cannot Reproduce

Closing ticket due to inactivity.

> Unstable kafka e2e test
> ---
>
> Key: FLINK-12773
> URL: https://issues.apache.org/jira/browse/FLINK-12773
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.9.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: test-stability
>
> 'Kafka 0.10 end-to-end test' occasionally fails on Travis because of a 
> corrupted Kafka archive download.
> https://api.travis-ci.org/v3/job/542507472/log.txt





[jira] [Updated] (FLINK-14348) YarnFileStageTestS3ITCase. testRecursiveUploadForYarnS3a fails to delete files

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-14348:
---
Labels: test-stability  (was: )

> YarnFileStageTestS3ITCase. testRecursiveUploadForYarnS3a fails to delete files
> --
>
> Key: FLINK-14348
> URL: https://issues.apache.org/jira/browse/FLINK-14348
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Deployment / YARN, Tests
>Affects Versions: 1.9.0
>Reporter: Caizhi Weng
>Priority: Major
>  Labels: test-stability
>
> YarnFileStageTestS3ITCase. testRecursiveUploadForYarnS3a fails with the 
> following exceptions:
> {code:java}
> 15:25:07.359 [ERROR] 
> testRecursiveUploadForYarnS3a(org.apache.flink.yarn.YarnFileStageTestS3ITCase)
>   Time elapsed: 10.808 s  <<< 
> ERROR!24649org.apache.hadoop.fs.s3a.AWSS3IOException: delete on 
> s3a://[secure]/temp/tests-3565b11f-e9be-4213-a98d-0f0ecd123783/testYarn-s3a: 
> com.amazonaws.services.s3.model.MultiObjectDeleteException: One or more 
> objects could not be deleted (Service: null; Status Code: 200; Error Code: 
> null; Request ID: 2D1AE3D999528C34; S3 Extended Request ID: 
> zIX1QsAcsY1ZYSDOeCaYsGJ4bz0NJTy2kw0EYmlJr8Kb7pM8OPmhAKO5XHI26xiOi2tIkTIoBwg=),
>  S3 Extended Request ID: 
> zIX1QsAcsY1ZYSDOeCaYsGJ4bz0NJTy2kw0EYmlJr8Kb7pM8OPmhAKO5XHI26xiOi2tIkTIoBwg=: 
> One or more objects could not be deleted (Service: null; Status Code: 200; 
> Error Code: null; Request ID: 2D1AE3D999528C34; S3 Extended Request ID: 
> zIX1QsAcsY1ZYSDOeCaYsGJ4bz0NJTy2kw0EYmlJr8Kb7pM8OPmhAKO5XHI26xiOi2tIkTIoBwg=)24650
>at 
> org.apache.flink.yarn.YarnFileStageTestS3ITCase.testRecursiveUploadForYarn(YarnFileStageTestS3ITCase.java:159)24651
>   at 
> org.apache.flink.yarn.YarnFileStageTestS3ITCase.testRecursiveUploadForYarnS3a(YarnFileStageTestS3ITCase.java:190)24652Caused
>  by: com.amazonaws.services.s3.model.MultiObjectDeleteException: One or more 
> objects could not be deleted (Service: null; Status Code: 200; Error Code: 
> null; Request ID: 2D1AE3D999528C34; S3 Extended Request ID: 
> zIX1QsAcsY1ZYSDOeCaYsGJ4bz0NJTy2kw0EYmlJr8Kb7pM8OPmhAKO5XHI26xiOi2tIkTIoBwg=)24653
>at 
> org.apache.flink.yarn.YarnFileStageTestS3ITCase.testRecursiveUploadForYarn(YarnFileStageTestS3ITCase.java:159)24654
>   at 
> org.apache.flink.yarn.YarnFileStageTestS3ITCase.testRecursiveUploadForYarnS3a(YarnFileStageTestS3ITCase.java:190){code}
> Travis log: [https://travis-ci.org/apache/flink/jobs/595082651]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-15315) Add test case for rest

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-15315.
--
Resolution: Cannot Reproduce

Closing this ticket due to inactivity. Please reopen if there are more details.

> Add test case for rest
> --
>
> Key: FLINK-15315
> URL: https://issues.apache.org/jira/browse/FLINK-15315
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST, Tests
>Reporter: lining
>Priority: Major
>
> 1. Handlers which have no tests:
>  * ClusterConfigHandler
>  * ClusterOverviewHandler
>  * DashboardConfigHandler
>  * ShutdownHandler
>  * CheckpointConfigHandler
>  * CheckpointingStatisticsHandler
>  * CheckpointStatisticDetailsHandler
>  * TaskCheckpointStatisticDetailsHandler
>  * RescalingHandlers
>  * JobAccumulatorsHandler
>  * JobDetailsHandler
>  * JobIdsHandler
>  * JobPlanHandler
>  * JobsOverviewHandler
>  * JobVertexAccumulatorsHandler
>  * JobVertexDetailsHandler
>  * JobVertexTaskManagersHandler
>  * SubtasksAllAccumulatorsHandler
>  * SubtasksTimesHandler
>  * TaskManagerDetailsHandler
>  * TaskManagerLogFileHandler
>  * TaskManagersHandler
>  * TaskManagerStdoutFileHandler
> 2. Some of the REST server's handlers get their data from runtime metrics. 
> Currently, if the runtime metrics change, these handlers are not aware of 
> the changes. How can a test guard against this kind of error? For example, [input 
> group and output group of the task metric are 
> reversed|https://issues.apache.org/jira/browse/FLINK-15063].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15611) KafkaITCase.testOneToOneSources fails on Travis

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-15611:
---
Labels: test-stability  (was: )

> KafkaITCase.testOneToOneSources fails on Travis
> ---
>
> Key: FLINK-15611
> URL: https://issues.apache.org/jira/browse/FLINK-15611
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Reporter: Yangze Guo
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> {{The test KafkaITCase.testOneToOneSources failed on Travis.}}
> {code:java}
> 03:15:02,019 INFO  
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl  - 
> Deleting topic scale-down-before-first-checkpoint
> 03:15:02,037 INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase  - 
> 
> Test 
> testScaleDownBeforeFirstCheckpoint(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)
>  successfully run.
> 
> 03:15:02,038 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- -
> 03:15:02,038 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- Shut down KafkaTestBase 
> 03:15:02,038 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- -
> 03:15:25,728 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- -
> 03:15:25,728 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- KafkaTestBase finished
> 03:15:25,728 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- -
> 03:15:25.731 [INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time 
> elapsed: 245.845 s - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase
> 03:15:26.099 [INFO] 
> 03:15:26.099 [INFO] Results:
> 03:15:26.099 [INFO] 
> 03:15:26.099 [ERROR] Failures: 
> 03:15:26.099 [ERROR]   
> KafkaITCase.testOneToOneSources:97->KafkaConsumerTestBase.runOneToOneExactlyOnceTest:862
>  Test failed: Job execution failed.
> {code}
> https://api.travis-ci.com/v3/job/276124537/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15451) TaskManagerProcessFailureBatchRecoveryITCase.testTaskManagerProcessFailure failed on azure

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-15451:
---
Labels: test-stability  (was: )

> TaskManagerProcessFailureBatchRecoveryITCase.testTaskManagerProcessFailure 
> failed on azure
> --
>
> Key: FLINK-15451
> URL: https://issues.apache.org/jira/browse/FLINK-15451
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.9.1
>Reporter: Congxian Qiu(klion26)
>Priority: Major
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> 2019-12-31T02:43:39.4766254Z [ERROR] Tests run: 2, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 42.801 s <<< FAILURE! - in 
> org.apache.flink.test.recovery.TaskManagerProcessFailureBatchRecoveryITCase 
> 2019-12-31T02:43:39.4768373Z [ERROR] 
> testTaskManagerProcessFailure[0](org.apache.flink.test.recovery.TaskManagerProcessFailureBatchRecoveryITCase)
>  Time elapsed: 2.699 s <<< ERROR! 2019-12-31T02:43:39.4768834Z 
> java.net.BindException: Address already in use 2019-12-31T02:43:39.4769096Z
>  
>  
> [https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_apis/build/builds/3995/logs/15]
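"Address already in use" in parallel test runs usually points at a fixed, hard-coded port. One common mitigation is to ask the OS for a free ephemeral port up front. A minimal sketch, assuming the test can be rebound to an arbitrary port (illustrative, not the test's actual setup code, and note the inherent race between probing and binding):

{code:java}
import java.io.IOException;
import java.net.ServerSocket;

public class FreePortFinder {

    /** Returns an OS-assigned port that was free at probe time. */
    public static int findFreePort() throws IOException {
        try (ServerSocket probe = new ServerSocket(0)) {
            // Another process may still grab this port between closing the
            // probe socket and the test actually binding to it.
            return probe.getLocalPort();
        }
    }
}
{code}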



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15745) KafkaITCase.testKeyValueSupport failure due to assertion error.

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-15745:
---
Labels: test-stability  (was: )

> KafkaITCase.testKeyValueSupport failure due to assertion error.
> ---
>
> Key: FLINK-15745
> URL: https://issues.apache.org/jira/browse/FLINK-15745
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.10.0
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Major
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> The failure cause was:
> {code:java}
> Caused by: java.lang.AssertionError: Wrong value 50
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase$15.flatMap(KafkaConsumerTestBase.java:1411)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase$15.flatMap(KafkaConsumerTestBase.java:1406)
>   at 
> org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:50)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:641)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:616)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:596)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:730)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:708)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:398)
>   at 
> org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.emitRecord(KafkaFetcher.java:185)
>   at 
> org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.runFetchLoop(KafkaFetcher.java:150)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:715)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:196)
>  {code}
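The assertion comes from a per-record validator in the test's flat map. A hedged sketch of this kind of validator (the real check in KafkaConsumerTestBase differs; the class name, types, and the duplicate check here are illustrative, and values are assumed to be non-negative):

{code:java}
import java.util.BitSet;

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

// Fails the job as soon as any value is observed twice, so a violated
// exactly-once guarantee surfaces as an AssertionError in the task.
public class ExactlyOnceValidator extends RichFlatMapFunction<Tuple2<Long, Integer>, Integer> {

    private final BitSet seen = new BitSet();

    @Override
    public void flatMap(Tuple2<Long, Integer> record, Collector<Integer> out) {
        int value = record.f1;
        if (seen.get(value)) {
            throw new AssertionError("Wrong value " + value);
        }
        seen.set(value);
        out.collect(value);
    }
}
{code}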



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17379) testScaleDownBeforeFirstCheckpoint Error reading field 'api_versions': Error reading field 'max_version':

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-17379:
---
Labels: test-stability  (was: )

> testScaleDownBeforeFirstCheckpoint Error reading field 'api_versions': Error 
> reading field 'max_version': 
> --
>
> Key: FLINK-17379
> URL: https://issues.apache.org/jira/browse/FLINK-17379
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=145=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8
> {code}
> 2020-04-23T19:46:02.5792736Z [ERROR] Tests run: 12, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 141.725 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011ITCase
> 2020-04-23T19:46:02.5794014Z [ERROR] 
> testScaleDownBeforeFirstCheckpoint(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011ITCase)
>   Time elapsed: 15.209 s  <<< ERROR!
> 2020-04-23T19:46:02.5795134Z 
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'api_versions': Error reading field 'max_version': 
> java.nio.BufferUnderflowException
> 2020-04-23T19:46:02.5795722Z  at 
> org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
> 2020-04-23T19:46:02.5796132Z  at 
> org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
> 2020-04-23T19:46:02.5796569Z  at 
> org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
> 2020-04-23T19:46:02.5797065Z  at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
> 2020-04-23T19:46:02.5797567Z  at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
> 2020-04-23T19:46:02.5798020Z  at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
> 2020-04-23T19:46:02.5798486Z  at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232)
> 2020-04-23T19:46:02.5799025Z  at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
> 2020-04-23T19:46:02.5799621Z  at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:199)
> 2020-04-23T19:46:02.5800166Z  at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
> 2020-04-23T19:46:02.5800718Z  at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:309)
> 2020-04-23T19:46:02.5801267Z  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1078)
> 2020-04-23T19:46:02.5801726Z  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
> 2020-04-23T19:46:02.5802263Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.getAllRecordsFromTopic(KafkaTestEnvironmentImpl.java:257)
> 2020-04-23T19:46:02.5802990Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.assertExactlyOnceForTopic(KafkaTestBase.java:274)
> 2020-04-23T19:46:02.5803551Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.assertExactlyOnceForTopic(KafkaTestBase.java:248)
> 2020-04-23T19:46:02.5804170Z  at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011ITCase.testScaleDownBeforeFirstCheckpoint(FlinkKafkaProducer011ITCase.java:366)
> 2020-04-23T19:46:02.5804804Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-04-23T19:46:02.5805244Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-04-23T19:46:02.5805711Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-04-23T19:46:02.5806109Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-04-23T19:46:02.5806516Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-04-23T19:46:02.5806977Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-04-23T19:46:02.5807443Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-04-23T19:46:02.5807908Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-04-23T19:46:02.5808346Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-04-23T19:46:02.5808757Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 

[jira] [Updated] (FLINK-15619) GroupWindowTableAggregateITCase.testAllProcessingTimeTumblingGroupWindowOverCount failed on Azure

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-15619:
---
Labels: test-stability  (was: )

> GroupWindowTableAggregateITCase.testAllProcessingTimeTumblingGroupWindowOverCount
>  failed on Azure
> --
>
> Key: FLINK-15619
> URL: https://issues.apache.org/jira/browse/FLINK-15619
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.10.0
>Reporter: Congxian Qiu(klion26)
>Priority: Major
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> {code}
> 2020-01-16T08:32:11.0214825Z [ERROR] 
> testAllProcessingTimeTumblingGroupWindowOverCount[StateBackend=HEAP](org.apache.flink.table.planner.runtime.stream.table.GroupWindowTableAggregateITCase)
>  Time elapsed: 2.213 s <<< ERROR! 2020-01-16T08:32:11.0223298Z 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed. 
> 2020-01-16T08:32:11.0241857Z at 
> org.apache.flink.table.planner.runtime.stream.table.GroupWindowTableAggregateITCase.testAllProcessingTimeTumblingGroupWindowOverCount(GroupWindowTableAggregateITCase.scala:130)
>  2020-01-16T08:32:11.0261909Z Caused by: 
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=1, 
> backoffTimeMS=0) 2020-01-16T08:32:11.0274347Z Caused by: java.lang.Exception: 
> Artificial Failure 2020-01-16T08:32:11.0291664Z
> {code}
>  
> [https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_apis/build/builds/4391/logs/16]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17482) KafkaITCase.testMultipleSourcesOnePartition unstable

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-17482:
---
Labels: test-stability  (was: )

> KafkaITCase.testMultipleSourcesOnePartition unstable
> 
>
> Key: FLINK-17482
> URL: https://issues.apache.org/jira/browse/FLINK-17482
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> CI run: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=454=logs=c5f0071e-1851-543e-9a45-9ac140befc32=684b1416-4c17-504e-d5ab-97ee44e08a20
> {code}
> 07:29:40,472 [main] INFO  
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase[] - 
> -
> [ERROR] Tests run: 22, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 152.018 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase
> [ERROR] 
> testMultipleSourcesOnePartition(org.apache.flink.streaming.connectors.kafka.KafkaITCase)
>   Time elapsed: 4.257 s  <<< FAILURE!
> java.lang.AssertionError: Test failed: Job execution failed.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:45)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runMultipleSourcesOnePartitionExactlyOnceTest(KafkaConsumerTestBase.java:963)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testMultipleSourcesOnePartition(KafkaITCase.java:107)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16298) GroupWindowTableAggregateITCase.testEventTimeTumblingWindow fails on Travis

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-16298:
---
Labels: test-stability  (was: )

> GroupWindowTableAggregateITCase.testEventTimeTumblingWindow fails on Travis
> ---
>
> Key: FLINK-16298
> URL: https://issues.apache.org/jira/browse/FLINK-16298
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Tests
>Reporter: Yingjie Cao
>Priority: Major
>  Labels: test-stability
>
> GroupWindowTableAggregateITCase.testEventTimeTumblingWindow fails on Travis. 
> link: [https://api.travis-ci.com/v3/job/291610383/log.txt]
> stack:
> {code:java}
> 05:38:01.976 [ERROR] Tests run: 18, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 7.537 s <<< FAILURE! - in 
> org.apache.flink.table.planner.runtime.stream.table.GroupWindowTableAggregateITCase
> 05:38:01.976 [ERROR] 
> testEventTimeTumblingWindow[StateBackend=HEAP](org.apache.flink.table.planner.runtime.stream.table.GroupWindowTableAggregateITCase)
>   Time elapsed: 0.459 s  <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.table.planner.runtime.stream.table.GroupWindowTableAggregateITCase.testEventTimeTumblingWindow(GroupWindowTableAggregateITCase.scala:151)
> Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by 
> FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=1, 
> backoffTimeMS=0)
> Caused by: java.lang.Exception: Artificial Failure
> {code}
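The "FixedDelayRestartBackoffTimeStrategy(maxNumberRestartAttempts=1, backoffTimeMS=0)" in the message is the restart strategy under which the injected "Artificial Failure" is retried once. For reference, a minimal sketch of configuring that strategy on a stream environment (illustrative; the ITCase's harness presumably sets this up elsewhere):

{code:java}
import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.api.common.time.Time;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RestartStrategyExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // One restart attempt with no backoff, matching the strategy named in the log.
        env.setRestartStrategy(RestartStrategies.fixedDelayRestart(1, Time.milliseconds(0)));
    }
}
{code}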



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16368) ZooKeeperHighAvailabilityITCase test fail

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-16368:
---
Labels: test-stability  (was: )

> ZooKeeperHighAvailabilityITCase test fail
> -
>
> Key: FLINK-16368
> URL: https://issues.apache.org/jira/browse/FLINK-16368
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task, Tests
>Affects Versions: 1.11.0
>Reporter: Jingsong Lee
>Priority: Major
>  Labels: test-stability
>
> ZooKeeperHighAvailabilityITCase fails with java.net.BindException: Address already in 
> use, and with a 
> NullPointerException.
> [https://api.travis-ci.com/v3/job/292860294/log.txt]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17458) TaskExecutorSubmissionTest#testFailingScheduleOrUpdateConsumers

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-17458:
---
Labels: test-stability  (was: )

> TaskExecutorSubmissionTest#testFailingScheduleOrUpdateConsumers
> ---
>
> Key: FLINK-17458
> URL: https://issues.apache.org/jira/browse/FLINK-17458
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.10.0
>Reporter: Congxian Qiu(klion26)
>Priority: Major
>  Labels: test-stability
>
> When verifying the RC of release-1.10.1, I found that 
> `TaskExecutorSubmissionTest#testFailingScheduleOrUpdateConsumers` sometimes 
> fails because of a timeout. 
> I ran this test locally in IDEA and found the following exception (locally I 
> encounter it only about 2 times in 1000 runs)
> {code:java}
> java.lang.InterruptedException
> java.lang.InterruptedException at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1039)
>  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
>  at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:212) 
> at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:222) at 
> scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227) at 
> scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190) at 
> scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>  at scala.concurrent.Await$.result(package.scala:190) at 
> akka.event.LoggingBus$class.akka$event$LoggingBus$$addLogger(Logging.scala:182)
>  at 
> akka.event.LoggingBus$$anonfun$4$$anonfun$apply$4.apply(Logging.scala:117) at 
> akka.event.LoggingBus$$anonfun$4$$anonfun$apply$4.apply(Logging.scala:116) at 
> scala.util.Success$$anonfun$map$1.apply(Try.scala:237) at 
> scala.util.Try$.apply(Try.scala:192) at scala.util.Success.map(Try.scala:237) 
> at akka.event.LoggingBus$$anonfun$4.apply(Logging.scala:116) at 
> akka.event.LoggingBus$$anonfun$4.apply(Logging.scala:113) at 
> scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:683)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:682) at 
> akka.event.LoggingBus$class.startDefaultLoggers(Logging.scala:113) at 
> akka.event.EventStream.startDefaultLoggers(EventStream.scala:22) at 
> akka.actor.LocalActorRefProvider.init(ActorRefProvider.scala:662) at 
> akka.actor.ActorSystemImpl.liftedTree2$1(ActorSystem.scala:874) at 
> akka.actor.ActorSystemImpl._start$lzycompute(ActorSystem.scala:870) at 
> akka.actor.ActorSystemImpl._start(ActorSystem.scala:870) at 
> akka.actor.ActorSystemImpl.start(ActorSystem.scala:891) at 
> akka.actor.RobustActorSystem$.internalApply(RobustActorSystem.scala:96) at 
> akka.actor.RobustActorSystem$.apply(RobustActorSystem.scala:70) at 
> akka.actor.RobustActorSystem$.create(RobustActorSystem.scala:55) at 
> org.apache.flink.runtime.akka.AkkaUtils$.createActorSystem(AkkaUtils.scala:125)
>  at 
> org.apache.flink.runtime.akka.AkkaUtils$.createActorSystem(AkkaUtils.scala:113)
>  at 
> org.apache.flink.runtime.akka.AkkaUtils$.createLocalActorSystem(AkkaUtils.scala:68)
>  at 
> org.apache.flink.runtime.akka.AkkaUtils.createLocalActorSystem(AkkaUtils.scala)
>  at 
> org.apache.flink.runtime.rpc.TestingRpcService.(TestingRpcService.java:74)
>  at 
> org.apache.flink.runtime.rpc.TestingRpcService.(TestingRpcService.java:67)
>  at 
> org.apache.flink.runtime.taskexecutor.TaskSubmissionTestEnvironment$Builder.build(TaskSubmissionTestEnvironment.java:349)
>  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutorSubmissionTest.testFailingScheduleOrUpdateConsumers(TaskExecutorSubmissionTest.java:544)
>  at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source) at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> 

[jira] [Updated] (FLINK-18174) EventTimeWindowCheckpointingITCase crashes with exit code 127

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-18174:
---
Component/s: API / DataStream

> EventTimeWindowCheckpointingITCase crashes with exit code 127
> -
>
> Key: FLINK-18174
> URL: https://issues.apache.org/jira/browse/FLINK-18174
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Tests
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2882=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0
> {code}
> 2020-06-07T21:27:27.5645995Z [INFO] 
> 
> 2020-06-07T21:27:27.5646433Z [INFO] BUILD FAILURE
> 2020-06-07T21:27:27.5646928Z [INFO] 
> 
> 2020-06-07T21:27:27.5647248Z [INFO] Total time: 13:56 min
> 2020-06-07T21:27:27.5647818Z [INFO] Finished at: 2020-06-07T21:27:27+00:00
> 2020-06-07T21:27:28.1548022Z [INFO] Final Memory: 143M/3643M
> 2020-06-07T21:27:28.1549222Z [INFO] 
> 
> 2020-06-07T21:27:28.1550001Z [WARNING] The requested profile 
> "skip-webui-build" could not be activated because it does not exist.
> 2020-06-07T21:27:28.1633207Z [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test 
> (integration-tests) on project flink-tests: There are test failures.
> 2020-06-07T21:27:28.1634081Z [ERROR] 
> 2020-06-07T21:27:28.1634607Z [ERROR] Please refer to 
> /__w/2/s/flink-tests/target/surefire-reports for the individual test results.
> 2020-06-07T21:27:28.1635808Z [ERROR] Please refer to dump files (if any 
> exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
> 2020-06-07T21:27:28.1636306Z [ERROR] ExecutionException The forked VM 
> terminated without properly saying goodbye. VM crash or System.exit called?
> 2020-06-07T21:27:28.1637602Z [ERROR] Command was /bin/sh -c cd 
> /__w/2/s/flink-tests/target && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java 
> -Xms256m -Xmx2048m -Dmvn.forkNumber=1 -XX:+UseG1GC -jar 
> /__w/2/s/flink-tests/target/surefire/surefirebooter5508139442489684354.jar 
> /__w/2/s/flink-tests/target/surefire 2020-06-07T21-13-41_745-jvmRun1 
> surefire2128152879875854938tmp surefire_200576917820794528868tmp
> 2020-06-07T21:27:28.1638423Z [ERROR] Error occurred in starting fork, check 
> output in log
> 2020-06-07T21:27:28.1638766Z [ERROR] Process Exit Code: 127
> 2020-06-07T21:27:28.1638995Z [ERROR] Crashed tests:
> 2020-06-07T21:27:28.1639297Z [ERROR] 
> org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase
> 2020-06-07T21:27:28.1640007Z [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException The forked VM terminated without properly saying goodbye. 
> VM crash or System.exit called?
> 2020-06-07T21:27:28.1641432Z [ERROR] Command was /bin/sh -c cd 
> /__w/2/s/flink-tests/target && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java 
> -Xms256m -Xmx2048m -Dmvn.forkNumber=1 -XX:+UseG1GC -jar 
> /__w/2/s/flink-tests/target/surefire/surefirebooter5508139442489684354.jar 
> /__w/2/s/flink-tests/target/surefire 2020-06-07T21-13-41_745-jvmRun1 
> surefire2128152879875854938tmp surefire_200576917820794528868tmp
> 2020-06-07T21:27:28.1645745Z [ERROR] Error occurred in starting fork, check 
> output in log
> 2020-06-07T21:27:28.1646464Z [ERROR] Process Exit Code: 127
> 2020-06-07T21:27:28.1646902Z [ERROR] Crashed tests:
> 2020-06-07T21:27:28.1647394Z [ERROR] 
> org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase
> 2020-06-07T21:27:28.1648133Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:510)
> 2020-06-07T21:27:28.1648856Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:457)
> 2020-06-07T21:27:28.1649769Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:298)
> 2020-06-07T21:27:28.1650587Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:246)
> 2020-06-07T21:27:28.1651376Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
> 2020-06-07T21:27:28.1652213Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
> 2020-06-07T21:27:28.1652986Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)
> 

[jira] [Updated] (FLINK-17510) StreamingKafkaITCase. testKafka timeouts on downloading Kafka

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-17510:
---
Labels: test-stability  (was: )

> StreamingKafkaITCase. testKafka timeouts on downloading Kafka
> -
>
> Key: FLINK-17510
> URL: https://issues.apache.org/jira/browse/FLINK-17510
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Connectors / Kafka, Tests
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> CI: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=585=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}
> 2020-05-05T00:06:49.7268716Z [INFO] 
> ---
> 2020-05-05T00:06:49.7268938Z [INFO]  T E S T S
> 2020-05-05T00:06:49.7269282Z [INFO] 
> ---
> 2020-05-05T00:06:50.5336315Z [INFO] Running 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-05-05T00:11:26.8603439Z [ERROR] Tests run: 3, Failures: 0, Errors: 2, 
> Skipped: 0, Time elapsed: 276.323 s <<< FAILURE! - in 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-05-05T00:11:26.8604882Z [ERROR] testKafka[1: 
> kafka-version:0.11.0.2](org.apache.flink.tests.util.kafka.StreamingKafkaITCase)
>   Time elapsed: 120.024 s  <<< ERROR!
> 2020-05-05T00:11:26.8605942Z java.io.IOException: Process ([wget, -q, -P, 
> /tmp/junit2815750531595874769/downloads/1290570732, 
> https://archive.apache.org/dist/kafka/0.11.0.2/kafka_2.11-0.11.0.2.tgz]) 
> exceeded timeout (12) or number of retries (3).
> 2020-05-05T00:11:26.8606732Z  at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlockingWithRetry(AutoClosableProcess.java:132)
> 2020-05-05T00:11:26.8607321Z  at 
> org.apache.flink.tests.util.cache.AbstractDownloadCache.getOrDownload(AbstractDownloadCache.java:127)
> 2020-05-05T00:11:26.8607826Z  at 
> org.apache.flink.tests.util.cache.LolCache.getOrDownload(LolCache.java:31)
> 2020-05-05T00:11:26.8608343Z  at 
> org.apache.flink.tests.util.kafka.LocalStandaloneKafkaResource.setupKafkaDist(LocalStandaloneKafkaResource.java:98)
> 2020-05-05T00:11:26.8608892Z  at 
> org.apache.flink.tests.util.kafka.LocalStandaloneKafkaResource.before(LocalStandaloneKafkaResource.java:92)
> 2020-05-05T00:11:26.8609602Z  at 
> org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:46)
> 2020-05-05T00:11:26.8610026Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-05-05T00:11:26.8610553Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-05-05T00:11:26.8610958Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-05-05T00:11:26.8611388Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-05-05T00:11:26.8612214Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-05-05T00:11:26.8612706Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-05T00:11:26.8613109Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-05T00:11:26.8613551Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-05T00:11:26.8614019Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-05T00:11:26.8614442Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-05-05T00:11:26.8614869Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-05-05T00:11:26.8615251Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2020-05-05T00:11:26.8615654Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2020-05-05T00:11:26.8616060Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-05T00:11:26.8616465Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-05T00:11:26.8616893Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-05T00:11:26.8617893Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-05T00:11:26.8618490Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-05-05T00:11:26.8619056Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-05-05T00:11:26.8619589Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2020-05-05T00:11:26.8620073Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2020-05-05T00:11:26.8620745Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-05T00:11:26.8621172Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
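The failure is the retry wrapper giving up: the wget is driven through AutoClosableProcess.runBlockingWithRetry, which bounds each attempt with a timeout and caps the number of retries. A hedged sketch of that pattern (illustrative, not Flink's actual implementation; the signature below is made up):

{code:java}
import java.time.Duration;
import java.util.concurrent.TimeUnit;

public class RetryingProcessRunner {

    /** Runs the command, retrying up to maxRetries times, each attempt bounded by timeout. */
    public static void runBlockingWithRetry(String[] command, Duration timeout, int maxRetries)
            throws Exception {
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            Process process = new ProcessBuilder(command).inheritIO().start();
            try {
                if (process.waitFor(timeout.toMillis(), TimeUnit.MILLISECONDS)
                        && process.exitValue() == 0) {
                    return; // attempt succeeded
                }
            } finally {
                process.destroyForcibly(); // no-op if the process already exited
            }
        }
        throw new Exception("Process (" + String.join(" ", command)
                + ") exceeded timeout (" + timeout.toMillis() + ") or number of retries ("
                + maxRetries + ").");
    }
}
{code}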
> 

[jira] [Updated] (FLINK-18174) EventTimeWindowCheckpointingITCase crashes with exit code 127

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-18174:
---
Labels: test-stability  (was: )

> EventTimeWindowCheckpointingITCase crashes with exit code 127
> -
>
> Key: FLINK-18174
> URL: https://issues.apache.org/jira/browse/FLINK-18174
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2882=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0
> {code}
> 2020-06-07T21:27:27.5645995Z [INFO] 
> 
> 2020-06-07T21:27:27.5646433Z [INFO] BUILD FAILURE
> 2020-06-07T21:27:27.5646928Z [INFO] 
> 
> 2020-06-07T21:27:27.5647248Z [INFO] Total time: 13:56 min
> 2020-06-07T21:27:27.5647818Z [INFO] Finished at: 2020-06-07T21:27:27+00:00
> 2020-06-07T21:27:28.1548022Z [INFO] Final Memory: 143M/3643M
> 2020-06-07T21:27:28.1549222Z [INFO] 
> 
> 2020-06-07T21:27:28.1550001Z [WARNING] The requested profile 
> "skip-webui-build" could not be activated because it does not exist.
> 2020-06-07T21:27:28.1633207Z [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test 
> (integration-tests) on project flink-tests: There are test failures.
> 2020-06-07T21:27:28.1634081Z [ERROR] 
> 2020-06-07T21:27:28.1634607Z [ERROR] Please refer to 
> /__w/2/s/flink-tests/target/surefire-reports for the individual test results.
> 2020-06-07T21:27:28.1635808Z [ERROR] Please refer to dump files (if any 
> exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
> 2020-06-07T21:27:28.1636306Z [ERROR] ExecutionException The forked VM 
> terminated without properly saying goodbye. VM crash or System.exit called?
> 2020-06-07T21:27:28.1637602Z [ERROR] Command was /bin/sh -c cd 
> /__w/2/s/flink-tests/target && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java 
> -Xms256m -Xmx2048m -Dmvn.forkNumber=1 -XX:+UseG1GC -jar 
> /__w/2/s/flink-tests/target/surefire/surefirebooter5508139442489684354.jar 
> /__w/2/s/flink-tests/target/surefire 2020-06-07T21-13-41_745-jvmRun1 
> surefire2128152879875854938tmp surefire_200576917820794528868tmp
> 2020-06-07T21:27:28.1638423Z [ERROR] Error occurred in starting fork, check 
> output in log
> 2020-06-07T21:27:28.1638766Z [ERROR] Process Exit Code: 127
> 2020-06-07T21:27:28.1638995Z [ERROR] Crashed tests:
> 2020-06-07T21:27:28.1639297Z [ERROR] 
> org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase
> 2020-06-07T21:27:28.1640007Z [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException The forked VM terminated without properly saying goodbye. 
> VM crash or System.exit called?
> 2020-06-07T21:27:28.1641432Z [ERROR] Command was /bin/sh -c cd 
> /__w/2/s/flink-tests/target && /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java 
> -Xms256m -Xmx2048m -Dmvn.forkNumber=1 -XX:+UseG1GC -jar 
> /__w/2/s/flink-tests/target/surefire/surefirebooter5508139442489684354.jar 
> /__w/2/s/flink-tests/target/surefire 2020-06-07T21-13-41_745-jvmRun1 
> surefire2128152879875854938tmp surefire_200576917820794528868tmp
> 2020-06-07T21:27:28.1645745Z [ERROR] Error occurred in starting fork, check 
> output in log
> 2020-06-07T21:27:28.1646464Z [ERROR] Process Exit Code: 127
> 2020-06-07T21:27:28.1646902Z [ERROR] Crashed tests:
> 2020-06-07T21:27:28.1647394Z [ERROR] 
> org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase
> 2020-06-07T21:27:28.1648133Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.awaitResultsDone(ForkStarter.java:510)
> 2020-06-07T21:27:28.1648856Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.runSuitesForkPerTestSet(ForkStarter.java:457)
> 2020-06-07T21:27:28.1649769Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:298)
> 2020-06-07T21:27:28.1650587Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:246)
> 2020-06-07T21:27:28.1651376Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
> 2020-06-07T21:27:28.1652213Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
> 2020-06-07T21:27:28.1652986Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)
> 

[jira] [Updated] (FLINK-18476) PythonEnvUtilsTest#testStartPythonProcess fails locally

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-18476:
---
Labels: test-stability  (was: )

> PythonEnvUtilsTest#testStartPythonProcess fails locally 
> 
>
> Key: FLINK-18476
> URL: https://issues.apache.org/jira/browse/FLINK-18476
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Tests
>Affects Versions: 1.11.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: test-stability
>
> The 
> {{org.apache.flink.client.python.PythonEnvUtilsTest#testStartPythonProcess}} 
> failed in my local environment as it assumes the environment has 
> {{/usr/bin/python}}. 
> I don't know exactly how I got python on Ubuntu 20.04, but I only have an 
> alias for {{python = python3}}. Therefore the test fails.
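A more robust test setup would probe the PATH for an interpreter instead of assuming a fixed binary; note that a shell alias like python = python3 is never visible to a spawned process, which is why the alias does not help here. A minimal sketch (illustrative, not PythonEnvUtils' actual resolution logic):

{code:java}
import java.io.IOException;

public class PythonInterpreterResolver {

    /** Probes candidate interpreter names on the PATH and returns the first that runs. */
    public static String resolve() {
        for (String candidate : new String[] {"python", "python3"}) {
            try {
                Process probe = new ProcessBuilder(candidate, "--version").start();
                if (probe.waitFor() == 0) {
                    return candidate;
                }
            } catch (IOException e) {
                // candidate not on the PATH; try the next one
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        throw new IllegalStateException("no python interpreter found on the PATH");
    }
}
{code}

On Ubuntu 20.04 specifically, installing the python-is-python3 package makes the bare python name resolvable to spawned processes, unlike a shell alias.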



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18817) 'Kerberized YARN per-job on Docker test' failed

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-18817:
---
Labels: test-stability  (was: )

> 'Kerberized YARN per-job on Docker test' failed
> ---
>
> Key: FLINK-18817
> URL: https://issues.apache.org/jira/browse/FLINK-18817
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: Hequn Cheng
>Priority: Major
>  Labels: test-stability
>
> The end-to-end test failed due to some AccessControlException:
> https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/5169/logs/125
> {code}
> 2020-08-04T13:13:10.2755424Z Failing this attempt.Diagnostics: Failed on 
> local exception: java.io.IOException: 
> org.apache.hadoop.security.AccessControlException: Client cannot authenticate 
> via:[TOKEN, KERBEROS]; Host Details : local host is: 
> "worker1.docker-hadoop-cluster-network/172.19.0.5"; destination host is: 
> "master.docker-hadoop-cluster-network":9000; 
> 2020-08-04T13:13:10.2757620Z java.io.IOException: Failed on local exception: 
> java.io.IOException: org.apache.hadoop.security.AccessControlException: 
> Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host 
> is: "worker1.docker-hadoop-cluster-network/172.19.0.5"; destination host is: 
> "master.docker-hadoop-cluster-network":9000; 
> 2020-08-04T13:13:10.2758550Z  at 
> org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:782)
> 2020-08-04T13:13:10.2758960Z  at 
> org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1493)
> 2020-08-04T13:13:10.2759321Z  at 
> org.apache.hadoop.ipc.Client.call(Client.java:1435)
> 2020-08-04T13:13:10.2759676Z  at 
> org.apache.hadoop.ipc.Client.call(Client.java:1345)
> 2020-08-04T13:13:10.2760305Z  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
> 2020-08-04T13:13:10.2760743Z  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> 2020-08-04T13:13:10.2761087Z  at com.sun.proxy.$Proxy11.getFileInfo(Unknown 
> Source)
> 2020-08-04T13:13:10.2761521Z  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:796)
> 2020-08-04T13:13:10.2761964Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-08-04T13:13:10.2762310Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-08-04T13:13:10.2762741Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-08-04T13:13:10.2763105Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-08-04T13:13:10.2763503Z  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
> 2020-08-04T13:13:10.2763979Z  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
> 2020-08-04T13:13:10.2764474Z  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
> 2020-08-04T13:13:10.2764944Z  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> 2020-08-04T13:13:10.2765417Z  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
> 2020-08-04T13:13:10.2765770Z  at com.sun.proxy.$Proxy12.getFileInfo(Unknown 
> Source)
> 2020-08-04T13:13:10.2766093Z  at 
> org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1649)
> 2020-08-04T13:13:10.2766489Z  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1440)
> 2020-08-04T13:13:10.2767209Z  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1437)
> 2020-08-04T13:13:10.2767699Z  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 2020-08-04T13:13:10.2768187Z  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1437)
> 2020-08-04T13:13:10.2768646Z  at 
> org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
> 2020-08-04T13:13:10.2769051Z  at 
> org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
> 2020-08-04T13:13:10.2769470Z  at 
> org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
> 2020-08-04T13:13:10.2769988Z  at 
> org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
> 2020-08-04T13:13:10.2770438Z  at 
> java.security.AccessController.doPrivileged(Native Method)
> 2020-08-04T13:13:10.2770735Z  at 
> javax.security.auth.Subject.doAs(Subject.java:422)
> 2020-08-04T13:13:10.2771113Z  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1840)
> 2020-08-04T13:13:10.2771503Z  at 
> 
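The "Client cannot authenticate via:[TOKEN, KERBEROS]" error means the Hadoop client held neither a delegation token nor a usable Kerberos ticket when it contacted the NameNode. For reference, a minimal sketch of the explicit keytab login that has to succeed before such HDFS calls (principal and keytab path are placeholders; in this test they would come from the Kerberized Docker setup):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosLoginExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        // Placeholder principal and keytab path.
        UserGroupInformation.loginUserFromKeytab(
                "hadoop-user@EXAMPLE.COM", "/path/to/user.keytab");
        System.out.println("Logged in as " + UserGroupInformation.getCurrentUser());
    }
}
{code}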

[jira] [Commented] (FLINK-18845) class not found exception when i use sql client to try mysql as datasource.

2020-08-10 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175220#comment-17175220
 ] 

Robert Metzger commented on FLINK-18845:


I reduced the priority of this ticket until it is confirmed to be a blocker by 
a committer.

> class not found exception when i use sql client to try mysql as datasource.
> ---
>
> Key: FLINK-18845
> URL: https://issues.apache.org/jira/browse/FLINK-18845
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.11.0
> Environment: sql-client startup cmd as following:
> ./bin/sql-client.sh embedded
>Reporter: HuiyuZhou
>Priority: Minor
> Attachments: 
> flink-root-standalonesession-2-jqdev-l-01897.jqdev.shanghaigm.com.log
>
>
> Created the table as follows:
> USE CATALOG default_catalog;
> USE default_database;
> DROP TABLE IF EXISTS CarsOfFactory;
> CREATE TABLE CarsOfFactory (
>  TS STRING,
>  MANUFACTURE_PLANT STRING,
>  STAGE STRING,
>  CAR_NO BIGINT,
>  UPDATE_TIME TIMESTAMP,
>  PRIMARY KEY (TS,MANUFACTURE_PLANT,STAGE) NOT ENFORCED
> ) WITH (
>  'connector' = 'jdbc',
>  'url' = 'jdbc:mysql://',
>  'table-name' = 'CarsOfFactory',
>  'username' = '',
>  'password' = 'x'
> );
>  
> The SQL client startup log is as follows:
> [^flink-root-standalonesession-2-jqdev-l-01897.jqdev.shanghaigm.com.log]
>  
> I also used arthas to check for the class JdbcRowDataInputFormat; it doesn't 
> exist.
> [arthas@125257]$ getstatic 
> org.apache.flink.connector.jdbc.table.JdbcRowDataInputFormat *
> No class found for: 
> org.apache.flink.connector.jdbc.table.JdbcRowDataInputFormat
> Affect(row-cnt:0) cost in 29 ms.
>  
>  
> The detailed error message in the Apache Flink Dashboard is as follows:
> 2020-08-07 10:54:15
> org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot load 
> user class: org.apache.flink.connector.jdbc.table.JdbcRowDataInputFormat
> ClassLoader info: URL ClassLoader:
> file: 
> '/tmp/blobStore-20e8ab84-2215-4f5f-b5bb-e7672a43fb43/job_aad9c3d36483cff4d20cc2aba399b8c0/blob_p-0b002ebba0e49cbf5ac62789e6b4fb299b5ae235-8fe568bdcd98caf8fb09c58092083ef4'
>  (valid JAR)
> Class not resolvable through given classloader.
> at 
> org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:288)
> at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.(OperatorChain.java:126)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:453)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:522)
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:546)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.connector.jdbc.table.JdbcRowDataInputFormat
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at 
> org.apache.flink.util.FlinkUserCodeClassLoader.loadClassWithoutExceptionHandling(FlinkUserCodeClassLoader.java:61)
> at 
> org.apache.flink.util.ChildFirstClassLoader.loadClassWithoutExceptionHandling(ChildFirstClassLoader.java:65)
> at 
> org.apache.flink.util.FlinkUserCodeClassLoader.loadClass(FlinkUserCodeClassLoader.java:48)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:78)
> at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1868)
> at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
> at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
> at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
> at 

[jira] [Updated] (FLINK-18845) class not found exception when i use sql client to try mysql as datasource.

2020-08-10 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-18845:
---
Priority: Minor  (was: Blocker)

> class not found exception when i use sql client to try mysql as datasource.
> ---
>
> Key: FLINK-18845
> URL: https://issues.apache.org/jira/browse/FLINK-18845
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.11.0
> Environment: sql-client startup cmd as following:
> ./bin/sql-client.sh embedded
>Reporter: HuiyuZhou
>Priority: Minor
> Attachments: 
> flink-root-standalonesession-2-jqdev-l-01897.jqdev.shanghaigm.com.log
>
>
> Created the table as follows:
> USE CATALOG default_catalog;
> USE default_database;
> DROP TABLE IF EXISTS CarsOfFactory;
> CREATE TABLE CarsOfFactory (
>  TS STRING,
>  MANUFACTURE_PLANT STRING,
>  STAGE STRING,
>  CAR_NO BIGINT,
>  UPDATE_TIME TIMESTAMP,
>  PRIMARY KEY (TS,MANUFACTURE_PLANT,STAGE) NOT ENFORCED
> ) WITH (
>  'connector' = 'jdbc',
>  'url' = 'jdbc:mysql://',
>  'table-name' = 'CarsOfFactory',
>  'username' = '',
>  'password' = 'x'
> );
>  
> The SQL client startup log is as follows:
> [^flink-root-standalonesession-2-jqdev-l-01897.jqdev.shanghaigm.com.log]
>  
> I also used arthas to check for the class JdbcRowDataInputFormat; it doesn't 
> exist.
> [arthas@125257]$ getstatic 
> org.apache.flink.connector.jdbc.table.JdbcRowDataInputFormat *
> No class found for: 
> org.apache.flink.connector.jdbc.table.JdbcRowDataInputFormat
> Affect(row-cnt:0) cost in 29 ms.
>  
>  
> The detailed error message in the Apache Flink Dashboard is as follows:
> 2020-08-07 10:54:15
> org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot load 
> user class: org.apache.flink.connector.jdbc.table.JdbcRowDataInputFormat
> ClassLoader info: URL ClassLoader:
> file: 
> '/tmp/blobStore-20e8ab84-2215-4f5f-b5bb-e7672a43fb43/job_aad9c3d36483cff4d20cc2aba399b8c0/blob_p-0b002ebba0e49cbf5ac62789e6b4fb299b5ae235-8fe568bdcd98caf8fb09c58092083ef4'
>  (valid JAR)
> Class not resolvable through given classloader.
> at 
> org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:288)
> at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.(OperatorChain.java:126)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:453)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:522)
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:546)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.connector.jdbc.table.JdbcRowDataInputFormat
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at 
> org.apache.flink.util.FlinkUserCodeClassLoader.loadClassWithoutExceptionHandling(FlinkUserCodeClassLoader.java:61)
> at 
> org.apache.flink.util.ChildFirstClassLoader.loadClassWithoutExceptionHandling(ChildFirstClassLoader.java:65)
> at 
> org.apache.flink.util.FlinkUserCodeClassLoader.loadClass(FlinkUserCodeClassLoader.java:48)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at 
> org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:78)
> at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1868)
> at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
> at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
> at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
> at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
> at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
> at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)

[GitHub] [flink] flinkbot edited a comment on pull request #13051: [FLINK-18760][runtime] Redundant task managers should be released when there's no job running in session cluster

2020-08-10 Thread GitBox


flinkbot edited a comment on pull request #13051:
URL: https://github.com/apache/flink/pull/13051#issuecomment-667970728


   
   ## CI report:
   
   * b387eae159be898230e9b5a410a295f9960dabb1 UNKNOWN
   * 0742547410ef77d61dd2566555e0540ff1558f4c UNKNOWN
   *  Unknown: [CANCELED](TBD) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13051: [FLINK-18760][runtime] Redundant task managers should be released when there's no job running in session cluster

2020-08-10 Thread GitBox


flinkbot edited a comment on pull request #13051:
URL: https://github.com/apache/flink/pull/13051#issuecomment-667970728


   
   ## CI report:
   
   * b387eae159be898230e9b5a410a295f9960dabb1 UNKNOWN
   *  Unknown: [CANCELED](TBD) 
   * 0742547410ef77d61dd2566555e0540ff1558f4c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] xintongsong commented on pull request #13051: [FLINK-18760][runtime] Redundant task managers should be released when there's no job running in session cluster

2020-08-10 Thread GitBox


xintongsong commented on pull request #13051:
URL: https://github.com/apache/flink/pull/13051#issuecomment-671715738


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] xintongsong commented on a change in pull request #13051: [FLINK-18760][runtime] Redundant task managers should be released when there's no job running in session cluster

2020-08-10 Thread GitBox


xintongsong commented on a change in pull request #13051:
URL: https://github.com/apache/flink/pull/13051#discussion_r468315396



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/slotmanager/SlotManagerImpl.java
##
@@ -1256,7 +1256,10 @@ void checkTaskManagerTimeoutsAndRedundancy() {
}
 
int slotsDiff = redundantTaskManagerNum * 
numSlotsPerWorker - freeSlots.size();
-   if (slotsDiff > 0) {
+   if (freeSlots.size() == slots.size()) {

Review comment:
   If the job failover takes long, then all the idle task managers will 
time out and be released. There is little benefit in keeping the redundant task 
managers, since we need to wait for the other task managers to be re-launched 
anyway. (Assuming the job needs more task managers than the redundant ones to 
execute.)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi commented on a change in pull request #13101: [FLINK-18867][hive] Generic table stored in Hive catalog is incompati…

2020-08-10 Thread GitBox


JingsongLi commented on a change in pull request #13101:
URL: https://github.com/apache/flink/pull/13101#discussion_r468312971



##
File path: 
flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/table/catalog/hive/HiveCatalogGenericMetadataTest.java
##
@@ -65,6 +69,43 @@ public void testGenericTableSchema() throws Exception {
}
}
 
+   @Test

Review comment:
   We can add a comment: `NOTE: Do not modify this test, it is important for 
forward compatibility.`.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi commented on a change in pull request #13101: [FLINK-18867][hive] Generic table stored in Hive catalog is incompati…

2020-08-10 Thread GitBox


JingsongLi commented on a change in pull request #13101:
URL: https://github.com/apache/flink/pull/13101#discussion_r468312570



##
File path: 
flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/table/catalog/hive/HiveCatalogGenericMetadataTest.java
##
@@ -65,6 +69,43 @@ public void testGenericTableSchema() throws Exception {
}
}
 
+   @Test
+   public void testTableSchemaCompatibility() throws Exception {
+   catalog.createDatabase(db1, createDb(), false);
+   ObjectPath tablePath = new ObjectPath(db1, "generic110");
+
+   // create a table with old schema properties
+   Table hiveTable = 
org.apache.hadoop.hive.ql.metadata.Table.getEmptyTable(tablePath.getDatabaseName(),
+   tablePath.getObjectName());
+   // create table generic110 (c char(265), vc varchar(65536), ts 
timestamp(3), watermark for ts as ts)
+   hiveTable.setDbName(tablePath.getDatabaseName());
+   hiveTable.setTableName(tablePath.getObjectName());
+   hiveTable.getParameters().put(CatalogConfig.IS_GENERIC, "true");

Review comment:
   Can we add tests for all types and watermarks and computed columns?
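
   For illustration, a minimal, self-contained sketch of a schema covering a
   computed column and a watermark (the DDL and names are assumptions for
   illustration, not the actual test code):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Sketch of the kind of schema the compatibility test could round-trip:
// physical columns plus a computed column and a watermark.
public class SchemaCoverageSketch {
    public static void main(String[] args) {
        TableEnvironment tableEnv =
            TableEnvironment.create(EnvironmentSettings.newInstance().build());
        tableEnv.executeSql(
            "CREATE TABLE generic_coverage (" +
            "  c STRING," +
            "  ts TIMESTAMP(3)," +
            "  ts_shifted AS ts + INTERVAL '1' SECOND," +      // computed column
            "  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND" + // watermark
            ") WITH ('connector' = 'datagen')");
    }
}
```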





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18520) New Table Function type inference fails

2020-08-10 Thread Mulan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175204#comment-17175204
 ] 

Mulan commented on FLINK-18520:
---

Has AsyncTableFunction been fixed?
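
For context, a minimal self-contained sketch of the registration/usage path the
quoted issue describes (the query is an assumption; the UDTF mirrors the report):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.functions.TableFunction;

public class SplitUsageSketch {
    public static class Split extends TableFunction<String> {
        public void eval(String str, String ch) {
            if (str != null && !str.isEmpty()) {
                for (String s : str.split(ch)) {
                    collect(s);
                }
            }
        }
    }

    public static void main(String[] args) {
        TableEnvironment tableEnv =
            TableEnvironment.create(EnvironmentSettings.newInstance().build());
        // Register with the new function type inference, as in the issue.
        tableEnv.createFunction("my_split", Split.class);
        // On the affected version, validating this query fails with
        // "No match found for function signature my_split(<CHARACTER>, <CHARACTER>)".
        tableEnv.executeSql(
            "SELECT s FROM (VALUES ('a,b,c')) AS t(line), " +
            "LATERAL TABLE(my_split(line, ',')) AS T(s)");
    }
}
{code}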

> New Table Function type inference fails
> ---
>
> Key: FLINK-18520
> URL: https://issues.apache.org/jira/browse/FLINK-18520
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Benchao Li
>Assignee: Timo Walther
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.1
>
>
> For a simple UDTF like 
> {code:java}
> public class Split extends TableFunction<String> {
>   public Split(){}
>   public void eval(String str, String ch) {
>   if (str == null || str.isEmpty()) {
>   return;
>   } else {
>   String[] ss = str.split(ch);
>   for (String s : ss) {
>   collect(s);
>   }
>   }
>   }
>   }
> {code}
> register it using new function type inference 
> {{tableEnv.createFunction("my_split", Split.class);}} and using it in a 
> simple query will fail with the following exception:
> {code:java}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> SQL validation failed. From line 1, column 93 to line 1, column 115: No match 
> found for function signature my_split(<CHARACTER>, <CHARACTER>)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:146)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:108)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:187)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:527)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:204)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:716)
>   at com.bytedance.demo.SqlTest.main(SqlTest.java:64)
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 93 to line 1, column 115: No match found for function signature 
> my_split(<CHARACTER>, <CHARACTER>)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:457)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:839)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:824)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:5089)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.handleUnresolvedFunction(SqlValidatorImpl.java:1882)
>   at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:305)
>   at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:218)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5858)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5845)
>   at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1800)
>   at 
> org.apache.calcite.sql.validate.ProcedureNamespace.validateImpl(ProcedureNamespace.java:57)
>   at 
> org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:1110)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:1084)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3256)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3238)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateJoin(SqlValidatorImpl.java:3303)
>   at 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #13090: [FLINK-18844][json][maxwell] Support maxwell-json format to read Maxwell changelogs

2020-08-10 Thread GitBox


flinkbot edited a comment on pull request #13090:
URL: https://github.com/apache/flink/pull/13090#issuecomment-670818966


   
   ## CI report:
   
   * 3faa1b539faff3a59454cf9710ea69f4dd395e05 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5305)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5308)
 
   * d2a7bde48d5c4e4106e9f889e0a3729c5a71c716 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5379)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13107: [FLINK-18874][python] Support conversion between Table and DataStream.

2020-08-10 Thread GitBox


flinkbot edited a comment on pull request #13107:
URL: https://github.com/apache/flink/pull/13107#issuecomment-671348618


   
   ## CI report:
   
   * d5d59fc29f056d9f50c0bf13b34a699076a890a6 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5362)
 
   * 76ffb2e9aa892213380aeb81db6ced81f6c2ab2a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5380)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13112: [BP-1.11][FLINK-18862][table-planner-blink] Fix LISTAGG throws BinaryRawValueData cannot be cast to StringData exception during runti

2020-08-10 Thread GitBox


flinkbot edited a comment on pull request #13112:
URL: https://github.com/apache/flink/pull/13112#issuecomment-671705106


   
   ## CI report:
   
   * cea1a0f705453b6e5a03f8aade0cf4fc513fb608 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5381)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi commented on a change in pull request #13010: [FLINK-18735][table] Add support for more types to DataGen source

2020-08-10 Thread GitBox


JingsongLi commented on a change in pull request #13010:
URL: https://github.com/apache/flink/pull/13010#discussion_r468310256



##
File path: docs/dev/table/connectors/datagen.md
##
@@ -29,25 +29,24 @@ under the License.
 * This will be replaced by the TOC
 {:toc}
 
-The DataGen connector allows for reading by data generation rules.
+The DataGen connector allows for creating tables based on in-memory data 
generation.
+This is useful when developing queries locally without access to external 
systems such as Kafka.
+Tables can include [Computed Column syntax]({% link dev/table/sql/create.md 
%}#create-table) which allows for flexible record generation.
 
-The DataGen connector can work with [Computed Column syntax]({% link 
dev/table/sql/create.md %}#create-table).
-This allows you to generate records flexibly.
+The DataGen connector is built-in; no additional dependencies are required.
 
-The DataGen connector is built-in.
+Usage
+-
 
-Attention Complex types are not 
supported: Array, Map, Row. Please construct these types by computed column.
+By default, a DataGen table will create an unbounded number of rows with a 
random value for each column.
+For variable sized types, char/varchar/string/array/map/multiset, the length 
can be specified.
+Additionally, a total number of rows can be specified, resulting in a bounded 
table.
 
-How to create a DataGen table
-
-
-The boundedness of table: when the generation of field data in the table is 
completed, the reading
-is finished. So the boundedness of the table depends on the boundedness of 
fields.
-
-For each field, there are two ways to generate data:
+There also exists a sequence generator, where users specify a sequence of 
start and end values.
+Complex types cannot be generated as a sequence.
+If any column in a table is a sequence type, the table will be bounded and end 
when the first sequence completes.
 
-- Random generator is the default generator, you can specify random max and 
min values. For char/varchar/string, the length can be specified. It is a 
unbounded generator.
-- Sequence generator, you can specify sequence start and end values. It is a 
bounded generator, when the sequence number reaches the end value, the reading 
ends.
+Time types are always the local machine's current system time.

Review comment:
   Maybe we can have a table to show all types.
   Display the generation strategies they support, and the required parameters?
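
   For reference, a short sketch of how the generation strategy and its
   parameters are spelled per field (illustrative DDL, not text from the docs):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Shows the per-field generation strategy and its parameters, which the
// proposed table would summarize: a bounded sequence field plus a random
// variable-length string field.
public class DataGenOptionsSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
            TableEnvironment.create(EnvironmentSettings.newInstance().build());
        tEnv.executeSql(
            "CREATE TABLE orders (" +
            "  order_id BIGINT," +
            "  note STRING" +
            ") WITH (" +
            "  'connector' = 'datagen'," +
            "  'fields.order_id.kind' = 'sequence'," + // bounded: start..end
            "  'fields.order_id.start' = '1'," +
            "  'fields.order_id.end' = '1000'," +
            "  'fields.note.kind' = 'random'," +       // unbounded by itself
            "  'fields.note.length' = '16'" +
            ")");
    }
}
```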





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi commented on a change in pull request #13010: [FLINK-18735][table] Add support for more types to DataGen source

2020-08-10 Thread GitBox


JingsongLi commented on a change in pull request #13010:
URL: https://github.com/apache/flink/pull/13010#discussion_r468309695



##
File path: 
flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/factories/datagen/RandomGeneratorVisitor.java
##
@@ -0,0 +1,362 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.factories.datagen;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.functions.RuntimeContext;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ConfigOptions;
+import org.apache.flink.configuration.ReadableConfig;
+import org.apache.flink.runtime.state.FunctionInitializationContext;
+import org.apache.flink.streaming.api.functions.source.datagen.DataGenerator;
+import org.apache.flink.streaming.api.functions.source.datagen.RandomGenerator;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.data.StringData;
+import org.apache.flink.table.types.logical.ArrayType;
+import org.apache.flink.table.types.logical.BigIntType;
+import org.apache.flink.table.types.logical.BooleanType;
+import org.apache.flink.table.types.logical.CharType;
+import org.apache.flink.table.types.logical.DayTimeIntervalType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.DoubleType;
+import org.apache.flink.table.types.logical.FloatType;
+import org.apache.flink.table.types.logical.IntType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.MapType;
+import org.apache.flink.table.types.logical.MultisetType;
+import org.apache.flink.table.types.logical.RowType;
+import org.apache.flink.table.types.logical.SmallIntType;
+import org.apache.flink.table.types.logical.TinyIntType;
+import org.apache.flink.table.types.logical.VarCharType;
+import org.apache.flink.table.types.logical.YearMonthIntervalType;
+import org.apache.flink.types.Row;
+
+import java.math.BigDecimal;
+import java.math.MathContext;
+import java.math.RoundingMode;
+import java.util.List;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.configuration.ConfigOptions.key;
+import static 
org.apache.flink.table.factories.DataGenTableSourceFactory.FIELDS;
+import static 
org.apache.flink.table.factories.DataGenTableSourceFactory.LENGTH;
+import static org.apache.flink.table.factories.DataGenTableSourceFactory.MAX;
+import static org.apache.flink.table.factories.DataGenTableSourceFactory.MIN;
+
+
+/**
+ * Creates a random {@link DataGeneratorContainer} for a particular logical 
type.
+ */
+@Internal
+@SuppressWarnings("unchecked")
+public class RandomGeneratorVisitor extends DataGenVisitorBase {
+
+   public static final int RANDOM_STRING_LENGTH_DEFAULT = 100;
+
+   private static final int RANDOM_COLLECTION_LENGTH_DEFAULT = 3;
+
+   private final ConfigOptions.OptionBuilder minKey;
+
+   private final ConfigOptions.OptionBuilder maxKey;
+
+   public RandomGeneratorVisitor(String name, ReadableConfig config) {
+   super(name, config);
+
+   this.minKey = key(FIELDS + "." + name + "." + MIN);
+   this.maxKey = key(FIELDS + "." + name + "." + MAX);
+   }
+
+   @Override
+   public DataGeneratorContainer visit(BooleanType booleanType) {
+   return 
DataGeneratorContainer.of(RandomGenerator.booleanGenerator());
+   }
+
+   @Override
+   public DataGeneratorContainer visit(CharType booleanType) {
+   ConfigOption<Integer> lenOption = key(FIELDS + "." + name + "." 
+ LENGTH)
+   .intType()
+   .defaultValue(RANDOM_STRING_LENGTH_DEFAULT);
+   return 
DataGeneratorContainer.of(getRandomStringGenerator(config.get(lenOption)), 
lenOption);
+   }
+
+   @Override
+   public DataGeneratorContainer visit(VarCharType booleanType) {
+   ConfigOption<Integer> lenOption = key(FIELDS + "." + name + "." 
+ LENGTH)
+   .intType()
+   .defaultValue(RANDOM_STRING_LENGTH_DEFAULT);
+   return 

[GitHub] [flink] JingsongLi commented on a change in pull request #13010: [FLINK-18735][table] Add support for more types to DataGen source

2020-08-10 Thread GitBox


JingsongLi commented on a change in pull request #13010:
URL: https://github.com/apache/flink/pull/13010#discussion_r468305915



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/datagen/DataGeneratorSource.java
##
@@ -77,8 +102,11 @@ public void run(SourceContext<T> ctx) throws Exception {
 
while (isRunning) {
for (int i = 0; i < taskRowsPerSecond; i++) {
-   if (isRunning && generator.hasNext()) {
+   if (isRunning && generator.hasNext() && 
(numberOfRows == null || outputSoFar < toOutput)) {
synchronized (ctx.getCheckpointLock()) {
+   if (numberOfRows != null) {
+   outputSoFar++;

Review comment:
   I have an idea: print `outputSoFar` in `close`. It is good for debugging.

##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/source/datagen/DataGeneratorSource.java
##
@@ -77,8 +102,11 @@ public void run(SourceContext<T> ctx) throws Exception {
 
while (isRunning) {
for (int i = 0; i < taskRowsPerSecond; i++) {
-   if (isRunning && generator.hasNext()) {
+   if (isRunning && generator.hasNext() && 
(numberOfRows == null || outputSoFar < toOutput)) {
synchronized (ctx.getCheckpointLock()) {
+   if (numberOfRows != null) {

Review comment:
   Remove this if?
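
   A minimal sketch of the `close()` logging suggestion from the first comment
   above (`outputSoFar` is an assumed field name, not the PR's code):

```java
import org.apache.flink.streaming.api.functions.source.RichSourceFunction;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch: report how many rows were emitted when the source closes,
// which makes debugging bounded runs easier. `outputSoFar` is assumed to
// be incremented by run() for every emitted record.
public abstract class CountingSourceSketch<T> extends RichSourceFunction<T> {
    private static final Logger LOG =
        LoggerFactory.getLogger(CountingSourceSketch.class);

    protected long outputSoFar;

    @Override
    public void close() throws Exception {
        super.close();
        LOG.info("Source emitted {} rows before closing.", outputSoFar);
    }
}
```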





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17511) "RocksDB Memory Management end-to-end test" fails with "Current block cache usage 202123272 larger than expected memory limit 200000000"

2020-08-10 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175198#comment-17175198
 ] 

Dian Fu commented on FLINK-17511:
-

Another instance: 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=5376=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529]

> "RocksDB Memory Management end-to-end test" fails with "Current block cache 
> usage 202123272 larger than expected memory limit 200000000"
> 
>
> Key: FLINK-17511
> URL: https://issues.apache.org/jira/browse/FLINK-17511
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends, Tests
>Affects Versions: 1.10.0, 1.12.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> CI: https://travis-ci.org/github/apache/flink/jobs/683049297
> {code}
> Waiting for job to process up to 10000 records, current progress: 9751 records ...
> Waiting for job to process up to 10000 records, current progress: 9852 records ...
> Waiting for job to process up to 10000 records, current progress: 9852 records ...
> Waiting for job to process up to 10000 records, current progress: 9926 records ...
> Cancelling job d5a8e2c5a382b51573934c91b7b7ac9b.
> Cancelled job d5a8e2c5a382b51573934c91b7b7ac9b.
> [INFO] Current block cache usage for RocksDB instance in slot was 152229488
> [INFO] Current block cache usage for RocksDB instance in slot was 202123272
> [ERROR] Current block cache usage 202123272 larger than expected memory limit 
> 200000000
> [FAIL] Test script contains errors.
> Checking for errors...
> No errors in log files.
> Checking for exceptions...
> No exceptions in log files.
> Checking for non-empty .out files...
> No non-empty .out files.
> [FAIL] 'RocksDB Memory Management end-to-end test' failed after 8 minutes and 
> 20 seconds! Test exited with exit code 1
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18885) Add partitioning interfaces for Python DataStream API.

2020-08-10 Thread Shuiqiang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuiqiang Chen updated FLINK-18885:
---
Summary: Add partitioning interfaces for Python DataStream API.  (was: Add 
partitioning interfaces for Python DataStream.)

> Add partitioning interfaces for Python DataStream API.
> --
>
> Key: FLINK-18885
> URL: https://issues.apache.org/jira/browse/FLINK-18885
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Shuiqiang Chen
>Priority: Major
> Fix For: 1.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #13112: [BP-1.11][FLINK-18862][table-planner-blink] Fix LISTAGG throws BinaryRawValueData cannot be cast to StringData exception during runtime

2020-08-10 Thread GitBox


flinkbot commented on pull request #13112:
URL: https://github.com/apache/flink/pull/13112#issuecomment-671705106


   
   ## CI report:
   
   * cea1a0f705453b6e5a03f8aade0cf4fc513fb608 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18888) Support execute_async for StreamExecutionEnvironment.

2020-08-10 Thread Shuiqiang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuiqiang Chen updated FLINK-18888:
---
Summary: Support execute_async for StreamExecutionEnvironment.  (was: Add 
ElasticSearch connector for Python DataStream API)

> Support execute_async for StreamExecutionEnvironment.
> -
>
> Key: FLINK-18888
> URL: https://issues.apache.org/jira/browse/FLINK-18888
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Shuiqiang Chen
>Priority: Major
> Fix For: 1.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13107: [FLINK-18874][python] Support conversion between Table and DataStream.

2020-08-10 Thread GitBox


flinkbot edited a comment on pull request #13107:
URL: https://github.com/apache/flink/pull/13107#issuecomment-671348618


   
   ## CI report:
   
   * d5d59fc29f056d9f50c0bf13b34a699076a890a6 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5362)
 
   * 76ffb2e9aa892213380aeb81db6ced81f6c2ab2a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18887) Add ElasticSearch connector for Python DataStream API

2020-08-10 Thread Shuiqiang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuiqiang Chen updated FLINK-18887:
---
Fix Version/s: 1.12.0

> Add ElasticSearch connector for Python DataStream API
> -
>
> Key: FLINK-18887
> URL: https://issues.apache.org/jira/browse/FLINK-18887
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Shuiqiang Chen
>Priority: Major
> Fix For: 1.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18888) Add ElasticSearch connector for Python DataStream API

2020-08-10 Thread Shuiqiang Chen (Jira)
Shuiqiang Chen created FLINK-18888:
--

 Summary: Add ElasticSearch connector for Python DataStream API
 Key: FLINK-18888
 URL: https://issues.apache.org/jira/browse/FLINK-18888
 Project: Flink
  Issue Type: Sub-task
  Components: API / Python
Reporter: Shuiqiang Chen
 Fix For: 1.12.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13090: [FLINK-18844][json][maxwell] Support maxwell-json format to read Maxwell changelogs

2020-08-10 Thread GitBox


flinkbot edited a comment on pull request #13090:
URL: https://github.com/apache/flink/pull/13090#issuecomment-670818966


   
   ## CI report:
   
   * 3faa1b539faff3a59454cf9710ea69f4dd395e05 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5305)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5308)
 
   * d2a7bde48d5c4e4106e9f889e0a3729c5a71c716 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18887) Add ElasticSearch connector for Python DataStream API

2020-08-10 Thread Shuiqiang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuiqiang Chen updated FLINK-18887:
---
Component/s: API / Python

> Add ElasticSearch connector for Python DataStream API
> -
>
> Key: FLINK-18887
> URL: https://issues.apache.org/jira/browse/FLINK-18887
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Shuiqiang Chen
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18887) Add ElasticSearch connector for Python DataStream API

2020-08-10 Thread Shuiqiang Chen (Jira)
Shuiqiang Chen created FLINK-18887:
--

 Summary: Add ElasticSearch connector for Python DataStream API
 Key: FLINK-18887
 URL: https://issues.apache.org/jira/browse/FLINK-18887
 Project: Flink
  Issue Type: Sub-task
Reporter: Shuiqiang Chen






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18886) Support Kafka connectors for Python DataStream API

2020-08-10 Thread Shuiqiang Chen (Jira)
Shuiqiang Chen created FLINK-18886:
--

 Summary: Support Kafka connectors for Python DataStream API
 Key: FLINK-18886
 URL: https://issues.apache.org/jira/browse/FLINK-18886
 Project: Flink
  Issue Type: Sub-task
  Components: API / Python
Reporter: Shuiqiang Chen
 Fix For: 1.12.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18885) Add partitioning interfaces for Python DataStream.

2020-08-10 Thread Shuiqiang Chen (Jira)
Shuiqiang Chen created FLINK-18885:
--

 Summary: Add partitioning interfaces for Python DataStream.
 Key: FLINK-18885
 URL: https://issues.apache.org/jira/browse/FLINK-18885
 Project: Flink
  Issue Type: Sub-task
  Components: API / Python
Reporter: Shuiqiang Chen
 Fix For: 1.12.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18884) Add chaining strategy and slot sharing group interfaces for Python DataStream API

2020-08-10 Thread Shuiqiang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shuiqiang Chen updated FLINK-18884:
---
Summary: Add chaining strategy and slot sharing group interfaces for Python 
DataStream API  (was: Add chaining strategy and slot sharing group interfaces 
for DataStream)

> Add chaining strategy and slot sharing group interfaces for Python DataStream 
> API
> -
>
> Key: FLINK-18884
> URL: https://issues.apache.org/jira/browse/FLINK-18884
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Shuiqiang Chen
>Priority: Major
> Fix For: 1.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18884) Add chaining strategy and slot sharing group interfaces for DataStream

2020-08-10 Thread Shuiqiang Chen (Jira)
Shuiqiang Chen created FLINK-18884:
--

 Summary: Add chaining strategy and slot sharing group interfaces 
for DataStream
 Key: FLINK-18884
 URL: https://issues.apache.org/jira/browse/FLINK-18884
 Project: Flink
  Issue Type: Sub-task
  Components: API / Python
Reporter: Shuiqiang Chen
 Fix For: 1.12.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18883) Support reduce() operation for Python KeyedStream.

2020-08-10 Thread Hequn Cheng (Jira)
Hequn Cheng created FLINK-18883:
---

 Summary: Support reduce() operation for Python KeyedStream.
 Key: FLINK-18883
 URL: https://issues.apache.org/jira/browse/FLINK-18883
 Project: Flink
  Issue Type: Sub-task
  Components: API / Python
Reporter: Hequn Cheng
Assignee: Hequn Cheng
 Fix For: 1.12.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] shuiqiangchen commented on pull request #13107: [FLINK-18874][python] Support conversion between Table and DataStream.

2020-08-10 Thread GitBox


shuiqiangchen commented on pull request #13107:
URL: https://github.com/apache/flink/pull/13107#issuecomment-671703560


   @hequn8128  Thank you! I have added type hints for the APIs in the latest 
commit, please have a look. I think we could add examples for these APIs in the 
Python documentation later.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #13112: [BP-1.11][FLINK-18862][table-planner-blink] Fix LISTAGG throws BinaryRawValueData cannot be cast to StringData exception during runtime

2020-08-10 Thread GitBox


flinkbot commented on pull request #13112:
URL: https://github.com/apache/flink/pull/13112#issuecomment-671702289


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit cea1a0f705453b6e5a03f8aade0cf4fc513fb608 (Tue Aug 11 
03:25:09 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18862) Fix LISTAGG throws BinaryRawValueData cannot be cast to StringData exception in runtime

2020-08-10 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175195#comment-17175195
 ] 

Jark Wu commented on FLINK-18862:
-

Fixed in 
 - master (1.12.0): bfbdca9f574b4201ba7a400607c5169134ecf0a2
 - 1.11.2: TODO

> Fix LISTAGG throws BinaryRawValueData cannot be cast to StringData exception 
> in runtime
> ---
>
> Key: FLINK-18862
> URL: https://issues.apache.org/jira/browse/FLINK-18862
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.1
>Reporter: YUJIANBO
>Assignee: Jark Wu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.2
>
> Attachments: sql 中agg算子的 count(1) 的结果强转成String类型再经过一次agg的操作就不能成功.txt
>
>
> 1. Env:flinksql、 version 1.11.1,perjob mode
> 2. Error:org.apache.flink.table.data.binary.BinaryRawValueData cannot be cast 
> to org.apache.flink.table.data.StringData
> 3. Job:
> (1) create a kafka table
> {code:java}
> CREATE TABLE kafka(
> x String,
> y String
> )with(
>'connector' = 'kafka',
> ..
> )
> {code}
> (2)create a view:
> {code:java}
>CREATE VIEW view1 AS
>SELECT 
>x, 
>y, 
>CAST(COUNT(1) AS VARCHAR) AS ct
>FROM kafka
>GROUP BY 
>x, y
> {code}
> (3) aggregate on the view:
> {code:java}
> select 
>  x, 
>  LISTAGG(CONCAT_WS('=', y, ct), ',') AS lists
> FROM view1
> GROUP BY x
> {code}
> And then the exception is 
> thrown: org.apache.flink.table.data.binary.BinaryRawValueData cannot be cast 
> to org.apache.flink.table.data.StringData
> The problem is that there is no RawValueData in the query. The result type 
> of count(1) should be bigint, not RawValueData. 
>
> (4) If there is no aggregation, the job can run successfully.
> {code:java}
> select 
>   x, 
>   CONCAT_WS('=', y, ct)
> from view1
> {code}
> The detailed exception:
> {code:java}
> java.lang.ClassCastException: 
> org.apache.flink.table.data.binary.BinaryRawValueData cannot be cast to 
> org.apache.flink.table.data.StringData
>   at 
> org.apache.flink.table.data.GenericRowData.getString(GenericRowData.java:169) 
> ~[ad_features_auto-1.0-SNAPSHOT.jar:?]
>   at 
> org.apache.flink.table.data.JoinedRowData.getString(JoinedRowData.java:139) 
> ~[flink-table-blink_2.11-1.11.1.jar:?]
>   at org.apache.flink.table.data.RowData.get(RowData.java:273) 
> ~[ad_features_auto-1.0-SNAPSHOT.jar:?]
>   at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copyRowData(RowDataSerializer.java:156)
>  ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:123)
>  ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:50)
>  ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:715)
>  ~[ad_features_auto-1.0-SNAPSHOT.jar:?]
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692)
>  ~[ad_features_auto-1.0-SNAPSHOT.jar:?]
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672)
>  ~[ad_features_auto-1.0-SNAPSHOT.jar:?]
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
>  ~[ad_features_auto-1.0-SNAPSHOT.jar:?]
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
>  ~[ad_features_auto-1.0-SNAPSHOT.jar:?]
>   at 
> org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:53)
>  ~[ad_features_auto-1.0-SNAPSHOT.jar:?]
>   at 
> org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:205)
>  ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:43)
>  ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
>   at 
> org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:85)
>  ~[ad_features_auto-1.0-SNAPSHOT.jar:?]
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:161)
>  ~[ad_features_auto-1.0-SNAPSHOT.jar:?]
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:178)
>  

[GitHub] [flink] wuchong opened a new pull request #13112: [BP-1.11][FLINK-18862][table-planner-blink] Fix LISTAGG throws BinaryRawValueData cannot be cast to StringData exception during runtime

2020-08-10 Thread GitBox


wuchong opened a new pull request #13112:
URL: https://github.com/apache/flink/pull/13112


   This is a backport to release-1.11 branch. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-18872) Aggregate with mini-batch does not respect state retention

2020-08-10 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li closed FLINK-18872.
--
Resolution: Duplicate

> Aggregate with mini-batch does not respect state retention
> --
>
> Key: FLINK-18872
> URL: https://issues.apache.org/jira/browse/FLINK-18872
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Table SQL / Runtime
>Affects Versions: 1.10.2, 1.12.0, 1.11.1
>Reporter: Benchao Li
>Priority: Major
>
> MiniBatchGroupAggFunction and MiniBatchGlobalGroupAggFunction do not 
> respect the state retention config, so the state will grow indefinitely.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18882) Investigate supporting Hive interval type

2020-08-10 Thread Rui Li (Jira)
Rui Li created FLINK-18882:
--

 Summary: Investigate supporting Hive interval type
 Key: FLINK-18882
 URL: https://issues.apache.org/jira/browse/FLINK-18882
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Hive
Reporter: Rui Li


Some UDFs return interval results, and such UDFs cannot be called via Flink 
because the interval type is not supported at the moment.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18738) Revisit resource management model for python processes.

2020-08-10 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175189#comment-17175189
 ] 

Xintong Song commented on FLINK-18738:
--

Hi all,

[~dianfu] and I had an offline discussion regarding the process model and 
resource management for python UDFs. Here are our outcomes and some open 
questions. We would like to collect some feedback on the general direction 
before diving into the design details.

h3. Process Model

Before discussing the memory management, it would be better to first get 
consensus on the long-term process model for python UDFs. There are several 
options from our offline discussion and the previous discussions in FLINK-17923.
# *One python process per python operator.* This is the current approach. The 
operator is responsible for launching and terminating the python processes.
# *One python process per slot.* TaskManager is responsible for launching the 
python processes. A python process will be launched when the slot is created 
(allocated), and terminated when the slot is destroyed (freed).
# *One python process per TaskManager.* The deployment framework is responsible 
for launching the python processes. Then the python operators (in the java 
process) deploy the workload to the python processes.

Among the 3 options above, *Dian and I are in favor of option 2).*

*Problems for option 1)*
Low efficiency. In the case of multiple python operators in one slot, launching one 
python process per operator will introduce significant overhead (framework, 
python VM, inter-process communication). In scenarios where the operators 
themselves do not consume many resources, the problem becomes more severe because the 
overhead accounts for a larger proportion of the overall resource consumption.

*Problems for option 3)*
Dependency conflict. Python operators from different jobs might be deployed 
into the same TaskManager. These operators may need to load different 
dependencies. If they are executed in the same python process, there could be 
dependency conflicts.

_Open questions_
* According to Dian’s input, python does not provide a mechanism for dependency 
isolation (like class loaders in java). We need to double check on this.
* How do we handle potential conflicts between the framework and user code 
dependencies?

*Benefits for option 2)*
* Operators in the same slot would be able to share the python process. This 
should help reduce the overhead.
* A slot cannot be shared by multiple jobs, thus no need to worry about 
cross-job dependency conflicts.

h3. Memory Management

The discussion here is based on the assumption that we choose option 2) for the 
process model, which is still discussable.

Since python processes are dynamically launched and terminated, as slots 
are created and destroyed, we would need the TaskManager rather than the deployment 
framework to manage the resources of python processes. Two potential 
approaches are discussed.

# *Make python processes use managed memory.* We would need a proper way to 
share managed memory between python processes and the rocksdb state backend in 
streaming scenarios.
# *Introduce a new `python memory` to the TaskManager memory model for python 
processes.* The new python memory should be added to the overall pod/container 
memory, either aside from or as a part of TaskManager's total process memory.

*Dian and I prefer option 2),* for the following reasons.
* For option 1), it would be complicated to decide how to share managed memory 
between python and rocksdb. E.g., if the user wants to give more memory to rocksdb 
while not changing the memory for python, he would need to not only increase 
the managed memory size, but also carefully tune how managed memory is shared 
(e.g., a fraction).
* According to Dian's input, it is preferred to configure absolute size of 
memory for python UDFs, rather than a fraction of the total memory. Managed 
memory consumers (batch operators and rocksdb) have a common characteristic 
that they can to same extend adapt to the given memory. The more memory, the 
better performance. On the other hand, resource requirements of python UDFs are 
more inflexible. The process fails if it needs more memory than the specified 
limit, and does not benefit from a larger-than-needed limit.

h3. Developing Plan

Assuming we decide to go along the proposed approaches
* process model option 2), and
* memory management option 2)

It would be good to separate these changes into two separate efforts. Trying 
to accomplish both efforts in 1.12 seems aggressive and we would like to avoid 
such rushing. Among the two efforts, the memory management change is more 
user-facing. If we decide to change memory configurations for python UDFs, we'd 
better do that early. Therefore, a potentially feasible plan could be to try to 
finish the memory management effort in 1.12, and postpone the process model 
changes to the next release.

_Open question_
* 

[jira] [Commented] (FLINK-18872) Aggregate with mini-batch does not respect state retention

2020-08-10 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175185#comment-17175185
 ] 

Benchao Li commented on FLINK-18872:


[~lsy] sorry for the duplication, I'll close this issue.

> Aggregate with mini-batch does not respect state retention
> --
>
> Key: FLINK-18872
> URL: https://issues.apache.org/jira/browse/FLINK-18872
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Table SQL / Runtime
>Affects Versions: 1.10.2, 1.12.0, 1.11.1
>Reporter: Benchao Li
>Priority: Major
>
> MiniBatchGroupAggFunction and MiniBatchGlobalGroupAggFunction do not 
> respect the state retention config, so the state will grow indefinitely.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13098: [FLINK-18866][python] Support filter() operation for Python DataStrea…

2020-08-10 Thread GitBox


flinkbot edited a comment on pull request #13098:
URL: https://github.com/apache/flink/pull/13098#issuecomment-671163278


   
   ## CI report:
   
   * 2577a9c0cca713f583f38d593c8803bc6d5506c8 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5357)
 
   * af3d92942065464e267e9715696ec3154ed32ee9 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5378)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18872) Aggregate with mini-batch does not respect state retention

2020-08-10 Thread dalongliu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175175#comment-17175175
 ] 

dalongliu commented on FLINK-18872:
---

[~libenchao], I have open a Jira issue for this problem months ago [Minibatch 
Group Agg support state ttl|https://issues.apache.org/jira/browse/FLINK-17096]

> Aggregate with mini-batch does not respect state retention
> --
>
> Key: FLINK-18872
> URL: https://issues.apache.org/jira/browse/FLINK-18872
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Table SQL / Runtime
>Affects Versions: 1.10.2, 1.12.0, 1.11.1
>Reporter: Benchao Li
>Priority: Major
>
> MiniBatchGroupAggFunction and MiniBatchGlobalGroupAggFunction do not 
> respect the state retention config, so the state will grow indefinitely.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13098: [FLINK-18866][python] Support filter() operation for Python DataStrea…

2020-08-10 Thread GitBox


flinkbot edited a comment on pull request #13098:
URL: https://github.com/apache/flink/pull/13098#issuecomment-671163278


   
   ## CI report:
   
   * 2577a9c0cca713f583f38d593c8803bc6d5506c8 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=5357)
 
   * af3d92942065464e267e9715696ec3154ed32ee9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17949) KafkaShuffleITCase.testSerDeIngestionTime:156->testRecordSerDe:388 expected:<310> but was:<0>

2020-08-10 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175166#comment-17175166
 ] 

Dian Fu commented on FLINK-17949:
-

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=5372=logs=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f=f266c805-9429-58ed-2f9e-482e7b82f58b]

{code}
KafkaShuffleITCase.testSerDeEventTime:166->testRecordSerDe:388 expected:<882> 
but was:<0>
{code}

> KafkaShuffleITCase.testSerDeIngestionTime:156->testRecordSerDe:388 
> expected:<310> but was:<0>
> -
>
> Key: FLINK-17949
> URL: https://issues.apache.org/jira/browse/FLINK-17949
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Robert Metzger
>Priority: Critical
>  Labels: test-stability
> Attachments: logs-ci-kafkagelly-1590500380.zip, 
> logs-ci-kafkagelly-1590524911.zip
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2209=logs=c5f0071e-1851-543e-9a45-9ac140befc32=684b1416-4c17-504e-d5ab-97ee44e08a20
> {code}
> 2020-05-26T13:35:19.4022562Z [ERROR] 
> testSerDeIngestionTime(org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase)
>   Time elapsed: 5.786 s  <<< FAILURE!
> 2020-05-26T13:35:19.4023185Z java.lang.AssertionError: expected:<310> but 
> was:<0>
> 2020-05-26T13:35:19.4023498Z  at org.junit.Assert.fail(Assert.java:88)
> 2020-05-26T13:35:19.4023825Z  at 
> org.junit.Assert.failNotEquals(Assert.java:834)
> 2020-05-26T13:35:19.4024461Z  at 
> org.junit.Assert.assertEquals(Assert.java:645)
> 2020-05-26T13:35:19.4024900Z  at 
> org.junit.Assert.assertEquals(Assert.java:631)
> 2020-05-26T13:35:19.4028546Z  at 
> org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase.testRecordSerDe(KafkaShuffleITCase.java:388)
> 2020-05-26T13:35:19.4029629Z  at 
> org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase.testSerDeIngestionTime(KafkaShuffleITCase.java:156)
> 2020-05-26T13:35:19.4030253Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-26T13:35:19.4030673Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-26T13:35:19.4031332Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-26T13:35:19.4031763Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-26T13:35:19.4032155Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-26T13:35:19.4032630Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-26T13:35:19.4033188Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-26T13:35:19.4033638Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-26T13:35:19.4034103Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-05-26T13:35:19.4034593Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2020-05-26T13:35:19.4035118Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2020-05-26T13:35:19.4035570Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-05-26T13:35:19.4035888Z  at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18797) docs and examples use deprecated forms of keyBy

2020-08-10 Thread wulei0302 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175164#comment-17175164
 ] 

wulei0302 commented on FLINK-18797:
---

Hi [~jark], do you think this needs to be done?

I am willing to do it if needed;

for example, 'keyBy(0)' -> 'keyBy(value -> value.f0)', as sketched below.
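
A minimal sketch of the migration, assuming an illustrative Tuple2 stream named `words`; the stream, field, and class names are placeholders, not taken from the docs or examples being fixed:

{code}
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;

public class KeyByMigration {
    static void migrate(DataStream<Tuple2<String, Integer>> words) {
        // Deprecated positional key:
        //   words.keyBy(0)
        // Deprecated field-name key:
        //   words.keyBy("f0")

        // Preferred: a type-safe KeySelector lambda.
        words.keyBy(value -> value.f0)
             .sum(1)
             .print();
    }
}
{code}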

> docs and examples use deprecated forms of keyBy
> ---
>
> Key: FLINK-18797
> URL: https://issues.apache.org/jira/browse/FLINK-18797
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Examples
>Affects Versions: 1.11.0, 1.11.1
>Reporter: David Anderson
>Priority: Major
>
> The DataStream example at 
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/datastream_api.html#example-program
>  uses
> {{keyBy(0)}}
> which has been deprecated. There are many other cases of this throughout the 
> docs:
> dev/connectors/cassandra.md
> dev/parallel.md
> dev/stream/operators/index.md
> dev/stream/operators/process_function.md
> dev/stream/state/queryable_state.md
> dev/stream/state/state.md
> dev/types_serialization.md
> learn-flink/etl.md
> ops/scala_shell.md
> and also in a number of examples:
> AsyncIOExample.java
> SideOutputExample.java
> TwitterExample.java
> GroupedProcessingTimeWindowExample.java
> SessionWindowing.java
> TopSpeedWindowing.java
> WindowWordCount.java
> WordCount.java
> TwitterExample.scala
> GroupedProcessingTimeWindowExample.scala
> SessionWindowing.scala
> WindowWordCount.scala
> WordCount.scala
> There are also some uses of keyBy("string"), which has also been deprecated:
> dev/connectors/cassandra.md
> dev/stream/operators/index.md
> dev/types_serialization.md
> learn-flink/etl.md
> SocketWindowWordCount.java
> SocketWindowWordCount.scala
> TopSpeedWindowing.scala



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18881) Modify the Access Broken Link

2020-08-10 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-18881:

Fix Version/s: (was: 1.11.1)
   (was: 1.11.0)
   1.11.2

> Modify the Access Broken Link
> -
>
> Key: FLINK-18881
> URL: https://issues.apache.org/jira/browse/FLINK-18881
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.11.0, 1.11.1
>Reporter: weizheng
>Priority: Major
> Fix For: 1.11.2
>
>
> In 
> Page:[https://ci.apache.org/projects/flink/flink-docs-release-1.11/learn-flink/etl.html]
> the link of [Rides and Fares 
> Exercise|[https://github.com/apache/flink-training/tree/release-1.11/rides-and-fares]]
>  is not accessible



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18881) Modify the Access Broken Link

2020-08-10 Thread weizheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weizheng updated FLINK-18881:
-
Description: 
In 
Page:[https://ci.apache.org/projects/flink/flink-docs-release-1.11/learn-flink/etl.html]

The link of [Rides and Fares 
Exercise|[https://github.com/apache/flink-training/tree/release-1.11/rides-and-fares]]
 is not accessible.

  was:
In 
Page:[https://ci.apache.org/projects/flink/flink-docs-release-1.11/learn-flink/etl.html]

the link of [Rides and Fares 
Exercise|[https://github.com/apache/flink-training/tree/release-1.11/rides-and-fares]]
 is not accessible


> Modify the Access Broken Link
> -
>
> Key: FLINK-18881
> URL: https://issues.apache.org/jira/browse/FLINK-18881
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.11.0, 1.11.1
>Reporter: weizheng
>Priority: Major
> Fix For: 1.11.2
>
>
> In 
> Page:[https://ci.apache.org/projects/flink/flink-docs-release-1.11/learn-flink/etl.html]
> The link of [Rides and Fares 
> Exercise|[https://github.com/apache/flink-training/tree/release-1.11/rides-and-fares]]
>  is not accessible.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18881) Modify the Access Broken Link

2020-08-10 Thread weizheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weizheng updated FLINK-18881:
-
Description: 
In 
Page:[https://ci.apache.org/projects/flink/flink-docs-release-1.11/learn-flink/etl.html]

the link of [Rides and Fares 
Exercise|[https://github.com/apache/flink-training/tree/release-1.11/rides-and-fares]]
 is not accessible

  was:
In 
Page:[https://ci.apache.org/projects/flink/flink-docs-release-1.11/learn-flink/etl.html]

the link of Rides and Fares 
Exercise[link title|[https://github.com/apache/flink-training/tree/release-1.11/rides-and-fares]]
 is not accessible


> Modify the Access Broken Link
> -
>
> Key: FLINK-18881
> URL: https://issues.apache.org/jira/browse/FLINK-18881
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.11.0, 1.11.1
>Reporter: weizheng
>Priority: Major
> Fix For: 1.11.0, 1.11.1
>
>
> In 
> Page:[https://ci.apache.org/projects/flink/flink-docs-release-1.11/learn-flink/etl.html]
> the link of [Rides and Fares 
> Exercise|[https://github.com/apache/flink-training/tree/release-1.11/rides-and-fares]]
>  is not accessible



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18881) Modify the Access Broken Link

2020-08-10 Thread weizheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weizheng updated FLINK-18881:
-
Environment: (was: In 
Page:[https://ci.apache.org/projects/flink/flink-docs-release-1.11/learn-flink/etl.html]

the link of Rides and Fares 
Exercise[link title|[https://github.com/apache/flink-training/tree/release-1.11/rides-and-fares]]
 is not accessible)

> Modify the Access Broken Link
> -
>
> Key: FLINK-18881
> URL: https://issues.apache.org/jira/browse/FLINK-18881
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.11.0, 1.11.1
>Reporter: weizheng
>Priority: Major
> Fix For: 1.11.0, 1.11.1
>
>
> https://github.com/apache/flink-training/tree/release-1.11/rides-and-fares



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18881) Modify the Access Broken Link

2020-08-10 Thread weizheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weizheng updated FLINK-18881:
-
Description: 
In 
Page:[https://ci.apache.org/projects/flink/flink-docs-release-1.11/learn-flink/etl.html]

the link of Rides and Fares 
Exercise[link title|[https://github.com/apache/flink-training/tree/release-1.11/rides-and-fares]]
 is not accessible

  was:https://github.com/apache/flink-training/tree/release-1.11/rides-and-fares


> Modify the Access Broken Link
> -
>
> Key: FLINK-18881
> URL: https://issues.apache.org/jira/browse/FLINK-18881
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.11.0, 1.11.1
>Reporter: weizheng
>Priority: Major
> Fix For: 1.11.0, 1.11.1
>
>
> In 
> Page:[https://ci.apache.org/projects/flink/flink-docs-release-1.11/learn-flink/etl.html]
> the link of Rides and Fares 
> Exercise[link title|[https://github.com/apache/flink-training/tree/release-1.11/rides-and-fares]]
>  is not accessible



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18881) Modify the Access Broken Link

2020-08-10 Thread weizheng (Jira)
weizheng created FLINK-18881:


 Summary: Modify the Access Broken Link
 Key: FLINK-18881
 URL: https://issues.apache.org/jira/browse/FLINK-18881
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.11.1, 1.11.0
 Environment: In 
Page:[https://ci.apache.org/projects/flink/flink-docs-release-1.11/learn-flink/etl.html]

the link of Rides and Fares 
Exercise[link title|[https://github.com/apache/flink-training/tree/release-1.11/rides-and-fares]]
 is not accessible
Reporter: weizheng
 Fix For: 1.11.1, 1.11.0


https://github.com/apache/flink-training/tree/release-1.11/rides-and-fares



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18333) UnsignedTypeConversionITCase failed caused by MariaDB4j "Asked to waitFor Program"

2020-08-10 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-18333:

Priority: Critical  (was: Major)

> UnsignedTypeConversionITCase failed caused by MariaDB4j "Asked to waitFor 
> Program"
> --
>
> Key: FLINK-18333
> URL: https://issues.apache.org/jira/browse/FLINK-18333
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: Jark Wu
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=8173=logs=66592496-52df-56bb-d03e-37509e1d9d0f=ae0269db-6796-5583-2e5f-d84757d711aa
> {code}
> 2020-06-16T08:23:26.3013987Z [INFO] Running 
> org.apache.flink.connector.jdbc.table.JdbcDynamicTableSourceITCase
> 2020-06-16T08:23:30.2252334Z Tue Jun 16 08:23:30 UTC 2020 Thread[main,5,main] 
> java.lang.NoSuchFieldException: DEV_NULL
> 2020-06-16T08:23:31.2907920Z 
> 
> 2020-06-16T08:23:31.2913806Z Tue Jun 16 08:23:30 UTC 2020:
> 2020-06-16T08:23:31.2914839Z Booting Derby version The Apache Software 
> Foundation - Apache Derby - 10.14.2.0 - (1828579): instance 
> a816c00e-0172-bc39-e4b1-0e4ce818 
> 2020-06-16T08:23:31.2915845Z on database directory 
> memory:/__w/1/s/flink-connectors/flink-connector-jdbc/target/test with class 
> loader sun.misc.Launcher$AppClassLoader@677327b6 
> 2020-06-16T08:23:31.2916637Z Loaded from 
> file:/__w/1/.m2/repository/org/apache/derby/derby/10.14.2.0/derby-10.14.2.0.jar
> 2020-06-16T08:23:31.2916968Z java.vendor=Private Build
> 2020-06-16T08:23:31.2917461Z 
> java.runtime.version=1.8.0_242-8u242-b08-0ubuntu3~16.04-b08
> 2020-06-16T08:23:31.2922200Z 
> user.dir=/__w/1/s/flink-connectors/flink-connector-jdbc/target
> 2020-06-16T08:23:31.2922516Z os.name=Linux
> 2020-06-16T08:23:31.2922709Z os.arch=amd64
> 2020-06-16T08:23:31.2923086Z os.version=4.15.0-1083-azure
> 2020-06-16T08:23:31.2923316Z derby.system.home=null
> 2020-06-16T08:23:31.2923616Z 
> derby.stream.error.field=org.apache.flink.connector.jdbc.JdbcTestBase.DEV_NULL
> 2020-06-16T08:23:31.2924790Z Database Class Loader started - 
> derby.database.classpath=''
> 2020-06-16T08:23:37.4354243Z [INFO] Tests run: 2, Failures: 0, Errors: 0, 
> Skipped: 0, Time elapsed: 11.133 s - in 
> org.apache.flink.connector.jdbc.table.JdbcDynamicTableSourceITCase
> 2020-06-16T08:23:38.1880075Z [INFO] Running 
> org.apache.flink.connector.jdbc.table.JdbcTableSourceITCase
> 2020-06-16T08:23:41.3718038Z Tue Jun 16 08:23:41 UTC 2020 Thread[main,5,main] 
> java.lang.NoSuchFieldException: DEV_NULL
> 2020-06-16T08:23:41.4383244Z 
> 
> 2020-06-16T08:23:41.4401761Z Tue Jun 16 08:23:41 UTC 2020:
> 2020-06-16T08:23:41.4402797Z Booting Derby version The Apache Software 
> Foundation - Apache Derby - 10.14.2.0 - (1828579): instance 
> a816c00e-0172-bc3a-103b-0e4b0610 
> 2020-06-16T08:23:41.4403758Z on database directory 
> memory:/__w/1/s/flink-connectors/flink-connector-jdbc/target/test with class 
> loader sun.misc.Launcher$AppClassLoader@677327b6 
> 2020-06-16T08:23:41.4404581Z Loaded from 
> file:/__w/1/.m2/repository/org/apache/derby/derby/10.14.2.0/derby-10.14.2.0.jar
> 2020-06-16T08:23:41.4404945Z java.vendor=Private Build
> 2020-06-16T08:23:41.4405497Z 
> java.runtime.version=1.8.0_242-8u242-b08-0ubuntu3~16.04-b08
> 2020-06-16T08:23:41.4406048Z 
> user.dir=/__w/1/s/flink-connectors/flink-connector-jdbc/target
> 2020-06-16T08:23:41.4406303Z os.name=Linux
> 2020-06-16T08:23:41.4406494Z os.arch=amd64
> 2020-06-16T08:23:41.4406878Z os.version=4.15.0-1083-azure
> 2020-06-16T08:23:41.4407097Z derby.system.home=null
> 2020-06-16T08:23:41.4407415Z 
> derby.stream.error.field=org.apache.flink.connector.jdbc.JdbcTestBase.DEV_NULL
> 2020-06-16T08:23:41.5287219Z Database Class Loader started - 
> derby.database.classpath=''
> 2020-06-16T08:23:46.4567063Z [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 23.729 s <<< FAILURE! - in 
> org.apache.flink.connector.jdbc.table.UnsignedTypeConversionITCase
> 2020-06-16T08:23:46.4575785Z [ERROR] 
> org.apache.flink.connector.jdbc.table.UnsignedTypeConversionITCase  Time 
> elapsed: 23.729 s  <<< ERROR!
> 2020-06-16T08:23:46.4576490Z ch.vorburger.exec.ManagedProcessException: An 
> error occurred while running a command: create database if not exists `test`;
> 2020-06-16T08:23:46.4577193Z  at ch.vorburger.mariadb4j.DB.run(DB.java:300)
> 2020-06-16T08:23:46.4577537Z  at ch.vorburger.mariadb4j.DB.run(DB.java:265)
> 2020-06-16T08:23:46.4577861Z  at ch.vorburger.mariadb4j.DB.run(DB.java:269)
> 2020-06-16T08:23:46.4578212Z  at 
> 

[jira] [Commented] (FLINK-18333) UnsignedTypeConversionITCase failed caused by MariaDB4j "Asked to waitFor Program"

2020-08-10 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175161#comment-17175161
 ] 

Dian Fu commented on FLINK-18333:
-

Another instance: 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=5372=logs=961f8f81-6b52-53df-09f6-7291a2e4af6a=60581941-0138-53c0-39fe-86d62be5f407]

> UnsignedTypeConversionITCase failed caused by MariaDB4j "Asked to waitFor 
> Program"
> --
>
> Key: FLINK-18333
> URL: https://issues.apache.org/jira/browse/FLINK-18333
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: Jark Wu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=8173=logs=66592496-52df-56bb-d03e-37509e1d9d0f=ae0269db-6796-5583-2e5f-d84757d711aa
> {code}
> 2020-06-16T08:23:26.3013987Z [INFO] Running 
> org.apache.flink.connector.jdbc.table.JdbcDynamicTableSourceITCase
> 2020-06-16T08:23:30.2252334Z Tue Jun 16 08:23:30 UTC 2020 Thread[main,5,main] 
> java.lang.NoSuchFieldException: DEV_NULL
> 2020-06-16T08:23:31.2907920Z 
> 
> 2020-06-16T08:23:31.2913806Z Tue Jun 16 08:23:30 UTC 2020:
> 2020-06-16T08:23:31.2914839Z Booting Derby version The Apache Software 
> Foundation - Apache Derby - 10.14.2.0 - (1828579): instance 
> a816c00e-0172-bc39-e4b1-0e4ce818 
> 2020-06-16T08:23:31.2915845Z on database directory 
> memory:/__w/1/s/flink-connectors/flink-connector-jdbc/target/test with class 
> loader sun.misc.Launcher$AppClassLoader@677327b6 
> 2020-06-16T08:23:31.2916637Z Loaded from 
> file:/__w/1/.m2/repository/org/apache/derby/derby/10.14.2.0/derby-10.14.2.0.jar
> 2020-06-16T08:23:31.2916968Z java.vendor=Private Build
> 2020-06-16T08:23:31.2917461Z 
> java.runtime.version=1.8.0_242-8u242-b08-0ubuntu3~16.04-b08
> 2020-06-16T08:23:31.2922200Z 
> user.dir=/__w/1/s/flink-connectors/flink-connector-jdbc/target
> 2020-06-16T08:23:31.2922516Z os.name=Linux
> 2020-06-16T08:23:31.2922709Z os.arch=amd64
> 2020-06-16T08:23:31.2923086Z os.version=4.15.0-1083-azure
> 2020-06-16T08:23:31.2923316Z derby.system.home=null
> 2020-06-16T08:23:31.2923616Z 
> derby.stream.error.field=org.apache.flink.connector.jdbc.JdbcTestBase.DEV_NULL
> 2020-06-16T08:23:31.2924790Z Database Class Loader started - 
> derby.database.classpath=''
> 2020-06-16T08:23:37.4354243Z [INFO] Tests run: 2, Failures: 0, Errors: 0, 
> Skipped: 0, Time elapsed: 11.133 s - in 
> org.apache.flink.connector.jdbc.table.JdbcDynamicTableSourceITCase
> 2020-06-16T08:23:38.1880075Z [INFO] Running 
> org.apache.flink.connector.jdbc.table.JdbcTableSourceITCase
> 2020-06-16T08:23:41.3718038Z Tue Jun 16 08:23:41 UTC 2020 Thread[main,5,main] 
> java.lang.NoSuchFieldException: DEV_NULL
> 2020-06-16T08:23:41.4383244Z 
> 
> 2020-06-16T08:23:41.4401761Z Tue Jun 16 08:23:41 UTC 2020:
> 2020-06-16T08:23:41.4402797Z Booting Derby version The Apache Software 
> Foundation - Apache Derby - 10.14.2.0 - (1828579): instance 
> a816c00e-0172-bc3a-103b-0e4b0610 
> 2020-06-16T08:23:41.4403758Z on database directory 
> memory:/__w/1/s/flink-connectors/flink-connector-jdbc/target/test with class 
> loader sun.misc.Launcher$AppClassLoader@677327b6 
> 2020-06-16T08:23:41.4404581Z Loaded from 
> file:/__w/1/.m2/repository/org/apache/derby/derby/10.14.2.0/derby-10.14.2.0.jar
> 2020-06-16T08:23:41.4404945Z java.vendor=Private Build
> 2020-06-16T08:23:41.4405497Z 
> java.runtime.version=1.8.0_242-8u242-b08-0ubuntu3~16.04-b08
> 2020-06-16T08:23:41.4406048Z 
> user.dir=/__w/1/s/flink-connectors/flink-connector-jdbc/target
> 2020-06-16T08:23:41.4406303Z os.name=Linux
> 2020-06-16T08:23:41.4406494Z os.arch=amd64
> 2020-06-16T08:23:41.4406878Z os.version=4.15.0-1083-azure
> 2020-06-16T08:23:41.4407097Z derby.system.home=null
> 2020-06-16T08:23:41.4407415Z 
> derby.stream.error.field=org.apache.flink.connector.jdbc.JdbcTestBase.DEV_NULL
> 2020-06-16T08:23:41.5287219Z Database Class Loader started - 
> derby.database.classpath=''
> 2020-06-16T08:23:46.4567063Z [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 23.729 s <<< FAILURE! - in 
> org.apache.flink.connector.jdbc.table.UnsignedTypeConversionITCase
> 2020-06-16T08:23:46.4575785Z [ERROR] 
> org.apache.flink.connector.jdbc.table.UnsignedTypeConversionITCase  Time 
> elapsed: 23.729 s  <<< ERROR!
> 2020-06-16T08:23:46.4576490Z ch.vorburger.exec.ManagedProcessException: An 
> error occurred while running a command: create database if not exists `test`;
> 2020-06-16T08:23:46.4577193Z  at ch.vorburger.mariadb4j.DB.run(DB.java:300)
> 2020-06-16T08:23:46.4577537Z  at 
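
For readers unfamiliar with the call that fails above, a minimal sketch of how MariaDB4j typically boots an embedded database in a test; the port and statement are illustrative, and this is not the test's actual setup code:

{code}
import ch.vorburger.exec.ManagedProcessException;
import ch.vorburger.mariadb4j.DB;
import ch.vorburger.mariadb4j.DBConfigurationBuilder;

public class EmbeddedMariaDbSketch {
    public static void main(String[] args) throws ManagedProcessException {
        DBConfigurationBuilder config = DBConfigurationBuilder.newBuilder();
        config.setPort(0); // 0 means: pick any free port

        DB db = DB.newEmbeddedDB(config.build());
        db.start();
        // The step that fails in the log above with ManagedProcessException.
        db.run("create database if not exists `test`;");
        db.stop();
    }
}
{code}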

[jira] [Commented] (FLINK-17274) Maven: Premature end of Content-Length delimited message body

2020-08-10 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175159#comment-17175159
 ] 

Dian Fu commented on FLINK-17274:
-

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=5373=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=eb5f4d19-2d2d-5856-a4ce-acf5f904a994]

> Maven: Premature end of Content-Length delimited message body
> -
>
> Key: FLINK-17274
> URL: https://issues.apache.org/jira/browse/FLINK-17274
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> CI: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7786=logs=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5=54421a62-0c80-5aad-3319-094ff69180bb
> {code}
> [ERROR] Failed to execute goal on project 
> flink-connector-elasticsearch7_2.11: Could not resolve dependencies for 
> project 
> org.apache.flink:flink-connector-elasticsearch7_2.11:jar:1.11-SNAPSHOT: Could 
> not transfer artifact org.apache.lucene:lucene-sandbox:jar:8.3.0 from/to 
> alicloud-mvn-mirror 
> (http://mavenmirror.alicloud.dak8s.net:/repository/maven-central/): GET 
> request of: org/apache/lucene/lucene-sandbox/8.3.0/lucene-sandbox-8.3.0.jar 
> from alicloud-mvn-mirror failed: Premature end of Content-Length delimited 
> message body (expected: 289920; received: 239832 -> [Help 1]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18807) FlinkKafkaProducerITCase.testScaleUpAfterScalingDown failed with "Timeout expired after 60000milliseconds while awaiting EndTxn(COMMIT)"

2020-08-10 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-18807:

Fix Version/s: 1.12.0

> FlinkKafkaProducerITCase.testScaleUpAfterScalingDown failed with "Timeout 
> expired after 6milliseconds while awaiting EndTxn(COMMIT)"
> 
>
> Key: FLINK-18807
> URL: https://issues.apache.org/jira/browse/FLINK-18807
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=5142=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5
> {code}
> 2020-08-03T22:06:45.9078498Z [ERROR] 
> testScaleUpAfterScalingDown(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)
>   Time elapsed: 76.498 s  <<< ERROR!
> 2020-08-03T22:06:45.9079233Z org.apache.kafka.common.errors.TimeoutException: 
> Timeout expired after 6milliseconds while awaiting EndTxn(COMMIT)
> {code}
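
A hedged sketch of the producer setting usually involved in such EndTxn timeouts; raising `transaction.timeout.ms` is a common mitigation for slow transaction commits, though it is not a verified fix for this test. The topic name, bootstrap servers, and timeout value are illustrative:

{code}
import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TransactionalProducerSketch {
    static FlinkKafkaProducer<String> create() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        // The commit (EndTxn) must finish within this timeout; it also
        // must not exceed the broker's transaction.max.timeout.ms.
        props.setProperty("transaction.timeout.ms", "900000"); // 15 minutes

        KafkaSerializationSchema<String> schema = (element, timestamp) ->
                new ProducerRecord<>("output-topic",
                        element.getBytes(StandardCharsets.UTF_8));

        return new FlinkKafkaProducer<>(
                "output-topic", schema, props,
                FlinkKafkaProducer.Semantic.EXACTLY_ONCE);
    }
}
{code}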



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

