[jira] [Commented] (FLINK-5783) flink-connector-kinesis java.lang.NoClassDefFoundError: org/apache/http/conn/ssl/SSLSocketFactory
[ https://issues.apache.org/jira/browse/FLINK-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15970154#comment-15970154 ] Eron Wright commented on FLINK-5783: - I suspect that the root cause of this is FLINK-6125.
> flink-connector-kinesis java.lang.NoClassDefFoundError: org/apache/http/conn/ssl/SSLSocketFactory
>
> Key: FLINK-5783
> URL: https://issues.apache.org/jira/browse/FLINK-5783
> Project: Flink
> Issue Type: Bug
> Components: Kinesis Connector
> Affects Versions: 1.1.3
> Reporter: Huy Huynh
> Priority: Trivial
>
> I got the error below while running a Flink consumer application for the first time
> using the Kinesis connector. I was able to fix it by modifying the
> flink-connector-kinesis pom file to use the latest Kinesis and AWS SDK
> versions and then rebuilding the jar.
> Updated pom:
> flink-connector-kinesis_2.11
> flink-connector-kinesis
> 1.11.86
> 1.7.3
> 0.12.3
> Error:
> Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply$mcV$sp(JobManager.scala:822)
> at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:768)
> at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:768)
> at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
> at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
> at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.NoClassDefFoundError: org/apache/http/conn/ssl/SSLSocketFactory
> at org.apache.flink.kinesis.shaded.com.amazonaws.AmazonWebServiceClient.<init>(AmazonWebServiceClient.java:136)
> at org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.<init>(AmazonKinesisClient.java:221)
> at org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.AmazonKinesisClient.<init>(AmazonKinesisClient.java:197)
> at org.apache.flink.streaming.connectors.kinesis.util.AWSUtil.createKinesisClient(AWSUtil.java:56)
> at org.apache.flink.streaming.connectors.kinesis.proxy.KinesisProxy.<init>(KinesisProxy.java:124)
> at org.apache.flink.streaming.connectors.kinesis.proxy.KinesisProxy.create(KinesisProxy.java:182)
> at org.apache.flink.streaming.connectors.kinesis.internals.KinesisDataFetcher.<init>(KinesisDataFetcher.java:188)
> at org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer.run(FlinkKinesisConsumer.java:198)
> at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:80)
> at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:53)
> at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:56)
> at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:266)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:585)
> at java.lang.Thread.run(Thread.java:745)
-- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (FLINK-5697) Add per-shard watermarks for FlinkKinesisConsumer
[ https://issues.apache.org/jira/browse/FLINK-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15970149#comment-15970149 ] Eron Wright commented on FLINK-5697: - My understanding is that per-partition watermarks make sense due to ordering guarantees and the assumption of strictly-ascending timestamps per partition in some Kafka app architectures. Given how shards are dynamic in Kinesis, does this functionality make sense here? > Add per-shard watermarks for FlinkKinesisConsumer > - > > Key: FLINK-5697 > URL: https://issues.apache.org/jira/browse/FLINK-5697 > Project: Flink > Issue Type: New Feature > Components: Kinesis Connector, Streaming Connectors >Reporter: Tzu-Li (Gordon) Tai > > It would be nice to let the Kinesis consumer be on-par in functionality with > the Kafka consumer, since they share very similar abstractions. Per-partition > / shard watermarks is something we can add also to the Kinesis consumer.
[jira] [Commented] (FLINK-5966) Document Running Flink on Kubernetes
[ https://issues.apache.org/jira/browse/FLINK-5966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15970094#comment-15970094 ] StephenWithPH commented on FLINK-5966: -- [~larrywu] It is not. I plan to resume work on this... as usual, things get busier than anticipated. > Document Running Flink on Kubernetes > > > Key: FLINK-5966 > URL: https://issues.apache.org/jira/browse/FLINK-5966 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.2.0, 1.3.0 >Reporter: StephenWithPH >Priority: Minor > Labels: documentation > > There are several good sources of information regarding running prior > versions of Flink in Kubernetes. I was able to follow those and fill in the > gaps to get Flink 1.2 up in K8s. > I plan to document my steps in detail in order to submit a PR. There are > several existing PRs that may improve how Flink runs in containers. (See > [FLINK-5635 Improve Docker tooling|https://github.com/apache/flink/pull/3205] > and [FLINK-5634 Flink should not always redirect > stdout|https://github.com/apache/flink/pull/3204]) > Depending on the timing of those PRs, I may tailor my docs towards Flink 1.3 > in order to reflect those changes. > I'm opening this JIRA issue to begin the process.
[GitHub] flink issue #3477: [Flink-3318][cep] Add support for quantifiers to CEP's pa...
Github user kl0u commented on the issue: https://github.com/apache/flink/pull/3477 Hi @eliaslevy , @dawidwys is right; in fact, if the state's name is "a" and it has 2 matching events, there will be two returned keys, "a_0" and "a_1". This is definitely counterintuitive, and the reason it is still there is backwards compatibility with previous versions. I think this should be addressed, but I would also suggest starting a discussion on the mailing list to see if people are ok with this. Unfortunately we are not aware if anyone is using the `CEP` library at the moment. --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] flink issue #3477: [Flink-3318][cep] Add support for quantifiers to CEP's pa...
Github user dawidwys commented on the issue: https://github.com/apache/flink/pull/3477 Right now if the pattern name is "a" the events will be returned with keys "a[0]", "a[1]" and so on, but I agree it may be counterintuitive. Please file a JIRA for it.
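The flattening the two comments above describe can be illustrated with a small sketch. This is not Flink's CEP API; the class and method names are hypothetical, and the suffix format shown follows the "a[0]"-style keys mentioned by dawidwys (kl0u's comment mentions an older "a_0"-style):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CepKeySketch {
    // Illustrative only: multiple events matched by a quantified pattern
    // state (e.g. "a") end up under index-suffixed keys in a flat Map,
    // rather than under a single key mapping to a list of events.
    public static Map<String, String> flattenMatches(String state, String... events) {
        Map<String, String> flat = new LinkedHashMap<>();
        for (int i = 0; i < events.length; i++) {
            flat.put(state + "[" + i + "]", events[i]); // e.g. "a[0]", "a[1]"
        }
        return flat;
    }
}
```

This also shows why a caller who only looks up the bare key "a" finds nothing, which is the confusion raised in the thread.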
[jira] [Closed] (FLINK-5629) Unclosed RandomAccessFile in StaticFileServerHandler#respondAsLeader()
[ https://issues.apache.org/jira/browse/FLINK-5629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler closed FLINK-5629. --- Resolution: Fixed Fix Version/s: 1.3.0 1.3: be26f7ed1e2b97bc2e6ab06d6267f8d542d78aee > Unclosed RandomAccessFile in StaticFileServerHandler#respondAsLeader() > -- > > Key: FLINK-5629 > URL: https://issues.apache.org/jira/browse/FLINK-5629 > Project: Flink > Issue Type: Bug > Components: Webfrontend >Reporter: Ted Yu >Assignee: Chesnay Schepler >Priority: Minor > Fix For: 1.3.0 > > > {code} > final RandomAccessFile raf; > try { > raf = new RandomAccessFile(file, "r"); > ... > long fileLength = raf.length(); > {code} > The RandomAccessFile should be closed upon return from method.
[jira] [Closed] (FLINK-6172) Potentially unclosed RandomAccessFile in HistoryServerStaticFileServerHandler
[ https://issues.apache.org/jira/browse/FLINK-6172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler closed FLINK-6172. --- Resolution: Fixed Fix Version/s: 1.3.0 1.3: be26f7ed1e2b97bc2e6ab06d6267f8d542d78aee > Potentially unclosed RandomAccessFile in HistoryServerStaticFileServerHandler > - > > Key: FLINK-6172 > URL: https://issues.apache.org/jira/browse/FLINK-6172 > Project: Flink > Issue Type: Bug > Components: JobManager >Reporter: Ted Yu >Assignee: Chesnay Schepler >Priority: Minor > Fix For: 1.3.0 > > > {code} > try { > raf = new RandomAccessFile(file, "r"); > } catch (FileNotFoundException e) { > StaticFileServerHandler.sendError(ctx, NOT_FOUND); > return; > } > long fileLength = raf.length(); > {code} > raf should be closed in all possible execution paths. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
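FLINK-5629 and FLINK-6172 report the same resource-leak pattern: a RandomAccessFile opened and queried with no guaranteed close on exceptional paths. A minimal sketch of the general fix, using try-with-resources (the helper name is hypothetical; the actual change in commit be26f7ed may differ, since a file-serving handler may need to keep the file open while it is streamed and close it in a completion callback):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class RafSketch {
    // Hypothetical helper: try-with-resources guarantees raf.close() on every
    // path out of the block, including when length() throws an IOException.
    public static long fileLength(File file) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
            return raf.length();
        }
    }
}
```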
[jira] [Closed] (FLINK-6299) make all IT cases extend from TestLogger
[ https://issues.apache.org/jira/browse/FLINK-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler closed FLINK-6299. --- Resolution: Fixed Fix Version/s: 1.3.0 1.3: 06db242ea12936fd62d243a1778bb32ca98e9232 > make all IT cases extend from TestLogger > > > Key: FLINK-6299 > URL: https://issues.apache.org/jira/browse/FLINK-6299 > Project: Flink > Issue Type: Improvement > Components: Tests >Affects Versions: 1.3.0 >Reporter: Nico Kruber >Assignee: Nico Kruber >Priority: Minor > Fix For: 1.3.0 > > > Not all of the integration tests extend from {{TestLogger}}, but this is a > very helpful tool: the currently running tests are written to the logs as > well as their failures, especially for those tests where errors are often > buried in the logs.
[jira] [Closed] (FLINK-6292) Travis: transfer.sh not accepting uploads via http:// anymore
[ https://issues.apache.org/jira/browse/FLINK-6292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler closed FLINK-6292. --- Resolution: Fixed Fix Version/s: 1.3.0 1.3: c96002cef5f1867573a473746241f86b59aeddd2 > Travis: transfer.sh not accepting uploads via http:// anymore > - > > Key: FLINK-6292 > URL: https://issues.apache.org/jira/browse/FLINK-6292 > Project: Flink > Issue Type: Bug > Components: Tests >Affects Versions: 1.2.0, 1.3.0, 1.1.5 >Reporter: Nico Kruber >Assignee: Nico Kruber > Fix For: 1.3.0 > > > The {{travis_mvn_watchdog.sh}} script tries to upload the logs to transfer.sh > but it seems like they do not accept uploads to {{http://transfer.sh}} > anymore and only accept {{https}} nowadays.
[jira] [Closed] (FLINK-6271) NumericBetweenParametersProvider NullPointer
[ https://issues.apache.org/jira/browse/FLINK-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler closed FLINK-6271. --- Resolution: Fixed Fix Version/s: 1.3.0 705938e5965a98b17bd6ba3f1e06728a35e4f8a9 > NumericBetweenParametersProvider NullPointer > > > Key: FLINK-6271 > URL: https://issues.apache.org/jira/browse/FLINK-6271 > Project: Flink > Issue Type: Bug > Components: Batch Connectors and Input/Output Formats >Affects Versions: 1.2.0 >Reporter: Flavio Pompermaier >Assignee: Flavio Pompermaier > Labels: jdbc > Fix For: 1.3.0 > > > creating a NumericBetweenParametersProvider using fetchSize=1000, min=0 and > max=999 fails with an NPE
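The reported combination is the single-split edge case: with fetchSize=1000, min=0 and max=999, the inclusive range [0, 999] holds exactly 1000 values and so yields exactly one batch. A sketch of the range-partitioning arithmetic involved (the class and method names are illustrative, not Flink's NumericBetweenParametersProvider API, and the actual fixed implementation may differ):

```java
public class RangeSplitSketch {
    // Illustrative: partition the inclusive range [min, max] into batches of
    // at most fetchSize values. The single-batch case (count == fetchSize)
    // must be handled like any other, which is where the reported NPE arose.
    public static long[][] splitRange(long min, long max, long fetchSize) {
        long count = max - min + 1;
        int numBatches = (int) ((count + fetchSize - 1) / fetchSize); // ceiling division
        long[][] batches = new long[numBatches][2];
        long start = min;
        for (int i = 0; i < numBatches; i++) {
            long end = Math.min(start + fetchSize - 1, max);
            batches[i][0] = start;
            batches[i][1] = end;
            start = end + 1;
        }
        return batches;
    }
}
```

For min=0, max=999, fetchSize=1000 this produces the single batch [0, 999].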
[jira] [Commented] (FLINK-6271) NumericBetweenParametersProvider NullPointer
[ https://issues.apache.org/jira/browse/FLINK-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15970047#comment-15970047 ] ASF GitHub Bot commented on FLINK-6271: --- Github user asfgit closed the pull request at: https://github.com/apache/flink/pull/3686 > NumericBetweenParametersProvider NullPointer > > > Key: FLINK-6271 > URL: https://issues.apache.org/jira/browse/FLINK-6271 > Project: Flink > Issue Type: Bug > Components: Batch Connectors and Input/Output Formats >Affects Versions: 1.2.0 >Reporter: Flavio Pompermaier >Assignee: Flavio Pompermaier > Labels: jdbc > Fix For: 1.3.0 > > > creating a NumericBetweenParametersProvider using fetchSize=1000, min=0 and > max=999 fails with an NPE
[jira] [Commented] (FLINK-6299) make all IT cases extend from TestLogger
[ https://issues.apache.org/jira/browse/FLINK-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15970049#comment-15970049 ] ASF GitHub Bot commented on FLINK-6299: --- Github user asfgit closed the pull request at: https://github.com/apache/flink/pull/3713 > make all IT cases extend from TestLogger > > > Key: FLINK-6299 > URL: https://issues.apache.org/jira/browse/FLINK-6299 > Project: Flink > Issue Type: Improvement > Components: Tests >Affects Versions: 1.3.0 >Reporter: Nico Kruber >Assignee: Nico Kruber >Priority: Minor > > Not all of the integration tests extend from {{TestLogger}}, but this is a > very helpful tool: the currently running tests are written to the logs as > well as their failures, especially for those tests where errors are often > buried in the logs.
[GitHub] flink pull request #3686: [FLINK-6271][jdbc]Fix nullPointer when there's a s...
Github user asfgit closed the pull request at: https://github.com/apache/flink/pull/3686
[GitHub] flink pull request #3708: [FLINK-6292] fix transfer.sh upload by using https
Github user asfgit closed the pull request at: https://github.com/apache/flink/pull/3708
[jira] [Commented] (FLINK-6292) Travis: transfer.sh not accepting uploads via http:// anymore
[ https://issues.apache.org/jira/browse/FLINK-6292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15970046#comment-15970046 ] ASF GitHub Bot commented on FLINK-6292: --- Github user asfgit closed the pull request at: https://github.com/apache/flink/pull/3708 > Travis: transfer.sh not accepting uploads via http:// anymore > - > > Key: FLINK-6292 > URL: https://issues.apache.org/jira/browse/FLINK-6292 > Project: Flink > Issue Type: Bug > Components: Tests >Affects Versions: 1.2.0, 1.3.0, 1.1.5 >Reporter: Nico Kruber >Assignee: Nico Kruber > > The {{travis_mvn_watchdog.sh}} script tries to upload the logs to transfer.sh > but it seems like they do not accept uploads to {{http://transfer.sh}} > anymore and only accept {{https}} nowadays.
[jira] [Commented] (FLINK-5629) Unclosed RandomAccessFile in StaticFileServerHandler#respondAsLeader()
[ https://issues.apache.org/jira/browse/FLINK-5629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15970045#comment-15970045 ] ASF GitHub Bot commented on FLINK-5629: --- Github user asfgit closed the pull request at: https://github.com/apache/flink/pull/3678 > Unclosed RandomAccessFile in StaticFileServerHandler#respondAsLeader() > -- > > Key: FLINK-5629 > URL: https://issues.apache.org/jira/browse/FLINK-5629 > Project: Flink > Issue Type: Bug > Components: Webfrontend >Reporter: Ted Yu >Assignee: Chesnay Schepler >Priority: Minor > > {code} > final RandomAccessFile raf; > try { > raf = new RandomAccessFile(file, "r"); > ... > long fileLength = raf.length(); > {code} > The RandomAccessFile should be closed upon return from method.
[jira] [Commented] (FLINK-6143) Unprotected access to this.flink in LocalExecutor#endSession()
[ https://issues.apache.org/jira/browse/FLINK-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15970048#comment-15970048 ] ASF GitHub Bot commented on FLINK-6143: --- Github user asfgit closed the pull request at: https://github.com/apache/flink/pull/3710 > Unprotected access to this.flink in LocalExecutor#endSession() > -- > > Key: FLINK-6143 > URL: https://issues.apache.org/jira/browse/FLINK-6143 > Project: Flink > Issue Type: Bug > Components: Client >Affects Versions: 1.3.0 >Reporter: Ted Yu >Assignee: mingleizhang >Priority: Minor > Fix For: 1.3.0 > > > {code} > public void endSession(JobID jobID) throws Exception { > LocalFlinkMiniCluster flink = this.flink; > if (flink != null) { > {code} > The flink field is not declared volatile and access to this.flink doesn't > hold the LocalExecutor.lock
[GitHub] flink pull request #3710: [FLINK-6143] [clients] Fix unprotected access to t...
Github user asfgit closed the pull request at: https://github.com/apache/flink/pull/3710
[GitHub] flink pull request #3713: [FLINK-6299] make all IT cases extend from TestLog...
Github user asfgit closed the pull request at: https://github.com/apache/flink/pull/3713
[jira] [Updated] (FLINK-6143) Unprotected access to this.flink in LocalExecutor#endSession()
[ https://issues.apache.org/jira/browse/FLINK-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler updated FLINK-6143: Affects Version/s: 1.3.0 > Unprotected access to this.flink in LocalExecutor#endSession() > -- > > Key: FLINK-6143 > URL: https://issues.apache.org/jira/browse/FLINK-6143 > Project: Flink > Issue Type: Bug > Components: Client >Affects Versions: 1.3.0 >Reporter: Ted Yu >Assignee: mingleizhang >Priority: Minor > Fix For: 1.3.0 > > > {code} > public void endSession(JobID jobID) throws Exception { > LocalFlinkMiniCluster flink = this.flink; > if (flink != null) { > {code} > The flink field is not declared volatile and access to this.flink doesn't > hold the LocalExecutor.lock
[GitHub] flink pull request #3678: [FLINK-5629] [runtime-web] Close RAF in FileServer...
Github user asfgit closed the pull request at: https://github.com/apache/flink/pull/3678
[jira] [Closed] (FLINK-6143) Unprotected access to this.flink in LocalExecutor#endSession()
[ https://issues.apache.org/jira/browse/FLINK-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler closed FLINK-6143. --- Resolution: Fixed Fix Version/s: 1.3.0 283f5efd50bdb3e94cc947d1edab6fc0c8cbc77e > Unprotected access to this.flink in LocalExecutor#endSession() > -- > > Key: FLINK-6143 > URL: https://issues.apache.org/jira/browse/FLINK-6143 > Project: Flink > Issue Type: Bug > Components: Client >Affects Versions: 1.3.0 >Reporter: Ted Yu >Assignee: mingleizhang >Priority: Minor > Fix For: 1.3.0 > > > {code} > public void endSession(JobID jobID) throws Exception { > LocalFlinkMiniCluster flink = this.flink; > if (flink != null) { > {code} > The flink field is not declared volatile and access to this.flink doesn't > hold the LocalExecutor.lock
[GitHub] flink issue #3477: [Flink-3318][cep] Add support for quantifiers to CEP's pa...
Github user eliaslevy commented on the issue: https://github.com/apache/flink/pull/3477 Am I missing something, or is there no way to get access to multiple events matched by these new quantifiers? The `PatternSelectFunction.select` takes an argument of `Map` and not `Map `. Which is the one being passed to `select` in the `Map`? The last matched? The first matched?
[jira] [Commented] (FLINK-6307) Refactor JDBC tests
[ https://issues.apache.org/jira/browse/FLINK-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15970011#comment-15970011 ] ASF GitHub Bot commented on FLINK-6307: --- GitHub user zentol opened a pull request: https://github.com/apache/flink/pull/3723 [FLINK-6307] [jdbc] Refactor JDBC tests Builds on top of #3686. List of changes: JDBCFullTest: - split testJdbcInOut into 2 methods to avoid manual test-lifecycle calls JDBCTestBase: - remove all qualified static accesses - remove static Connection field - remove (now) unused prepareTestDB method - create RowTypeInfo directly instead of first allocating a separate TypeInfo[] - rename testData to TEST_DATA in-line with naming conventions - rework test data to not rely on Object arrays JDBCInputFormatTest: - call InputFormat#closeInputFormat() in tearDown() - simplify method exception declarations - remove unreachable branch when format returns null (this should fail the test) - replace loops over splits with for-each loops - rework comparisons; no longer ignore nulls, no longer check class, compare directly against expected value JDBCOutputFormatTest: - directly create Row instead of first creating a tuple - simplify method exception declarations General: - do not catch exceptions if the catch block only calls Assert.fail() You can merge this pull request into a Git repository by running: $ git pull https://github.com/zentol/flink 6307_jdbc_tests Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/3723.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3723 commit 993a712eaff4b7b29dbcd45897e3afe7323256a7 Author: Flavio Pompermaier Date: 2017-04-06T10:01:51Z [FLINK-6271] [jdbc] Fix NPE when there's a single split This closes #3686. 
commit fb510ce440577d07b7cd7229db1a624a150e66b0 Author: zentol Date: 2017-04-15T16:07:15Z [FLINK-6307] [jdbc] Refactor JDBC tests JDBCFullTest: - split testJdbcInOut into 2 methods to avoid manual test-lifecycle calls JDBCTestBase: - remove all qualified static accesses - remove static Connection field - remove (now) unused prepareTestDB method - create RowTypeInfo directly instead of first allocating a separate TypeInfo[] - rename testData to TEST_DATA in-line with naming conventions - rework test data to not rely on Object arrays JDBCInputFormatTest: - call InputFormat#closeInputFormat() in tearDown() - simplify method exception declarations - remove unreachable branch when format returns null (this should fail the test) - replace loops over splits with for-each loops - rework comparisons; no longer ignore nulls, no longer check class, compare directly against expected value JDBCOutputFormatTest: - directly create Row instead of first creating a tuple - simplify method exception declarations General: - do not catch exceptions if the catch block only calls Assert.fail() > Refactor JDBC tests > --- > > Key: FLINK-6307 > URL: https://issues.apache.org/jira/browse/FLINK-6307 > Project: Flink > Issue Type: Improvement > Components: Batch Connectors and Input/Output Formats, Tests >Affects Versions: 1.3.0 >Reporter: Chesnay Schepler >Assignee: Chesnay Schepler >Priority: Minor > Fix For: 1.3.0 > > > While glancing over the JDBC related tests I've found a lot of odd things > that accumulated over time.
[GitHub] flink pull request #3723: [FLINK-6307] [jdbc] Refactor JDBC tests
GitHub user zentol opened a pull request: https://github.com/apache/flink/pull/3723 [FLINK-6307] [jdbc] Refactor JDBC tests Builds on top of #3686. List of changes: JDBCFullTest: - split testJdbcInOut into 2 methods to avoid manual test-lifecycle calls JDBCTestBase: - remove all qualified static accesses - remove static Connection field - remove (now) unused prepareTestDB method - create RowTypeInfo directly instead of first allocating a separate TypeInfo[] - rename testData to TEST_DATA in-line with naming conventions - rework test data to not rely on Object arrays JDBCInputFormatTest: - call InputFormat#closeInputFormat() in tearDown() - simplify method exception declarations - remove unreachable branch when format returns null (this should fail the test) - replace loops over splits with for-each loops - rework comparisons; no longer ignore nulls, no longer check class, compare directly against expected value JDBCOutputFormatTest: - directly create Row instead of first creating a tuple - simplify method exception declarations General: - do not catch exceptions if the catch block only calls Assert.fail() You can merge this pull request into a Git repository by running: $ git pull https://github.com/zentol/flink 6307_jdbc_tests Alternatively you can review and apply these changes as the patch at: https://github.com/apache/flink/pull/3723.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3723 commit 993a712eaff4b7b29dbcd45897e3afe7323256a7 Author: Flavio Pompermaier Date: 2017-04-06T10:01:51Z [FLINK-6271] [jdbc] Fix NPE when there's a single split This closes #3686. 
commit fb510ce440577d07b7cd7229db1a624a150e66b0 Author: zentol Date: 2017-04-15T16:07:15Z [FLINK-6307] [jdbc] Refactor JDBC tests JDBCFullTest: - split testJdbcInOut into 2 methods to avoid manual test-lifecycle calls JDBCTestBase: - remove all qualified static accesses - remove static Connection field - remove (now) unused prepareTestDB method - create RowTypeInfo directly instead of first allocating a separate TypeInfo[] - rename testData to TEST_DATA in-line with naming conventions - rework test data to not rely on Object arrays JDBCInputFormatTest: - call InputFormat#closeInputFormat() in tearDown() - simplify method exception declarations - remove unreachable branch when format returns null (this should fail the test) - replace loops over splits with for-each loops - rework comparisons; no longer ignore nulls, no longer check class, compare directly against expected value JDBCOutputFormatTest: - directly create Row instead of first creating a tuple - simplify method exception declarations General: - do not catch exceptions if the catch block only calls Assert.fail()
[jira] [Created] (FLINK-6307) Refactor JDBC tests
Chesnay Schepler created FLINK-6307: --- Summary: Refactor JDBC tests Key: FLINK-6307 URL: https://issues.apache.org/jira/browse/FLINK-6307 Project: Flink Issue Type: Improvement Components: Batch Connectors and Input/Output Formats, Tests Affects Versions: 1.3.0 Reporter: Chesnay Schepler Assignee: Chesnay Schepler Priority: Minor Fix For: 1.3.0 While glancing over the JDBC related tests I've found a lot of odd things that accumulated over time.
[jira] [Commented] (FLINK-6143) Unprotected access to this.flink in LocalExecutor#endSession()
[ https://issues.apache.org/jira/browse/FLINK-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15969993#comment-15969993 ] ASF GitHub Bot commented on FLINK-6143: --- Github user zhangminglei commented on the issue: https://github.com/apache/flink/pull/3710 @zentol I appreciate it. > Unprotected access to this.flink in LocalExecutor#endSession() > -- > > Key: FLINK-6143 > URL: https://issues.apache.org/jira/browse/FLINK-6143 > Project: Flink > Issue Type: Bug > Components: Client >Reporter: Ted Yu >Assignee: mingleizhang >Priority: Minor > > {code} > public void endSession(JobID jobID) throws Exception { > LocalFlinkMiniCluster flink = this.flink; > if (flink != null) { > {code} > The flink field is not declared volatile and access to this.flink doesn't > hold the LocalExecutor.lock
[GitHub] flink issue #3710: [FLINK-6143] [clients] Fix unprotected access to this.fli...
Github user zhangminglei commented on the issue: https://github.com/apache/flink/pull/3710 @zentol I appreciate it.
[GitHub] flink issue #3710: [FLINK-6143] [clients] Fix unprotected access to this.fli...
Github user zentol commented on the issue: https://github.com/apache/flink/pull/3710 merging.
[jira] [Commented] (FLINK-6143) Unprotected access to this.flink in LocalExecutor#endSession()
[ https://issues.apache.org/jira/browse/FLINK-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15969992#comment-15969992 ] ASF GitHub Bot commented on FLINK-6143: --- Github user zentol commented on the issue: https://github.com/apache/flink/pull/3710 merging. > Unprotected access to this.flink in LocalExecutor#endSession() > -- > > Key: FLINK-6143 > URL: https://issues.apache.org/jira/browse/FLINK-6143 > Project: Flink > Issue Type: Bug > Components: Client >Reporter: Ted Yu >Assignee: mingleizhang >Priority: Minor > > {code} > public void endSession(JobID jobID) throws Exception { > LocalFlinkMiniCluster flink = this.flink; > if (flink != null) { > {code} > The flink field is not declared volatile and access to this.flink doesn't > hold the LocalExecutor.lock
[jira] [Commented] (FLINK-6143) Unprotected access to this.flink in LocalExecutor#endSession()
[ https://issues.apache.org/jira/browse/FLINK-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15969987#comment-15969987 ]

ASF GitHub Bot commented on FLINK-6143:
---
Github user zhangminglei commented on the issue: https://github.com/apache/flink/pull/3710

    @zentol I have updated the code. Thanks again.
[GitHub] flink issue #3710: [FLINK-6143] [clients] Fix unprotected access to this.fli...
Github user zhangminglei commented on the issue: https://github.com/apache/flink/pull/3710

    @zentol I have updated the code. Thanks again.
[jira] [Commented] (FLINK-6143) Unprotected access to this.flink in LocalExecutor#endSession()
[ https://issues.apache.org/jira/browse/FLINK-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15969984#comment-15969984 ]

ASF GitHub Bot commented on FLINK-6143:
---
Github user zhangminglei commented on a diff in the pull request: https://github.com/apache/flink/pull/3710#discussion_r111666520

--- Diff: flink-clients/src/main/java/org/apache/flink/client/LocalExecutor.java ---
@@ -59,7 +59,7 @@
 	private final Object lock = new Object();

 	/** The mini cluster on which to execute the local programs */
-	private LocalFlinkMiniCluster flink;
+	private volatile LocalFlinkMiniCluster flink;
--- End diff --

    @zentol
[GitHub] flink pull request #3710: [FLINK-6143] [clients] Fix unprotected access to t...
Github user zhangminglei commented on a diff in the pull request: https://github.com/apache/flink/pull/3710#discussion_r111666520

--- Diff: flink-clients/src/main/java/org/apache/flink/client/LocalExecutor.java ---
@@ -59,7 +59,7 @@
 	private final Object lock = new Object();

 	/** The mini cluster on which to execute the local programs */
-	private LocalFlinkMiniCluster flink;
+	private volatile LocalFlinkMiniCluster flink;
--- End diff --

    @zentol
[jira] [Commented] (FLINK-6143) Unprotected access to this.flink in LocalExecutor#endSession()
[ https://issues.apache.org/jira/browse/FLINK-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15969983#comment-15969983 ]

ASF GitHub Bot commented on FLINK-6143:
---
Github user zhangminglei commented on a diff in the pull request: https://github.com/apache/flink/pull/3710#discussion_r111666373

--- Diff: flink-clients/src/main/java/org/apache/flink/client/LocalExecutor.java ---
@@ -59,7 +59,7 @@
 	private final Object lock = new Object();

 	/** The mini cluster on which to execute the local programs */
-	private LocalFlinkMiniCluster flink;
+	private volatile LocalFlinkMiniCluster flink;
--- End diff --

    Thanks for the useful information. Appreciate it.
[GitHub] flink pull request #3710: [FLINK-6143] [clients] Fix unprotected access to t...
Github user zhangminglei commented on a diff in the pull request: https://github.com/apache/flink/pull/3710#discussion_r111666373

--- Diff: flink-clients/src/main/java/org/apache/flink/client/LocalExecutor.java ---
@@ -59,7 +59,7 @@
 	private final Object lock = new Object();

 	/** The mini cluster on which to execute the local programs */
-	private LocalFlinkMiniCluster flink;
+	private volatile LocalFlinkMiniCluster flink;
--- End diff --

    Thanks for the useful information. Appreciate it.
[jira] [Commented] (FLINK-5646) REST api documentation missing details on jar upload
[ https://issues.apache.org/jira/browse/FLINK-5646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15969981#comment-15969981 ]

ASF GitHub Bot commented on FLINK-5646:
---
GitHub user hamstah opened a pull request: https://github.com/apache/flink/pull/3722

    [FLINK-5646] Document JAR upload with the REST API

    Hi,

    It took me a while to get my upload working, so I thought I would fix the documentation to help others. I wasn't sure if code examples in other languages were allowed, so I didn't include any, but I have a Python example I could add if you think it would be useful.

    Thanks!

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/hamstah/flink flink-5646-update-upload-docs

Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/flink/pull/3722.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #3722

commit 41b52f20cb64125fb4453a4adfbee3c4dbd7acae
Author: hamstah
Date: 2017-04-15T14:34:14Z

    [FLINK-5646] Document JAR upload with the REST API

> REST api documentation missing details on jar upload
> --
>
> Key: FLINK-5646
> URL: https://issues.apache.org/jira/browse/FLINK-5646
> Project: Flink
> Issue Type: Bug
> Components: Documentation
> Reporter: Cliff Resnick
> Priority: Minor
>
> The 1.2 release documentation
> (https://ci.apache.org/projects/flink/flink-docs-release-1.2/monitoring/rest_api.html)
> states "It is possible to upload, run, and list Flink programs via the REST
> APIs and web frontend". However there is no documentation about uploading a
> jar via REST api.
> There should be something to the effect of:
> "You can upload a jar file using http post with the file data sent under a
> form field 'jarfile'."
[GitHub] flink pull request #3722: [FLINK-5646] Document JAR upload with the REST API
GitHub user hamstah opened a pull request: https://github.com/apache/flink/pull/3722

    [FLINK-5646] Document JAR upload with the REST API

    Hi,

    It took me a while to get my upload working, so I thought I would fix the documentation to help others. I wasn't sure if code examples in other languages were allowed, so I didn't include any, but I have a Python example I could add if you think it would be useful.

    Thanks!

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/hamstah/flink flink-5646-update-upload-docs

Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/flink/pull/3722.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #3722

commit 41b52f20cb64125fb4453a4adfbee3c4dbd7acae
Author: hamstah
Date: 2017-04-15T14:34:14Z

    [FLINK-5646] Document JAR upload with the REST API
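The issue text above states that the upload is an HTTP POST with the file sent under a multipart form field named `jarfile`. As a hedged sketch of what that request body looks like (the helper class name and boundary value are illustrative, and the exact endpoint path is not stated in the issue), here is the multipart body assembled in plain Java:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Sketch: builds a multipart/form-data body carrying a JAR under the form
// field "jarfile" (the field name documented in FLINK-5646). The class name
// and boundary string are illustrative, not part of Flink.
public class JarUploadBody {

    public static byte[] build(String boundary, String fileName, byte[] jarBytes) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] head = ("--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"jarfile\"; filename=\"" + fileName + "\"\r\n"
                + "Content-Type: application/x-java-archive\r\n\r\n")
                .getBytes(StandardCharsets.UTF_8);
        byte[] tail = ("\r\n--" + boundary + "--\r\n").getBytes(StandardCharsets.UTF_8);
        out.write(head, 0, head.length);
        out.write(jarBytes, 0, jarBytes.length); // raw JAR content between header and closing boundary
        out.write(tail, 0, tail.length);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] body = build("----flinkBoundary", "job.jar", new byte[] {0x50, 0x4b});
        System.out.println(new String(body, StandardCharsets.ISO_8859_1));
    }
}
```

The body would be POSTed with a `Content-Type: multipart/form-data; boundary=...` header to the job manager's upload endpoint.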
[jira] [Commented] (FLINK-4604) Add support for standard deviation/variance
[ https://issues.apache.org/jira/browse/FLINK-4604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15969972#comment-15969972 ]

ASF GitHub Bot commented on FLINK-4604:
---
Github user ex00 commented on the issue: https://github.com/apache/flink/pull/3260

    Hello, I updated the PR for Calcite 1.12.

> Add support for standard deviation/variance
> --
>
> Key: FLINK-4604
> URL: https://issues.apache.org/jira/browse/FLINK-4604
> Project: Flink
> Issue Type: New Feature
> Components: Table API & SQL
> Reporter: Timo Walther
> Assignee: Anton Mushin
> Attachments: 1.jpg
>
> Calcite's {{AggregateReduceFunctionsRule}} can convert SQL {{AVG, STDDEV_POP,
> STDDEV_SAMP, VAR_POP, VAR_SAMP}} to sum/count functions. We should add, test
> and document this rule.
> Whether we also want to add these aggregates to the Table API is up for discussion.
[GitHub] flink issue #3260: [FLINK-4604] Add support for standard deviation/variance
Github user ex00 commented on the issue: https://github.com/apache/flink/pull/3260

    Hello, I updated the PR for Calcite 1.12.
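For context on what the rewrite in FLINK-4604 does: Calcite's {{AggregateReduceFunctionsRule}} relies on the standard algebraic identities that express variance through sums and counts. Roughly (this is a sketch of the identities, not the rule's literal output):

```latex
\mathrm{VAR\_POP}(x)  = \frac{\sum_i x_i^2 - \left(\sum_i x_i\right)^2 / n}{n},
\qquad
\mathrm{VAR\_SAMP}(x) = \frac{\sum_i x_i^2 - \left(\sum_i x_i\right)^2 / n}{n - 1}
```

with \(\mathrm{STDDEV\_POP}\) and \(\mathrm{STDDEV\_SAMP}\) as the square roots of these, and \(\mathrm{AVG}(x) = \left(\sum_i x_i\right) / n\). Each aggregate therefore reduces to a {{SUM(x)}}, a {{SUM(x*x)}}, and a {{COUNT(x)}}.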
[jira] [Commented] (FLINK-6271) NumericBetweenParametersProvider NullPointer
[ https://issues.apache.org/jira/browse/FLINK-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15969936#comment-15969936 ]

ASF GitHub Bot commented on FLINK-6271:
---
Github user zentol commented on the issue: https://github.com/apache/flink/pull/3686

    merging.

> NumericBetweenParametersProvider NullPointer
> --
>
> Key: FLINK-6271
> URL: https://issues.apache.org/jira/browse/FLINK-6271
> Project: Flink
> Issue Type: Bug
> Components: Batch Connectors and Input/Output Formats
> Affects Versions: 1.2.0
> Reporter: Flavio Pompermaier
> Assignee: Flavio Pompermaier
> Labels: jdbc
>
> Creating a NumericBetweenParametersProvider using fetchSize=1000, min=0 and
> max=999 fails with a NullPointerException.
[GitHub] flink issue #3686: [FLINK-6271][jdbc]Fix nullPointer when there's a single s...
Github user zentol commented on the issue: https://github.com/apache/flink/pull/3686

    merging.
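To make the reported edge case concrete: splitting the inclusive range [min, max] into fetchSize-sized parameter batches yields exactly one full batch for fetchSize=1000, min=0, max=999, and that single-batch case is where the report says the provider fails. A simplified sketch of that range splitting (illustrative code, not Flink's actual NumericBetweenParametersProvider implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of fetchSize-sized batching over an inclusive numeric
// range, the kind of work a between-parameters provider does so that
// parallel JDBC readers each query "WHERE id BETWEEN ? AND ?".
// Not the actual Flink implementation.
public class RangeBatcher {

    public static List<long[]> batches(long fetchSize, long min, long max) {
        List<long[]> result = new ArrayList<>();
        for (long start = min; start <= max; start += fetchSize) {
            long end = Math.min(start + fetchSize - 1, max); // clamp the last batch to max
            result.add(new long[] {start, end});
        }
        return result;
    }

    public static void main(String[] args) {
        // The case from the report: one full batch covering [0, 999].
        for (long[] b : batches(1000, 0, 999)) {
            System.out.println(b[0] + ".." + b[1]);
        }
    }
}
```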
[jira] [Commented] (FLINK-6143) Unprotected access to this.flink in LocalExecutor#endSession()
[ https://issues.apache.org/jira/browse/FLINK-6143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15969925#comment-15969925 ]

ASF GitHub Bot commented on FLINK-6143:
---
Github user zentol commented on a diff in the pull request: https://github.com/apache/flink/pull/3710#discussion_r111663699

--- Diff: flink-clients/src/main/java/org/apache/flink/client/LocalExecutor.java ---
@@ -59,7 +59,7 @@
 	private final Object lock = new Object();

 	/** The mini cluster on which to execute the local programs */
-	private LocalFlinkMiniCluster flink;
+	private volatile LocalFlinkMiniCluster flink;
--- End diff --

    If we guard all accesses with locks this doesn't have to be volatile.
[GitHub] flink pull request #3710: [FLINK-6143] [clients] Fix unprotected access to t...
Github user zentol commented on a diff in the pull request: https://github.com/apache/flink/pull/3710#discussion_r111663699

--- Diff: flink-clients/src/main/java/org/apache/flink/client/LocalExecutor.java ---
@@ -59,7 +59,7 @@
 	private final Object lock = new Object();

 	/** The mini cluster on which to execute the local programs */
-	private LocalFlinkMiniCluster flink;
+	private volatile LocalFlinkMiniCluster flink;
--- End diff --

    If we guard all accesses with locks this doesn't have to be volatile.
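zentol's remark reflects a general Java memory model rule: a field that is only ever read and written inside synchronized blocks on the same monitor needs no `volatile`, because monitor release/acquire already guarantees visibility between threads. A minimal illustration of the pattern, with hypothetical names rather than Flink's actual LocalExecutor code:

```java
// Sketch of zentol's point: every access to 'cluster' happens under 'lock',
// so the Java memory model already guarantees visibility and 'volatile'
// would be redundant. Names are illustrative, not Flink's actual code.
public class LockGuardedField {

    private final Object lock = new Object();
    private String cluster; // no 'volatile' needed: only touched under 'lock'

    public void start(String c) {
        synchronized (lock) {
            cluster = c; // write under the lock
        }
    }

    public boolean endSession() {
        synchronized (lock) {
            String current = cluster; // read under the same lock
            if (current != null) {
                cluster = null;
                return true;
            }
            return false;
        }
    }

    public static void main(String[] args) {
        LockGuardedField f = new LockGuardedField();
        f.start("mini-cluster");
        System.out.println(f.endSession()); // first call tears the session down
        System.out.println(f.endSession()); // second call finds nothing to do
    }
}
```

Making the field `volatile` instead (as the PR's first version did) is also correct for visibility, but it is belt-and-suspenders once every access holds the lock.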
[GitHub] flink issue #3708: [FLINK-6292] fix transfer.sh upload by using https
Github user zentol commented on the issue: https://github.com/apache/flink/pull/3708

    merging.
[jira] [Commented] (FLINK-6292) Travis: transfer.sh not accepting uploads via http:// anymore
[ https://issues.apache.org/jira/browse/FLINK-6292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15969923#comment-15969923 ]

ASF GitHub Bot commented on FLINK-6292:
---
Github user zentol commented on the issue: https://github.com/apache/flink/pull/3708

    merging.

> Travis: transfer.sh not accepting uploads via http:// anymore
> --
>
> Key: FLINK-6292
> URL: https://issues.apache.org/jira/browse/FLINK-6292
> Project: Flink
> Issue Type: Bug
> Components: Tests
> Affects Versions: 1.2.0, 1.3.0, 1.1.5
> Reporter: Nico Kruber
> Assignee: Nico Kruber
>
> The {{travis_mvn_watchdog.sh}} script tries to upload the logs to transfer.sh,
> but it seems they no longer accept uploads to {{http://transfer.sh}} and only
> accept {{https}} nowadays.
[GitHub] flink issue #3713: [FLINK-6299] make all IT cases extend from TestLogger
Github user zentol commented on the issue: https://github.com/apache/flink/pull/3713

    merging.
[jira] [Commented] (FLINK-6299) make all IT cases extend from TestLogger
[ https://issues.apache.org/jira/browse/FLINK-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15969921#comment-15969921 ]

ASF GitHub Bot commented on FLINK-6299:
---
Github user zentol commented on the issue: https://github.com/apache/flink/pull/3713

    merging.

> make all IT cases extend from TestLogger
> --
>
> Key: FLINK-6299
> URL: https://issues.apache.org/jira/browse/FLINK-6299
> Project: Flink
> Issue Type: Improvement
> Components: Tests
> Affects Versions: 1.3.0
> Reporter: Nico Kruber
> Assignee: Nico Kruber
> Priority: Minor
>
> Not all of the integration tests extend from {{TestLogger}}, but it is a
> very helpful tool: the currently running tests and their failures are written
> to the logs, which matters especially for tests whose errors are often buried
> in the logs.