[jira] [Resolved] (FLINK-32025) Make job cancellation button on UI configurable
[ https://issues.apache.org/jira/browse/FLINK-32025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu resolved FLINK-32025.
Resolution: Duplicate

> Make job cancellation button on UI configurable
> -----------------------------------------------
>
> Key: FLINK-32025
> URL: https://issues.apache.org/jira/browse/FLINK-32025
> Project: Flink
> Issue Type: Improvement
> Reporter: Ted Yu
> Priority: Major
>
> On the Flink job UI, there is a `Cancel Job` button.
> When the job UI is shown to users, it is desirable to hide the button so that normal users don't mistakenly cancel a long-running Flink job.
> This issue adds configuration for hiding the `Cancel Job` button.

--
This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-32025) Make job cancellation button on UI configurable
Ted Yu created FLINK-32025:
------------------------------

Summary: Make job cancellation button on UI configurable
Key: FLINK-32025
URL: https://issues.apache.org/jira/browse/FLINK-32025
Project: Flink
Issue Type: Improvement
Reporter: Ted Yu

On the Flink job UI, there is a `Cancel Job` button.
When the job UI is shown to users, it is desirable to hide the button so that normal users don't mistakenly cancel a long-running Flink job.
This issue adds configuration for hiding the `Cancel Job` button.

--
This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (FLINK-10446) Use the "guava beta checker" plugin to keep off of @Beta API
[ https://issues.apache.org/jira/browse/FLINK-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved FLINK-10446. Resolution: Won't Fix > Use the "guava beta checker" plugin to keep off of @Beta API > > > Key: FLINK-10446 > URL: https://issues.apache.org/jira/browse/FLINK-10446 > Project: Flink > Issue Type: Task > Components: Build System >Reporter: Ted Yu >Assignee: Ji Liu >Priority: Major > > The Guava people publish an Error Prone plugin to detect when stuff that's > annotated with @Beta gets used. Those things shouldn't be used because the > project gives no promises about deprecating before removal. > plugin: > https://github.com/google/guava-beta-checker -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (FLINK-10468) Potential missing break for PARTITION_CUSTOM in OutputEmitter ctor
[ https://issues.apache.org/jira/browse/FLINK-10468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu resolved FLINK-10468. Resolution: Not A Bug > Potential missing break for PARTITION_CUSTOM in OutputEmitter ctor > -- > > Key: FLINK-10468 > URL: https://issues.apache.org/jira/browse/FLINK-10468 > Project: Flink > Issue Type: Bug >Reporter: Ted Yu >Priority: Minor > > Here is related code: > {code} > switch (strategy) { > case PARTITION_CUSTOM: > extractedKeys = new Object[1]; > case FORWARD: > {code} > It seems a 'break' is missing prior to FORWARD case. > {code} > if (strategy == ShipStrategyType.PARTITION_CUSTOM && partitioner == null) > { > throw new NullPointerException("Partitioner must not be null when the > ship strategy is set to custom partitioning."); > } > {code} > Since the above check is for PARTITION_CUSTOM, it seems we can place the > check in the switch statement. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
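The fall-through the report describes can be reproduced in isolation. The sketch below is illustrative only: the enum, class name, and appended strings are invented for the demo and are not Flink code, but the control flow mirrors the quoted snippet (a `case` with no `break` continues into the next `case`):

```java
// Minimal sketch of a switch fall-through: without a break,
// PARTITION_CUSTOM also executes the FORWARD branch's statements.
public class FallThroughDemo {
    public enum Strategy { PARTITION_CUSTOM, FORWARD }

    public static String describe(Strategy strategy) {
        StringBuilder sb = new StringBuilder();
        switch (strategy) {
            case PARTITION_CUSTOM:
                sb.append("allocated-keys;");
                // no break: execution deliberately falls through into FORWARD
            case FORWARD:
                sb.append("forward-setup;");
                break;
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(describe(Strategy.PARTITION_CUSTOM)); // allocated-keys;forward-setup;
        System.out.println(describe(Strategy.FORWARD));          // forward-setup;
    }
}
```

As the resolution ("Not A Bug") suggests, the fall-through in OutputEmitter is apparently intentional; the reporter's point is that an explicit comment (or break) would make that intent obvious to readers.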
[jira] [Comment Edited] (FLINK-7588) Document RocksDB tuning for spinning disks
[ https://issues.apache.org/jira/browse/FLINK-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258309#comment-16258309 ] Ted Yu edited comment on FLINK-7588 at 10/2/18 1:56 AM: bq. Be careful about whether you have enough memory to keep all bloom filters Other than the above being tricky, the other guidelines are actionable . was (Author: yuzhih...@gmail.com): bq. Be careful about whether you have enough memory to keep all bloom filters Other than the above being tricky, the other guidelines are actionable. > Document RocksDB tuning for spinning disks > -- > > Key: FLINK-7588 > URL: https://issues.apache.org/jira/browse/FLINK-7588 > Project: Flink > Issue Type: Improvement > Components: Documentation >Reporter: Ted Yu >Priority: Major > Labels: performance > > In docs/ops/state/large_state_tuning.md , it was mentioned that: > bq. the default configuration is tailored towards SSDs and performs > suboptimal on spinning disks > We should add recommendation targeting spinning disks: > https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide#difference-of-spinning-disk -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9824) Support IPv6 literal
[ https://issues.apache.org/jira/browse/FLINK-9824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9824: -- Description: Currently we use colon as separator when parsing host and port. We should support the usage of IPv6 literals in parsing. was: Currently we use colon as separator when parsing host and port. We should support the usage of IPv6 literals in parsing. > Support IPv6 literal > > > Key: FLINK-9824 > URL: https://issues.apache.org/jira/browse/FLINK-9824 > Project: Flink > Issue Type: Bug > Components: Network >Reporter: Ted Yu >Assignee: vinoyang >Priority: Minor > > Currently we use colon as separator when parsing host and port. > We should support the usage of IPv6 literals in parsing. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
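Because IPv6 literals themselves contain colons, a plain split on `:` is ambiguous; the usual convention (as in RFC 3986 URIs) is to require brackets around the address, e.g. `[::1]:8080`. The following is a hedged sketch of that idea, not Flink's actual parsing code; the class and method names are hypothetical:

```java
// Sketch of host:port parsing that tolerates bracketed IPv6 literals.
// Not Flink's implementation; illustrates the bracket convention only.
public class HostPortParser {
    /** Returns {host, port} for "host:port" or "[ipv6]:port". */
    public static String[] parse(String hostPort) {
        if (hostPort.startsWith("[")) {
            int close = hostPort.indexOf(']');
            if (close < 0 || close + 1 >= hostPort.length() || hostPort.charAt(close + 1) != ':') {
                throw new IllegalArgumentException("Malformed bracketed host: " + hostPort);
            }
            return new String[] { hostPort.substring(1, close), hostPort.substring(close + 2) };
        }
        // Hostname or IPv4: the last colon separates host from port.
        int colon = hostPort.lastIndexOf(':');
        if (colon < 0) {
            throw new IllegalArgumentException("Missing port: " + hostPort);
        }
        return new String[] { hostPort.substring(0, colon), hostPort.substring(colon + 1) };
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(parse("example.com:6123")));   // [example.com, 6123]
        System.out.println(java.util.Arrays.toString(parse("[2001:db8::1]:6123"))); // [2001:db8::1, 6123]
    }
}
```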
[jira] [Updated] (FLINK-9924) Upgrade zookeeper to 3.4.13
[ https://issues.apache.org/jira/browse/FLINK-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9924: -- Description: zookeeper 3.4.13 is being released. ZOOKEEPER-2959 fixes data loss when observer is used ZOOKEEPER-2184 allows ZooKeeper Java clients to work in dynamic IP (container / cloud) environment was: zookeeper 3.4.13 is being released. ZOOKEEPER-2959 fixes data loss when observer is used ZOOKEEPER-2184 allows ZooKeeper Java clients to work in dynamic IP (container / cloud) environment > Upgrade zookeeper to 3.4.13 > --- > > Key: FLINK-9924 > URL: https://issues.apache.org/jira/browse/FLINK-9924 > Project: Flink > Issue Type: Task >Reporter: Ted Yu >Assignee: vinoyang >Priority: Major > > zookeeper 3.4.13 is being released. > ZOOKEEPER-2959 fixes data loss when observer is used > ZOOKEEPER-2184 allows ZooKeeper Java clients to work in dynamic IP (container > / cloud) environment -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-10391) MillisOfDay is used in place of instant for LocalTime ctor in AvroKryoSerializerUtils
[ https://issues.apache.org/jira/browse/FLINK-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-10391: --- Description: >From the JodaLocalTimeSerializer#write, we serialize getMillisOfDay() value >from LocalTime. For read method: {code} final int time = input.readInt(true); return new LocalTime(time, ISOChronology.getInstanceUTC().withZone(DateTimeZone.UTC)); {code} It seems http://joda-time.sourceforge.net/apidocs/org/joda/time/LocalTime.html#fromMillisOfDay(long,%20org.joda.time.Chronology) should be used instead. was: >From the JodaLocalTimeSerializer#write, we serialize getMillisOfDay() value >from LocalTime. For read method: {code} final int time = input.readInt(true); return new LocalTime(time, ISOChronology.getInstanceUTC().withZone(DateTimeZone.UTC)); {code} It seems http://joda-time.sourceforge.net/apidocs/org/joda/time/LocalTime.html#fromMillisOfDay(long,%20org.joda.time.Chronology) should be used instead. > MillisOfDay is used in place of instant for LocalTime ctor in > AvroKryoSerializerUtils > - > > Key: FLINK-10391 > URL: https://issues.apache.org/jira/browse/FLINK-10391 > Project: Flink > Issue Type: Bug >Reporter: Ted Yu >Priority: Minor > > From the JodaLocalTimeSerializer#write, we serialize getMillisOfDay() value > from LocalTime. > For read method: > {code} > final int time = input.readInt(true); > return new LocalTime(time, > ISOChronology.getInstanceUTC().withZone(DateTimeZone.UTC)); > {code} > It seems > http://joda-time.sourceforge.net/apidocs/org/joda/time/LocalTime.html#fromMillisOfDay(long,%20org.joda.time.Chronology) > should be used instead. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
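The Joda constructor `LocalTime(long instant, Chronology)` interprets its first argument as an epoch instant, not a millis-of-day value, so feeding it the serialized `getMillisOfDay()` result conflates two different units. A minimal sketch of the semantic difference, using `java.time` as a stand-in for Joda-Time so it runs without extra dependencies:

```java
import java.time.Instant;
import java.time.LocalTime;
import java.time.ZoneOffset;

// Illustrates why a millis-of-day value must not be passed to an
// instant-based constructor: the same number means different times.
public class MillisOfDayDemo {
    public static void main(String[] args) {
        long millisOfDay = 1000; // what getMillisOfDay() yields for 00:00:01

        // Correct: interpret the value as time-of-day.
        LocalTime asTimeOfDay = LocalTime.ofNanoOfDay(millisOfDay * 1_000_000L);

        // Wrong interpretation: treat it as an epoch instant in a non-UTC
        // zone (analogous to Joda's LocalTime(long instant, Chronology)).
        LocalTime asInstant = Instant.ofEpochMilli(millisOfDay)
                .atOffset(ZoneOffset.ofHours(5)).toLocalTime();

        System.out.println(asTimeOfDay); // 00:00:01
        System.out.println(asInstant);   // 05:00:01
    }
}
```

Note that with the UTC chronology actually used in the quoted snippet, the two interpretations happen to coincide for values under 24 hours, which may be why the mismatch is latent rather than a visible bug.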
[jira] [Commented] (FLINK-10468) Potential missing break for PARTITION_CUSTOM in OutputEmitter ctor
[ https://issues.apache.org/jira/browse/FLINK-10468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16633416#comment-16633416 ] Ted Yu commented on FLINK-10468: That was why I started the issue title with Potential. Even if this is the case, assigning {{channels}} and breaking would make the code easier to understand for other people. Or, a comment should be added stating the fact. > Potential missing break for PARTITION_CUSTOM in OutputEmitter ctor > -- > > Key: FLINK-10468 > URL: https://issues.apache.org/jira/browse/FLINK-10468 > Project: Flink > Issue Type: Bug >Reporter: Ted Yu >Priority: Minor > > Here is related code: > {code} > switch (strategy) { > case PARTITION_CUSTOM: > extractedKeys = new Object[1]; > case FORWARD: > {code} > It seems a 'break' is missing prior to FORWARD case. > {code} > if (strategy == ShipStrategyType.PARTITION_CUSTOM && partitioner == null) > { > throw new NullPointerException("Partitioner must not be null when the > ship strategy is set to custom partitioning."); > } > {code} > Since the above check is for PARTITION_CUSTOM, it seems we can place the > check in the switch statement. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (FLINK-10468) Potential missing break for PARTITION_CUSTOM in OutputEmitter ctor
Ted Yu created FLINK-10468:
------------------------------

Summary: Potential missing break for PARTITION_CUSTOM in OutputEmitter ctor
Key: FLINK-10468
URL: https://issues.apache.org/jira/browse/FLINK-10468
Project: Flink
Issue Type: Bug
Reporter: Ted Yu

Here is related code:
{code}
switch (strategy) {
case PARTITION_CUSTOM:
	extractedKeys = new Object[1];
case FORWARD:
{code}
It seems a 'break' is missing prior to FORWARD case.
{code}
if (strategy == ShipStrategyType.PARTITION_CUSTOM && partitioner == null) {
	throw new NullPointerException("Partitioner must not be null when the ship strategy is set to custom partitioning.");
}
{code}
Since the above check is for PARTITION_CUSTOM, it seems we can place the check in the switch statement.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (FLINK-10467) Upgrade commons-compress to 1.18
Ted Yu created FLINK-10467: -- Summary: Upgrade commons-compress to 1.18 Key: FLINK-10467 URL: https://issues.apache.org/jira/browse/FLINK-10467 Project: Flink Issue Type: Task Reporter: Ted Yu org.apache.commons:commons-compress defines an API for working with compression and archive formats. Affected versions of this package are vulnerable to Directory Traversal. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (FLINK-10388) RestClientTest sometimes fails with AssertionError
[ https://issues.apache.org/jira/browse/FLINK-10388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16632560#comment-16632560 ] Ted Yu commented on FLINK-10388: I don't see 'Network unreachable' error in FLINK-4052 > RestClientTest sometimes fails with AssertionError > -- > > Key: FLINK-10388 > URL: https://issues.apache.org/jira/browse/FLINK-10388 > Project: Flink > Issue Type: Test >Reporter: Ted Yu >Priority: Minor > > Running the test on Linux I got: > {code} > testConnectionTimeout(org.apache.flink.runtime.rest.RestClientTest) Time > elapsed: 1.918 sec <<< FAILURE! > java.lang.AssertionError: > Expected: an instance of > org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException > but: > Network is unreachable: /10.255.255.1:80> is a > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedSocketException > at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) > at org.junit.Assert.assertThat(Assert.java:956) > at org.junit.Assert.assertThat(Assert.java:923) > at > org.apache.flink.runtime.rest.RestClientTest.testConnectionTimeout(RestClientTest.java:69) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9048) LocalFlinkMiniClusterITCase#testLocalFlinkMiniClusterWithMultipleTaskManagers sometimes fails
[ https://issues.apache.org/jira/browse/FLINK-9048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu updated FLINK-9048:
--------------------------
Description:
As of commit e0bc37bef69f5376d03214578e9b95816add661b, I got the following:
{code}
testLocalFlinkMiniClusterWithMultipleTaskManagers(org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase)  Time elapsed: 41.681 sec  <<< FAILURE!
java.lang.AssertionError: Thread Thread[ForkJoinPool.commonPool-worker-25,5,main] was started by the mini cluster, but not shut down
	at org.junit.Assert.fail(Assert.java:88)
	at org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase.testLocalFlinkMiniClusterWithMultipleTaskManagers(LocalFlinkMiniClusterITCase.java:174)
{code}

was:
As of commit e0bc37bef69f5376d03214578e9b95816add661b, I got the following:
{code}
testLocalFlinkMiniClusterWithMultipleTaskManagers(org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase)  Time elapsed: 41.681 sec  <<< FAILURE!
java.lang.AssertionError: Thread Thread[ForkJoinPool.commonPool-worker-25,5,main] was started by the mini cluster, but not shut down
	at org.junit.Assert.fail(Assert.java:88)
	at org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase.testLocalFlinkMiniClusterWithMultipleTaskManagers(LocalFlinkMiniClusterITCase.java:174)
{code}

> LocalFlinkMiniClusterITCase#testLocalFlinkMiniClusterWithMultipleTaskManagers sometimes fails
> ---------------------------------------------------------------------------------------------
>
> Key: FLINK-9048
> URL: https://issues.apache.org/jira/browse/FLINK-9048
> Project: Flink
> Issue Type: Test
> Reporter: Ted Yu
> Priority: Minor
>
> As of commit e0bc37bef69f5376d03214578e9b95816add661b, I got the following:
> {code}
> testLocalFlinkMiniClusterWithMultipleTaskManagers(org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase)  Time elapsed: 41.681 sec  <<< FAILURE!
> java.lang.AssertionError: Thread Thread[ForkJoinPool.commonPool-worker-25,5,main] was started by the mini cluster, but not shut down
> 	at org.junit.Assert.fail(Assert.java:88)
> 	at org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase.testLocalFlinkMiniClusterWithMultipleTaskManagers(LocalFlinkMiniClusterITCase.java:174)
> {code}

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (FLINK-10446) Use the "guava beta checker" plugin to keep off of @Beta API
Ted Yu created FLINK-10446: -- Summary: Use the "guava beta checker" plugin to keep off of @Beta API Key: FLINK-10446 URL: https://issues.apache.org/jira/browse/FLINK-10446 Project: Flink Issue Type: Task Reporter: Ted Yu The Guava people publish an Error Prone plugin to detect when stuff that's annotated with @Beta gets used. Those things shouldn't be used because the project gives no promises about deprecating before removal. plugin: https://github.com/google/guava-beta-checker -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-10228) Add metrics for netty direct memory consumption
[ https://issues.apache.org/jira/browse/FLINK-10228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-10228: --- Description: netty direct memory usage can be exposed via metrics so that operator can keep track of memory consumption . (was: netty direct memory usage can be exposed via metrics so that operator can keep track of memory consumption.) > Add metrics for netty direct memory consumption > --- > > Key: FLINK-10228 > URL: https://issues.apache.org/jira/browse/FLINK-10228 > Project: Flink > Issue Type: Improvement > Components: Metrics >Reporter: Ted Yu >Assignee: vinoyang >Priority: Minor > > netty direct memory usage can be exposed via metrics so that operator can > keep track of memory consumption . -- This message was sent by Atlassian JIRA (v7.6.3#76005)
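The general idea behind such a metric can be sketched without Flink's `MetricGroup` API (the gauge registration is schematic here and names are illustrative). This sketch reads the JVM's "direct" buffer pool via JMX; netty's own counter (`io.netty.util.internal.PlatformDependent.usedDirectMemory()`) would be the netty-specific equivalent but requires netty on the classpath:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

// Sketch of exposing direct-memory usage as a gauge-style metric.
// Reads the JVM-wide "direct" buffer pool, which includes direct
// buffers allocated by netty; not Flink's actual implementation.
public class DirectMemoryMetric {
    public static long usedDirectMemory() {
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if ("direct".equals(pool.getName())) {
                return pool.getMemoryUsed(); // bytes, -1 if unavailable
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Force some direct allocation so the counter is non-trivial.
        java.nio.ByteBuffer buf = java.nio.ByteBuffer.allocateDirect(1 << 20);
        System.out.println("direct bytes used: " + usedDirectMemory() + " (capacity held: " + buf.capacity() + ")");
    }
}
```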
[jira] [Updated] (FLINK-9363) Bump up the Jackson version
[ https://issues.apache.org/jira/browse/FLINK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9363: -- Description: CVE's for Jackson : CVE-2017-17485 CVE-2018-5968 CVE-2018-7489 We can upgrade to 2.9.5 was: CVE's for Jackson : CVE-2017-17485 CVE-2018-5968 CVE-2018-7489 We can upgrade to 2.9.5 > Bump up the Jackson version > --- > > Key: FLINK-9363 > URL: https://issues.apache.org/jira/browse/FLINK-9363 > Project: Flink > Issue Type: Improvement >Reporter: Ted Yu >Assignee: vinoyang >Priority: Major > Labels: security > > CVE's for Jackson : > CVE-2017-17485 > CVE-2018-5968 > CVE-2018-7489 > We can upgrade to 2.9.5 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Issue Comment Deleted] (FLINK-8037) Missing cast in integer arithmetic in TransactionalIdsGenerator#generateIdsToAbort
[ https://issues.apache.org/jira/browse/FLINK-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-8037: -- Comment: was deleted (was: Please rebase PR.) > Missing cast in integer arithmetic in > TransactionalIdsGenerator#generateIdsToAbort > -- > > Key: FLINK-8037 > URL: https://issues.apache.org/jira/browse/FLINK-8037 > Project: Flink > Issue Type: Bug >Reporter: Ted Yu >Assignee: Greg Hogan >Priority: Minor > Labels: kafka, kafka-connect > > {code} > public Set generateIdsToAbort() { > Set idsToAbort = new HashSet<>(); > for (int i = 0; i < safeScaleDownFactor; i++) { > idsToAbort.addAll(generateIdsToUse(i * poolSize * > totalNumberOfSubtasks)); > {code} > The operands are integers where generateIdsToUse() expects long parameter. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
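The quoted expression `i * poolSize * totalNumberOfSubtasks` evaluates entirely in 32-bit `int` arithmetic before being passed to a method whose parameter is `long`, so it can silently overflow. A minimal sketch with invented values (not taken from Flink):

```java
// Demonstrates the missing-cast hazard: multiplying three ints
// overflows in int even when the result is assigned to a long.
public class OverflowDemo {
    public static void main(String[] args) {
        int i = 3, poolSize = 100_000, totalNumberOfSubtasks = 10_000;

        long wrong = i * poolSize * totalNumberOfSubtasks;        // int overflow, then widened
        long right = (long) i * poolSize * totalNumberOfSubtasks; // promote to long first

        System.out.println(wrong); // -1294967296
        System.out.println(right); // 3000000000
    }
}
```

Casting the first operand to `long` forces the whole chain of multiplications into 64-bit arithmetic, which is the usual fix.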
[jira] [Updated] (FLINK-8554) Upgrade AWS SDK
[ https://issues.apache.org/jira/browse/FLINK-8554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu updated FLINK-8554:
--------------------------
Description:
AWS SDK 1.11.271 fixes a lot of bugs.
One of which would exhibit the following:
{code}
Caused by: java.lang.NullPointerException
	at com.amazonaws.metrics.AwsSdkMetrics.getRegion(AwsSdkMetrics.java:729)
	at com.amazonaws.metrics.MetricAdmin.getRegion(MetricAdmin.java:67)
	at sun.reflect.GeneratedMethodAccessor132.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
{code}
AWS SDK 1.11.375 has been released.
https://aws.amazon.com/about-aws/whats-new/2018/03/longer-role-sessions/

was:
AWS SDK 1.11.271 fixes a lot of bugs.
One of which would exhibit the following:
{code}
Caused by: java.lang.NullPointerException
	at com.amazonaws.metrics.AwsSdkMetrics.getRegion(AwsSdkMetrics.java:729)
	at com.amazonaws.metrics.MetricAdmin.getRegion(MetricAdmin.java:67)
	at sun.reflect.GeneratedMethodAccessor132.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
{code}

> Upgrade AWS SDK
> ---------------
>
> Key: FLINK-8554
> URL: https://issues.apache.org/jira/browse/FLINK-8554
> Project: Flink
> Issue Type: Improvement
> Components: Build System
> Reporter: Ted Yu
> Assignee: vinoyang
> Priority: Minor
>
> AWS SDK 1.11.271 fixes a lot of bugs.
> One of which would exhibit the following:
> {code}
> Caused by: java.lang.NullPointerException
> 	at com.amazonaws.metrics.AwsSdkMetrics.getRegion(AwsSdkMetrics.java:729)
> 	at com.amazonaws.metrics.MetricAdmin.getRegion(MetricAdmin.java:67)
> 	at sun.reflect.GeneratedMethodAccessor132.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> {code}
> AWS SDK 1.11.375 has been released.
> https://aws.amazon.com/about-aws/whats-new/2018/03/longer-role-sessions/

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (FLINK-10391) MillisOfDay is used in place of instant for LocalTime ctor in AvroKryoSerializerUtils
Ted Yu created FLINK-10391:
------------------------------

Summary: MillisOfDay is used in place of instant for LocalTime ctor in AvroKryoSerializerUtils
Key: FLINK-10391
URL: https://issues.apache.org/jira/browse/FLINK-10391
Project: Flink
Issue Type: Bug
Reporter: Ted Yu

From the JodaLocalTimeSerializer#write, we serialize getMillisOfDay() value from LocalTime.
For read method:
{code}
final int time = input.readInt(true);
return new LocalTime(time, ISOChronology.getInstanceUTC().withZone(DateTimeZone.UTC));
{code}
It seems http://joda-time.sourceforge.net/apidocs/org/joda/time/LocalTime.html#fromMillisOfDay(long,%20org.joda.time.Chronology) should be used instead.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (FLINK-10389) TaskManagerServicesConfiguration ctor contains self assignment
Ted Yu created FLINK-10389:
------------------------------

Summary: TaskManagerServicesConfiguration ctor contains self assignment
Key: FLINK-10389
URL: https://issues.apache.org/jira/browse/FLINK-10389
Project: Flink
Issue Type: Task
Reporter: Ted Yu

TaskManagerServicesConfiguration has:
{code}
this.systemResourceMetricsEnabled = systemResourceMetricsEnabled;
{code}
There is no systemResourceMetricsEnabled parameter to the ctor.
This was reported by findbugs.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
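The pattern findbugs flags here is easy to reproduce: with no matching constructor parameter, the right-hand side resolves to the field itself, so the field silently keeps its default value. A minimal sketch (the class and parameter name are invented; only the field name is borrowed from the snippet above):

```java
// Sketch of the self-assignment bug: the field reads itself, so the
// constructor argument is silently ignored.
public class SelfAssignDemo {
    private boolean systemResourceMetricsEnabled;

    public SelfAssignDemo(boolean ignoredFlag) {
        // Bug: no parameter named systemResourceMetricsEnabled exists,
        // so this assigns the field to itself (always false here).
        this.systemResourceMetricsEnabled = systemResourceMetricsEnabled;
    }

    public boolean isEnabled() {
        return systemResourceMetricsEnabled;
    }

    public static void main(String[] args) {
        System.out.println(new SelfAssignDemo(true).isEnabled()); // false
    }
}
```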
[jira] [Created] (FLINK-10388) RestClientTest sometimes fails with AssertionError
Ted Yu created FLINK-10388:
------------------------------

Summary: RestClientTest sometimes fails with AssertionError
Key: FLINK-10388
URL: https://issues.apache.org/jira/browse/FLINK-10388
Project: Flink
Issue Type: Test
Reporter: Ted Yu

Running the test on Linux I got:
{code}
testConnectionTimeout(org.apache.flink.runtime.rest.RestClientTest)  Time elapsed: 1.918 sec  <<< FAILURE!
java.lang.AssertionError:
Expected: an instance of org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException
     but: is a org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedSocketException
	at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
	at org.junit.Assert.assertThat(Assert.java:956)
	at org.junit.Assert.assertThat(Assert.java:923)
	at org.apache.flink.runtime.rest.RestClientTest.testConnectionTimeout(RestClientTest.java:69)
{code}

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (FLINK-7795) Utilize error-prone to discover common coding mistakes
[ https://issues.apache.org/jira/browse/FLINK-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345955#comment-16345955 ] Ted Yu edited comment on FLINK-7795 at 9/20/18 4:30 PM: error-prone has JDK 8 dependency . was (Author: yuzhih...@gmail.com): error-prone has JDK 8 dependency. > Utilize error-prone to discover common coding mistakes > -- > > Key: FLINK-7795 > URL: https://issues.apache.org/jira/browse/FLINK-7795 > Project: Flink > Issue Type: Improvement > Components: Build System >Reporter: Ted Yu >Priority: Major > > http://errorprone.info/ is a tool which detects common coding mistakes. > We should incorporate into Flink build process. > Here are the dependencies: > {code} > > com.google.errorprone > error_prone_annotation > ${error-prone.version} > provided > > > > com.google.auto.service > auto-service > 1.0-rc3 > true > > > com.google.errorprone > error_prone_check_api > ${error-prone.version} > provided > > > com.google.code.findbugs > jsr305 > > > > > com.google.errorprone > javac > 9-dev-r4023-3 > provided > > > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (FLINK-6105) Properly handle InterruptedException in HadoopInputFormatBase
[ https://issues.apache.org/jira/browse/FLINK-6105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307281#comment-16307281 ]

Ted Yu edited comment on FLINK-6105 at 9/20/18 4:29 PM:

In flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/RollingSink.java :
{code}
try {
	Thread.sleep(500);
} catch (InterruptedException e1) {
	// ignore it
}
{code}
Interrupt status should be restored, or throw InterruptedIOException.

was (Author: yuzhih...@gmail.com):
In flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/RollingSink.java :
{code}
try {
	Thread.sleep(500);
} catch (InterruptedException e1) {
	// ignore it
}
{code}
Interrupt status should be restored, or throw InterruptedIOException .

> Properly handle InterruptedException in HadoopInputFormatBase
> -------------------------------------------------------------
>
> Key: FLINK-6105
> URL: https://issues.apache.org/jira/browse/FLINK-6105
> Project: Flink
> Issue Type: Bug
> Components: DataStream API
> Reporter: Ted Yu
> Assignee: zhangminglei
> Priority: Major
>
> When catching InterruptedException, we should throw InterruptedIOException instead of IOException.
> The following example is from HadoopInputFormatBase :
> {code}
> try {
> 	splits = this.mapreduceInputFormat.getSplits(jobContext);
> } catch (InterruptedException e) {
> 	throw new IOException("Could not get Splits.", e);
> }
> {code}
> There may be other places where IOE is thrown.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
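The fix the comment asks for is the standard idiom: when an `InterruptedException` is swallowed, call `Thread.currentThread().interrupt()` so callers can still observe the interruption. The helper below is an illustrative sketch, not RollingSink's actual code:

```java
// Sketch of restoring interrupt status instead of silently ignoring it.
public class InterruptRestoreDemo {
    public static void pauseBriefly() {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            // Restore the flag rather than "// ignore it", so code further
            // up the stack can still detect the interruption.
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt();  // simulate an interrupt arriving
        pauseBriefly();                      // sleep throws immediately, flag restored
        System.out.println(Thread.currentThread().isInterrupted()); // true
        Thread.interrupted();                // clear flag before exiting
    }
}
```

In I/O code paths, throwing `InterruptedIOException` (as the issue description suggests) is an equally valid alternative to restoring the flag.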
[jira] [Comment Edited] (FLINK-7642) Upgrade maven surefire plugin to 2.21.0
[ https://issues.apache.org/jira/browse/FLINK-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433258#comment-16433258 ] Ted Yu edited comment on FLINK-7642 at 9/17/18 10:38 PM: - SUREFIRE-1439 is in 2.21.0 which is needed for compiling with Java 10 . was (Author: yuzhih...@gmail.com): SUREFIRE-1439 is in 2.21.0 which is needed for compiling with Java 10. > Upgrade maven surefire plugin to 2.21.0 > --- > > Key: FLINK-7642 > URL: https://issues.apache.org/jira/browse/FLINK-7642 > Project: Flink > Issue Type: Improvement > Components: Build System >Reporter: Ted Yu >Assignee: vinoyang >Priority: Major > > Surefire 2.19 release introduced more useful test filters which would let us > run a subset of the test. > This issue is for upgrading maven surefire plugin to 2.21.0 which contains > SUREFIRE-1422 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-7795) Utilize error-prone to discover common coding mistakes
[ https://issues.apache.org/jira/browse/FLINK-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-7795: -- Description: http://errorprone.info/ is a tool which detects common coding mistakes. We should incorporate into Flink build process. Here are the dependencies: {code} com.google.errorprone error_prone_annotation ${error-prone.version} provided com.google.auto.service auto-service 1.0-rc3 true com.google.errorprone error_prone_check_api ${error-prone.version} provided com.google.code.findbugs jsr305 com.google.errorprone javac 9-dev-r4023-3 provided {code} was: http://errorprone.info/ is a tool which detects common coding mistakes. We should incorporate into Flink build process. Here are the dependencies: {code} com.google.errorprone error_prone_annotation ${error-prone.version} provided com.google.auto.service auto-service 1.0-rc3 true com.google.errorprone error_prone_check_api ${error-prone.version} provided com.google.code.findbugs jsr305 com.google.errorprone javac 9-dev-r4023-3 provided {code} > Utilize error-prone to discover common coding mistakes > -- > > Key: FLINK-7795 > URL: https://issues.apache.org/jira/browse/FLINK-7795 > Project: Flink > Issue Type: Improvement > Components: Build System >Reporter: Ted Yu >Priority: Major > > http://errorprone.info/ is a tool which detects common coding mistakes. > We should incorporate into Flink build process. > Here are the dependencies: > {code} > > com.google.errorprone > error_prone_annotation > ${error-prone.version} > provided > > > > com.google.auto.service > auto-service > 1.0-rc3 > true > > > com.google.errorprone > error_prone_check_api > ${error-prone.version} > provided > > > com.google.code.findbugs > jsr305 > > > > > com.google.errorprone > javac > 9-dev-r4023-3 > provided > > > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (FLINK-9150) Prepare for Java 10
[ https://issues.apache.org/jira/browse/FLINK-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16473198#comment-16473198 ] Ted Yu edited comment on FLINK-9150 at 9/5/18 8:09 PM: --- Similar error is encountered when building against jdk 11 . was (Author: yuzhih...@gmail.com): Similar error is encountered when building against jdk 11. > Prepare for Java 10 > --- > > Key: FLINK-9150 > URL: https://issues.apache.org/jira/browse/FLINK-9150 > Project: Flink > Issue Type: Task > Components: Build System >Reporter: Ted Yu >Priority: Major > > Java 9 is not a LTS release. > When compiling with Java 10, I see the following compilation error: > {code} > [ERROR] Failed to execute goal on project flink-shaded-hadoop2: Could not > resolve dependencies for project > org.apache.flink:flink-shaded-hadoop2:jar:1.6-SNAPSHOT: Could not find > artifact jdk.tools:jdk.tools:jar:1.6 at specified path > /a/jdk-10/../lib/tools.jar -> [Help 1] > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9048) LocalFlinkMiniClusterITCase#testLocalFlinkMiniClusterWithMultipleTaskManagers sometimes fails
[ https://issues.apache.org/jira/browse/FLINK-9048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9048: -- Labels: (was: local-job-runner) > LocalFlinkMiniClusterITCase#testLocalFlinkMiniClusterWithMultipleTaskManagers > sometimes fails > - > > Key: FLINK-9048 > URL: https://issues.apache.org/jira/browse/FLINK-9048 > Project: Flink > Issue Type: Test >Reporter: Ted Yu >Priority: Minor > > As of commit e0bc37bef69f5376d03214578e9b95816add661b, I got the following : > {code} > testLocalFlinkMiniClusterWithMultipleTaskManagers(org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase) > Time elapsed: 41.681 sec <<< FAILURE! > java.lang.AssertionError: Thread > Thread[ForkJoinPool.commonPool-worker-25,5,main] was started by the mini > cluster, but not shut down > at org.junit.Assert.fail(Assert.java:88) > at > org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase.testLocalFlinkMiniClusterWithMultipleTaskManagers(LocalFlinkMiniClusterITCase.java:174) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (FLINK-8554) Upgrade AWS SDK
[ https://issues.apache.org/jira/browse/FLINK-8554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16507192#comment-16507192 ] Ted Yu edited comment on FLINK-8554 at 9/4/18 4:35 PM: --- Or use this JIRA for the next upgrade of AWS SDK . was (Author: yuzhih...@gmail.com): Or use this JIRA for the next upgrade of AWS SDK. > Upgrade AWS SDK > --- > > Key: FLINK-8554 > URL: https://issues.apache.org/jira/browse/FLINK-8554 > Project: Flink > Issue Type: Improvement > Components: Build System >Reporter: Ted Yu >Assignee: vinoyang >Priority: Minor > > AWS SDK 1.11.271 fixes a lot of bugs. > One of which would exhibit the following: > {code} > Caused by: java.lang.NullPointerException > at com.amazonaws.metrics.AwsSdkMetrics.getRegion(AwsSdkMetrics.java:729) > at com.amazonaws.metrics.MetricAdmin.getRegion(MetricAdmin.java:67) > at sun.reflect.GeneratedMethodAccessor132.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-10228) Add metrics for netty direct memory consumption
[ https://issues.apache.org/jira/browse/FLINK-10228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-10228: --- Component/s: Metrics > Add metrics for netty direct memory consumption > --- > > Key: FLINK-10228 > URL: https://issues.apache.org/jira/browse/FLINK-10228 > Project: Flink > Issue Type: Improvement > Components: Metrics >Reporter: Ted Yu >Assignee: vinoyang >Priority: Minor > > netty direct memory usage can be exposed via metrics so that operator can > keep track of memory consumption. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9824) Support IPv6 literal
[ https://issues.apache.org/jira/browse/FLINK-9824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9824: -- Description: Currently we use colon as separator when parsing host and port. We should support the usage of IPv6 literals in parsing. was: Currently we use colon as separator when parsing host and port. We should support the usage of IPv6 literals in parsing . > Support IPv6 literal > > > Key: FLINK-9824 > URL: https://issues.apache.org/jira/browse/FLINK-9824 > Project: Flink > Issue Type: Bug > Components: Network >Reporter: Ted Yu >Assignee: vinoyang >Priority: Minor > > Currently we use colon as separator when parsing host and port. > We should support the usage of IPv6 literals in parsing. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (FLINK-9735) Potential resource leak in RocksDBStateBackend#getDbOptions
[ https://issues.apache.org/jira/browse/FLINK-9735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559975#comment-16559975 ] Ted Yu edited comment on FLINK-9735 at 9/4/18 3:27 AM: --- Thanks, Vino . was (Author: yuzhih...@gmail.com): Thanks, Vino. > Potential resource leak in RocksDBStateBackend#getDbOptions > --- > > Key: FLINK-9735 > URL: https://issues.apache.org/jira/browse/FLINK-9735 > Project: Flink > Issue Type: Bug >Reporter: Ted Yu >Assignee: vinoyang >Priority: Minor > > Here is related code: > {code} > if (optionsFactory != null) { > opt = optionsFactory.createDBOptions(opt); > } > {code} > opt, an DBOptions instance, should be closed before being rewritten. > getColumnOptions has similar issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
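The leak pattern described in FLINK-9735 (an options object being overwritten without being closed) can be illustrated with a small stand-in. Everything below is hypothetical: `Options` stands in for RocksDB's `DBOptions`, and the `OPEN` counter exists only so the behavior is observable; this is not Flink or RocksDB code:

```java
import java.io.Closeable;
import java.util.concurrent.atomic.AtomicInteger;

public class OptionsLeakDemo {
    /** Counts currently open Options instances, to make the leak visible. */
    public static final AtomicInteger OPEN = new AtomicInteger();

    /** Stand-in for RocksDB's DBOptions. */
    public static class Options implements Closeable {
        public Options() { OPEN.incrementAndGet(); }
        @Override public void close() { OPEN.decrementAndGet(); }
    }

    public interface OptionsFactory {
        Options createDBOptions(Options current);
    }

    /**
     * Mirrors the fix suggested in the issue: if the factory hands back a
     * different instance, close the one we created before discarding it.
     */
    public static Options getDbOptions(OptionsFactory factory) {
        Options opt = new Options();
        if (factory != null) {
            Options replaced = factory.createDBOptions(opt);
            if (replaced != opt) {
                opt.close(); // avoid leaking the discarded instance
                opt = replaced;
            }
        }
        return opt;
    }
}
```

Without the `opt.close()` call, every factory that returns a fresh instance would leave the original native-resource wrapper unreleased.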
[jira] [Commented] (FLINK-10125) Unclosed ByteArrayDataOutputView in RocksDBMapState
[ https://issues.apache.org/jira/browse/FLINK-10125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602577#comment-16602577 ] Ted Yu commented on FLINK-10125: bq. we keep it under try/with/resources +1 > Unclosed ByteArrayDataOutputView in RocksDBMapState > --- > > Key: FLINK-10125 > URL: https://issues.apache.org/jira/browse/FLINK-10125 > Project: Flink > Issue Type: Bug >Reporter: Ted Yu >Assignee: vinoyang >Priority: Minor > > {code} > ByteArrayDataOutputView dov = new ByteArrayDataOutputView(1); > {code} > dov is used in a try block but it is not closed in case of Exception. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
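The try-with-resources approach endorsed in the comment above can be sketched like this. `DataOutputStream` stands in for the `ByteArrayDataOutputView` named in the issue, and `serialize` is a hypothetical helper, not Flink code:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class TryWithResourcesDemo {
    /**
     * Serializes a key/value pair. Try-with-resources guarantees the stream
     * is closed whether serialization succeeds or throws, which is the gap
     * the issue points out for a bare try block.
     */
    public static byte[] serialize(String key, long value) {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buffer)) {
            out.writeUTF(key);
            out.writeLong(value);
        } catch (IOException e) {
            // An in-memory stream cannot actually fail here.
            throw new RuntimeException(e);
        } // out.close() has run by this point on every path
        return buffer.toByteArray();
    }
}
```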
[jira] [Updated] (FLINK-9849) Create hbase connector for hbase version to 2.0.2
[ https://issues.apache.org/jira/browse/FLINK-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9849: -- Description: Currently hbase 1.4.3 is used for hbase connector. We should create connector for hbase 2.0.2 which would be released. Since there are API changes for the 2.0.2 release, a new hbase connector is desirable. was: Currently hbase 1.4.3 is used for hbase connector. We should create connector for hbase 2.0.1 which was recently released. Since there are API changes for the 2.0.1 release, a new hbase connector is desirable. > Create hbase connector for hbase version to 2.0.2 > - > > Key: FLINK-9849 > URL: https://issues.apache.org/jira/browse/FLINK-9849 > Project: Flink > Issue Type: Improvement >Reporter: Ted Yu >Assignee: zhangminglei >Priority: Major > Labels: pull-request-available > Attachments: hbase-2.1.0.dep > > > Currently hbase 1.4.3 is used for hbase connector. > We should create connector for hbase 2.0.2 which would be released. > Since there are API changes for the 2.0.2 release, a new hbase connector is > desirable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9849) Create hbase connector for hbase version to 2.0.2
[ https://issues.apache.org/jira/browse/FLINK-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9849: -- Summary: Create hbase connector for hbase version to 2.0.2 (was: Create hbase connector for hbase version to 2.0.1) > Create hbase connector for hbase version to 2.0.2 > - > > Key: FLINK-9849 > URL: https://issues.apache.org/jira/browse/FLINK-9849 > Project: Flink > Issue Type: Improvement >Reporter: Ted Yu >Assignee: zhangminglei >Priority: Major > Labels: pull-request-available > Attachments: hbase-2.1.0.dep > > > Currently hbase 1.4.3 is used for hbase connector. > We should create connector for hbase 2.0.1 which was recently released. > Since there are API changes for the 2.0.1 release, a new hbase connector is > desirable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (FLINK-7795) Utilize error-prone to discover common coding mistakes
[ https://issues.apache.org/jira/browse/FLINK-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345955#comment-16345955 ] Ted Yu edited comment on FLINK-7795 at 9/1/18 3:12 PM: --- error-prone has JDK 8 dependency. was (Author: yuzhih...@gmail.com): error-prone has JDK 8 dependency . > Utilize error-prone to discover common coding mistakes > -- > > Key: FLINK-7795 > URL: https://issues.apache.org/jira/browse/FLINK-7795 > Project: Flink > Issue Type: Improvement > Components: Build System >Reporter: Ted Yu >Priority: Major > > http://errorprone.info/ is a tool which detects common coding mistakes. > We should incorporate into Flink build process. > Here are the dependencies: > {code} > > com.google.errorprone > error_prone_annotation > ${error-prone.version} > provided > > > > com.google.auto.service > auto-service > 1.0-rc3 > true > > > com.google.errorprone > error_prone_check_api > ${error-prone.version} > provided > > > com.google.code.findbugs > jsr305 > > > > > com.google.errorprone > javac > 9-dev-r4023-3 > provided > > > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (FLINK-6105) Properly handle InterruptedException in HadoopInputFormatBase
[ https://issues.apache.org/jira/browse/FLINK-6105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307281#comment-16307281 ] Ted Yu edited comment on FLINK-6105 at 8/31/18 9:45 AM: In flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/RollingSink.java : {code} try { Thread.sleep(500); } catch (InterruptedException e1) { // ignore it } {code} Interrupt status should be restored, or throw InterruptedIOException . was (Author: yuzhih...@gmail.com): In flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/RollingSink.java : {code} try { Thread.sleep(500); } catch (InterruptedException e1) { // ignore it } {code} Interrupt status should be restored, or throw InterruptedIOException . > Properly handle InterruptedException in HadoopInputFormatBase > - > > Key: FLINK-6105 > URL: https://issues.apache.org/jira/browse/FLINK-6105 > Project: Flink > Issue Type: Bug > Components: DataStream API >Reporter: Ted Yu >Assignee: zhangminglei >Priority: Major > > When catching InterruptedException, we should throw InterruptedIOException > instead of IOException. > The following example is from HadoopInputFormatBase : > {code} > try { > splits = this.mapreduceInputFormat.getSplits(jobContext); > } catch (InterruptedException e) { > throw new IOException("Could not get Splits.", e); > } > {code} > There may be other places where IOE is thrown. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
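The interrupt-restoration fix proposed in the comment above can be sketched as follows; `sleepQuietly` is a hypothetical helper illustrating the pattern, not the RollingSink code itself:

```java
public class InterruptDemo {
    /**
     * Sleeps for the given interval. If interrupted, it restores the
     * thread's interrupt status instead of swallowing it, so callers
     * further up the stack can still observe the interruption.
     */
    public static void sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            // Thread.sleep clears the interrupt flag before throwing;
            // re-set it rather than ignoring the interruption.
            Thread.currentThread().interrupt();
        }
    }
}
```

Catching `InterruptedException` and doing nothing silently erases the interruption, which can make a job unresponsive to cancellation; restoring the flag (or throwing `InterruptedIOException`, as the comment suggests) keeps the signal alive.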
[jira] [Comment Edited] (FLINK-9340) ScheduleOrUpdateConsumersTest may fail with Address already in use
[ https://issues.apache.org/jira/browse/FLINK-9340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16507191#comment-16507191 ] Ted Yu edited comment on FLINK-9340 at 8/31/18 9:45 AM: I wonder if it is easier to reproduce the error when running LegacyScheduleOrUpdateConsumersTest concurrently with this test. was (Author: yuzhih...@gmail.com): I wonder if it is easier to reproduce the error when running LegacyScheduleOrUpdateConsumersTest concurrently with this test . > ScheduleOrUpdateConsumersTest may fail with Address already in use > -- > > Key: FLINK-9340 > URL: https://issues.apache.org/jira/browse/FLINK-9340 > Project: Flink > Issue Type: Test >Reporter: Ted Yu >Priority: Minor > Labels: runtime > > When ScheduleOrUpdateConsumersTest is run in the test suite, I saw: > {code} > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 8.034 sec <<< > FAILURE! - in > org.apache.flink.runtime.jobmanager.scheduler.ScheduleOrUpdateConsumersTest > org.apache.flink.runtime.jobmanager.scheduler.ScheduleOrUpdateConsumersTest > Time elapsed: 8.034 sec <<< ERROR! 
> java.net.BindException: Address already in use > at sun.nio.ch.Net.bind0(Native Method) > at sun.nio.ch.Net.bind(Net.java:433) > at sun.nio.ch.Net.bind(Net.java:425) > at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) > at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) > at > org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125) > at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485) > at > org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1081) > at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:502) > at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:487) > at > org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:904) > at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198) > at > org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348) > at > org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357) > at > org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) > at > org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) > at > org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) > {code} > Seems there was address / port conflict. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
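One common way to sidestep the port conflict seen in this stack trace is to bind to port 0 and let the OS assign a free ephemeral port. The sketch below shows the technique in isolation; it is an assumption about a possible remedy, not a fix taken from the Flink test suite:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class EphemeralPortDemo {
    /**
     * Binds to port 0 so the operating system picks an unused ephemeral
     * port, avoiding "Address already in use" when tests run concurrently.
     */
    public static int bindToFreePort() {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        } catch (IOException e) {
            throw new RuntimeException("Could not bind to an ephemeral port", e);
        }
    }
}
```

Fixed port numbers baked into tests collide exactly in the scenario the comment describes: two test classes exercising the same port at the same time.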
[jira] [Comment Edited] (FLINK-9825) Upgrade checkstyle version to 8.6
[ https://issues.apache.org/jira/browse/FLINK-9825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575017#comment-16575017 ] Ted Yu edited comment on FLINK-9825 at 8/31/18 9:44 AM: Thanks, Dalong. was (Author: yuzhih...@gmail.com): Thanks, Dalong . > Upgrade checkstyle version to 8.6 > - > > Key: FLINK-9825 > URL: https://issues.apache.org/jira/browse/FLINK-9825 > Project: Flink > Issue Type: Improvement > Components: Build System >Reporter: Ted Yu >Assignee: dalongliu >Priority: Minor > > We should upgrade checkstyle version to 8.6+ so that we can use the "match > violation message to this regex" feature for suppression. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9363) Bump up the Jackson version
[ https://issues.apache.org/jira/browse/FLINK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9363: -- Description: CVE's for Jackson : CVE-2017-17485 CVE-2018-5968 CVE-2018-7489 We can upgrade to 2.9.5 was: CVE's for Jackson: CVE-2017-17485 CVE-2018-5968 CVE-2018-7489 We can upgrade to 2.9.5 > Bump up the Jackson version > --- > > Key: FLINK-9363 > URL: https://issues.apache.org/jira/browse/FLINK-9363 > Project: Flink > Issue Type: Improvement >Reporter: Ted Yu >Assignee: vinoyang >Priority: Major > Labels: security > > CVE's for Jackson : > CVE-2017-17485 > CVE-2018-5968 > CVE-2018-7489 > We can upgrade to 2.9.5 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9924) Upgrade zookeeper to 3.4.13
[ https://issues.apache.org/jira/browse/FLINK-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9924: -- Description: zookeeper 3.4.13 is being released. ZOOKEEPER-2959 fixes data loss when observer is used ZOOKEEPER-2184 allows ZooKeeper Java clients to work in dynamic IP (container / cloud) environment was: zookeeper 3.4.13 is being released. ZOOKEEPER-2959 fixes data loss when observer is used ZOOKEEPER-2184 allows ZooKeeper Java clients to work in dynamic IP (container / cloud) environment > Upgrade zookeeper to 3.4.13 > --- > > Key: FLINK-9924 > URL: https://issues.apache.org/jira/browse/FLINK-9924 > Project: Flink > Issue Type: Task >Reporter: Ted Yu >Assignee: vinoyang >Priority: Major > > zookeeper 3.4.13 is being released. > ZOOKEEPER-2959 fixes data loss when observer is used > ZOOKEEPER-2184 allows ZooKeeper Java clients to work in dynamic IP (container > / cloud) environment -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (FLINK-9150) Prepare for Java 10
[ https://issues.apache.org/jira/browse/FLINK-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16473198#comment-16473198 ] Ted Yu edited comment on FLINK-9150 at 8/28/18 3:29 AM: Similar error is encountered when building against jdk 11. was (Author: yuzhih...@gmail.com): Similar error is encountered when building against jdk 11 . > Prepare for Java 10 > --- > > Key: FLINK-9150 > URL: https://issues.apache.org/jira/browse/FLINK-9150 > Project: Flink > Issue Type: Task > Components: Build System >Reporter: Ted Yu >Priority: Major > > Java 9 is not a LTS release. > When compiling with Java 10, I see the following compilation error: > {code} > [ERROR] Failed to execute goal on project flink-shaded-hadoop2: Could not > resolve dependencies for project > org.apache.flink:flink-shaded-hadoop2:jar:1.6-SNAPSHOT: Could not find > artifact jdk.tools:jdk.tools:jar:1.6 at specified path > /a/jdk-10/../lib/tools.jar -> [Help 1] > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (FLINK-10228) Add metrics for netty direct memory consumption
Ted Yu created FLINK-10228: -- Summary: Add metrics for netty direct memory consumption Key: FLINK-10228 URL: https://issues.apache.org/jira/browse/FLINK-10228 Project: Flink Issue Type: Improvement Reporter: Ted Yu netty direct memory usage can be exposed via metrics so that operator can keep track of memory consumption. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (FLINK-6105) Properly handle InterruptedException in HadoopInputFormatBase
[ https://issues.apache.org/jira/browse/FLINK-6105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307281#comment-16307281 ] Ted Yu edited comment on FLINK-6105 at 8/22/18 12:02 AM: - In flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/RollingSink.java : {code} try { Thread.sleep(500); } catch (InterruptedException e1) { // ignore it } {code} Interrupt status should be restored, or throw InterruptedIOException . was (Author: yuzhih...@gmail.com): In flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/RollingSink.java : {code} try { Thread.sleep(500); } catch (InterruptedException e1) { // ignore it } {code} Interrupt status should be restored, or throw InterruptedIOException . > Properly handle InterruptedException in HadoopInputFormatBase > - > > Key: FLINK-6105 > URL: https://issues.apache.org/jira/browse/FLINK-6105 > Project: Flink > Issue Type: Bug > Components: DataStream API >Reporter: Ted Yu >Assignee: zhangminglei >Priority: Major > > When catching InterruptedException, we should throw InterruptedIOException > instead of IOException. > The following example is from HadoopInputFormatBase : > {code} > try { > splits = this.mapreduceInputFormat.getSplits(jobContext); > } catch (InterruptedException e) { > throw new IOException("Could not get Splits.", e); > } > {code} > There may be other places where IOE is thrown. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9849) Create hbase connector for hbase version to 2.0.1
[ https://issues.apache.org/jira/browse/FLINK-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9849: -- Description: Currently hbase 1.4.3 is used for hbase connector. We should create connector for hbase 2.0.1 which was recently released. Since there are API changes for the 2.0.1 release, a new hbase connector is desirable. was: Currently hbase 1.4.3 is used for hbase connector. We should create connector for hbase 2.0.1 which was recently released. > Create hbase connector for hbase version to 2.0.1 > - > > Key: FLINK-9849 > URL: https://issues.apache.org/jira/browse/FLINK-9849 > Project: Flink > Issue Type: Improvement >Reporter: Ted Yu >Assignee: zhangminglei >Priority: Major > Labels: pull-request-available > Attachments: hbase-2.1.0.dep > > > Currently hbase 1.4.3 is used for hbase connector. > We should create connector for hbase 2.0.1 which was recently released. > Since there are API changes for the 2.0.1 release, a new hbase connector is > desirable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9924) Upgrade zookeeper to 3.4.13
[ https://issues.apache.org/jira/browse/FLINK-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9924: -- Description: zookeeper 3.4.13 is being released. ZOOKEEPER-2959 fixes data loss when observer is used ZOOKEEPER-2184 allows ZooKeeper Java clients to work in dynamic IP (container / cloud) environment was: zookeeper 3.4.13 is being released. ZOOKEEPER-2959 fixes data loss when observer is used ZOOKEEPER-2184 allows ZooKeeper Java clients to work in dynamic IP (container / cloud) environment > Upgrade zookeeper to 3.4.13 > --- > > Key: FLINK-9924 > URL: https://issues.apache.org/jira/browse/FLINK-9924 > Project: Flink > Issue Type: Task >Reporter: Ted Yu >Assignee: vinoyang >Priority: Major > > zookeeper 3.4.13 is being released. > ZOOKEEPER-2959 fixes data loss when observer is used > ZOOKEEPER-2184 allows ZooKeeper Java clients to work in dynamic IP (container > / cloud) environment -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (FLINK-4534) Lack of synchronization in BucketingSink#restoreState()
[ https://issues.apache.org/jira/browse/FLINK-4534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16583158#comment-16583158 ] Ted Yu commented on FLINK-4534: --- Sounds good. > Lack of synchronization in BucketingSink#restoreState() > --- > > Key: FLINK-4534 > URL: https://issues.apache.org/jira/browse/FLINK-4534 > Project: Flink > Issue Type: Bug > Components: Streaming Connectors >Reporter: Ted Yu >Assignee: zhangminglei >Priority: Major > > Iteration over state.bucketStates is protected by synchronization in other > methods, except for the following in restoreState(): > {code} > for (BucketState bucketState : state.bucketStates.values()) { > {code} > and following in close(): > {code} > for (Map.Entry> entry : > state.bucketStates.entrySet()) { > closeCurrentPartFile(entry.getValue()); > {code} > w.r.t. bucketState.pendingFilesPerCheckpoint , there is similar issue > starting line 752: > {code} > Set pastCheckpointIds = > bucketState.pendingFilesPerCheckpoint.keySet(); > LOG.debug("Moving pending files to final location on restore."); > for (Long pastCheckpointId : pastCheckpointIds) { > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
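The synchronization gap described in FLINK-4534 (iteration guarded in some methods but not in `restoreState()` and `close()`) comes down to holding the same lock for iteration as for mutation. The class below is a hypothetical stand-in for the sink state, not BucketingSink itself:

```java
import java.util.HashMap;
import java.util.Map;

public class SynchronizedIterationDemo {
    private final Object lock = new Object();
    private final Map<String, Long> bucketStates = new HashMap<>();

    /** Mutation holds the lock... */
    public void put(String bucket, long offset) {
        synchronized (lock) {
            bucketStates.put(bucket, offset);
        }
    }

    /**
     * ...so iteration must hold the same lock, otherwise a concurrent put()
     * can trigger a ConcurrentModificationException or expose a torn view.
     * This mirrors the fix the issue asks for in restoreState() and close().
     */
    public long sumOffsets() {
        synchronized (lock) {
            long sum = 0;
            for (long offset : bucketStates.values()) {
                sum += offset;
            }
            return sum;
        }
    }
}
```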
[jira] [Comment Edited] (FLINK-9150) Prepare for Java 10
[ https://issues.apache.org/jira/browse/FLINK-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16473198#comment-16473198 ] Ted Yu edited comment on FLINK-9150 at 8/16/18 10:57 PM: - Similar error is encountered when building against jdk 11 . was (Author: yuzhih...@gmail.com): Similar error is encountered when building against jdk 11. > Prepare for Java 10 > --- > > Key: FLINK-9150 > URL: https://issues.apache.org/jira/browse/FLINK-9150 > Project: Flink > Issue Type: Task > Components: Build System >Reporter: Ted Yu >Priority: Major > > Java 9 is not a LTS release. > When compiling with Java 10, I see the following compilation error: > {code} > [ERROR] Failed to execute goal on project flink-shaded-hadoop2: Could not > resolve dependencies for project > org.apache.flink:flink-shaded-hadoop2:jar:1.6-SNAPSHOT: Could not find > artifact jdk.tools:jdk.tools:jar:1.6 at specified path > /a/jdk-10/../lib/tools.jar -> [Help 1] > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9363) Bump up the Jackson version
[ https://issues.apache.org/jira/browse/FLINK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9363: -- Description: CVE's for Jackson: CVE-2017-17485 CVE-2018-5968 CVE-2018-7489 We can upgrade to 2.9.5 was: CVE's for Jackson: CVE-2017-17485 CVE-2018-5968 CVE-2018-7489 We can upgrade to 2.9.5 > Bump up the Jackson version > --- > > Key: FLINK-9363 > URL: https://issues.apache.org/jira/browse/FLINK-9363 > Project: Flink > Issue Type: Improvement >Reporter: Ted Yu >Assignee: vinoyang >Priority: Major > Labels: security > > CVE's for Jackson: > CVE-2017-17485 > CVE-2018-5968 > CVE-2018-7489 > We can upgrade to 2.9.5 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (FLINK-9825) Upgrade checkstyle version to 8.6
[ https://issues.apache.org/jira/browse/FLINK-9825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575017#comment-16575017 ] Ted Yu edited comment on FLINK-9825 at 8/16/18 10:56 PM: - Thanks, Dalong . was (Author: yuzhih...@gmail.com): Thanks, Dalong. > Upgrade checkstyle version to 8.6 > - > > Key: FLINK-9825 > URL: https://issues.apache.org/jira/browse/FLINK-9825 > Project: Flink > Issue Type: Improvement > Components: Build System >Reporter: Ted Yu >Assignee: dalongliu >Priority: Minor > > We should upgrade checkstyle version to 8.6+ so that we can use the "match > violation message to this regex" feature for suppression. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (FLINK-8037) Missing cast in integer arithmetic in TransactionalIdsGenerator#generateIdsToAbort
[ https://issues.apache.org/jira/browse/FLINK-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16427664#comment-16427664 ] Ted Yu edited comment on FLINK-8037 at 8/16/18 7:35 AM: Please rebase PR. was (Author: yuzhih...@gmail.com): Please rebase PR . > Missing cast in integer arithmetic in > TransactionalIdsGenerator#generateIdsToAbort > -- > > Key: FLINK-8037 > URL: https://issues.apache.org/jira/browse/FLINK-8037 > Project: Flink > Issue Type: Bug >Reporter: Ted Yu >Assignee: Greg Hogan >Priority: Minor > Labels: kafka, kafka-connect > > {code} > public Set generateIdsToAbort() { > Set idsToAbort = new HashSet<>(); > for (int i = 0; i < safeScaleDownFactor; i++) { > idsToAbort.addAll(generateIdsToUse(i * poolSize * > totalNumberOfSubtasks)); > {code} > The operands are integers where generateIdsToUse() expects long parameter. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
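The missing-cast bug in FLINK-8037 is standard integer-widening trouble: the product `i * poolSize * totalNumberOfSubtasks` is computed entirely in 32-bit `int` arithmetic and only afterwards widened to the `long` parameter, so it can silently overflow. The sketch below demonstrates the problem and the one-cast fix with hypothetical method names:

```java
public class WideningDemo {
    /** Buggy: the whole product is evaluated as int and may overflow. */
    public static long startIdWrong(int subtaskIndex, int poolSize, int totalSubtasks) {
        return subtaskIndex * poolSize * totalSubtasks;
    }

    /**
     * Fixed: casting the first operand to long makes every subsequent
     * multiplication happen in 64-bit arithmetic.
     */
    public static long startId(int subtaskIndex, int poolSize, int totalSubtasks) {
        return (long) subtaskIndex * poolSize * totalSubtasks;
    }
}
```

With `subtaskIndex = 3` and both other factors at 100,000, the correct product is 3,000,000,0000 (3e10), which exceeds `Integer.MAX_VALUE`, so the uncast version wraps around to a wrong (negative) value.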
[jira] [Comment Edited] (FLINK-8554) Upgrade AWS SDK
[ https://issues.apache.org/jira/browse/FLINK-8554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16507192#comment-16507192 ] Ted Yu edited comment on FLINK-8554 at 8/16/18 7:35 AM: Or use this JIRA for the next upgrade of AWS SDK. was (Author: yuzhih...@gmail.com): Or use this JIRA for the next upgrade of AWS SDK . > Upgrade AWS SDK > --- > > Key: FLINK-8554 > URL: https://issues.apache.org/jira/browse/FLINK-8554 > Project: Flink > Issue Type: Improvement > Components: Build System >Reporter: Ted Yu >Assignee: vinoyang >Priority: Minor > > AWS SDK 1.11.271 fixes a lot of bugs. > One of which would exhibit the following: > {code} > Caused by: java.lang.NullPointerException > at com.amazonaws.metrics.AwsSdkMetrics.getRegion(AwsSdkMetrics.java:729) > at com.amazonaws.metrics.MetricAdmin.getRegion(MetricAdmin.java:67) > at sun.reflect.GeneratedMethodAccessor132.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (FLINK-7642) Upgrade maven surefire plugin to 2.21.0
[ https://issues.apache.org/jira/browse/FLINK-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433258#comment-16433258 ] Ted Yu edited comment on FLINK-7642 at 8/15/18 10:16 PM: - SUREFIRE-1439 is in 2.21.0 which is needed for compiling with Java 10. was (Author: yuzhih...@gmail.com): SUREFIRE-1439 is in 2.21.0 which is needed for compiling with Java 10 . > Upgrade maven surefire plugin to 2.21.0 > --- > > Key: FLINK-7642 > URL: https://issues.apache.org/jira/browse/FLINK-7642 > Project: Flink > Issue Type: Improvement > Components: Build System >Reporter: Ted Yu >Assignee: vinoyang >Priority: Major > > Surefire 2.19 release introduced more useful test filters which would let us > run a subset of the test. > This issue is for upgrading maven surefire plugin to 2.21.0 which contains > SUREFIRE-1422 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (FLINK-10125) Unclosed ByteArrayDataOutputView in RocksDBMapState
Ted Yu created FLINK-10125: -- Summary: Unclosed ByteArrayDataOutputView in RocksDBMapState Key: FLINK-10125 URL: https://issues.apache.org/jira/browse/FLINK-10125 Project: Flink Issue Type: Bug Reporter: Ted Yu {code} ByteArrayDataOutputView dov = new ByteArrayDataOutputView(1); {code} dov is used in a try block but it is not closed in case of Exception. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (FLINK-9825) Upgrade checkstyle version to 8.6
[ https://issues.apache.org/jira/browse/FLINK-9825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575017#comment-16575017 ] Ted Yu commented on FLINK-9825: --- Thanks, Dalong. > Upgrade checkstyle version to 8.6 > - > > Key: FLINK-9825 > URL: https://issues.apache.org/jira/browse/FLINK-9825 > Project: Flink > Issue Type: Improvement > Components: Build System >Reporter: Ted Yu >Assignee: dalongliu >Priority: Minor > > We should upgrade checkstyle version to 8.6+ so that we can use the "match > violation message to this regex" feature for suppression. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9849) Create hbase connector for hbase version to 2.0.1
[ https://issues.apache.org/jira/browse/FLINK-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9849: -- Description: Currently hbase 1.4.3 is used for hbase connector. We should create connector for hbase 2.0.1 which was recently released. was: Currently hbase 1.4.3 is used for hbase connector. We should upgrade to 2.0.1 which was recently released. > Create hbase connector for hbase version to 2.0.1 > - > > Key: FLINK-9849 > URL: https://issues.apache.org/jira/browse/FLINK-9849 > Project: Flink > Issue Type: Improvement >Reporter: Ted Yu >Assignee: zhangminglei >Priority: Major > Labels: pull-request-available > Attachments: hbase-2.1.0.dep > > > Currently hbase 1.4.3 is used for hbase connector. > We should create connector for hbase 2.0.1 which was recently released. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9924) Upgrade zookeeper to 3.4.13
[ https://issues.apache.org/jira/browse/FLINK-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9924: -- Description: zookeeper 3.4.13 is being released. ZOOKEEPER-2959 fixes data loss when observer is used ZOOKEEPER-2184 allows ZooKeeper Java clients to work in dynamic IP (container / cloud) environment was: zookeeper 3.4.13 is being released. ZOOKEEPER-2959 fixes data loss when observer is used ZOOKEEPER-2184 allows ZooKeeper Java clients to work in dynamic IP (container / cloud) environment > Upgrade zookeeper to 3.4.13 > --- > > Key: FLINK-9924 > URL: https://issues.apache.org/jira/browse/FLINK-9924 > Project: Flink > Issue Type: Task >Reporter: Ted Yu >Assignee: vinoyang >Priority: Major > > zookeeper 3.4.13 is being released. > ZOOKEEPER-2959 fixes data loss when observer is used > ZOOKEEPER-2184 allows ZooKeeper Java clients to work in dynamic IP (container > / cloud) > environment -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9824) Support IPv6 literal
[ https://issues.apache.org/jira/browse/FLINK-9824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9824: -- Description: Currently we use colon as separator when parsing host and port. We should support the usage of IPv6 literals in parsing . was: Currently we use colon as separator when parsing host and port. We should support the usage of IPv6 literals in parsing. > Support IPv6 literal > > > Key: FLINK-9824 > URL: https://issues.apache.org/jira/browse/FLINK-9824 > Project: Flink > Issue Type: Bug > Components: Network >Reporter: Ted Yu >Assignee: vinoyang >Priority: Minor > > Currently we use colon as separator when parsing host and port. > We should support the usage of IPv6 literals in parsing . -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (FLINK-6105) Properly handle InterruptedException in HadoopInputFormatBase
[ https://issues.apache.org/jira/browse/FLINK-6105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307281#comment-16307281 ] Ted Yu edited comment on FLINK-6105 at 8/7/18 10:33 PM: In flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/RollingSink.java : {code} try { Thread.sleep(500); } catch (InterruptedException e1) { // ignore it } {code} Interrupt status should be restored, or throw InterruptedIOException . was (Author: yuzhih...@gmail.com): In flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/RollingSink.java : {code} try { Thread.sleep(500); } catch (InterruptedException e1) { // ignore it } {code} Interrupt status should be restored, or throw InterruptedIOException . > Properly handle InterruptedException in HadoopInputFormatBase > - > > Key: FLINK-6105 > URL: https://issues.apache.org/jira/browse/FLINK-6105 > Project: Flink > Issue Type: Bug > Components: DataStream API >Reporter: Ted Yu >Assignee: zhangminglei >Priority: Major > > When catching InterruptedException, we should throw InterruptedIOException > instead of IOException. > The following example is from HadoopInputFormatBase : > {code} > try { > splits = this.mapreduceInputFormat.getSplits(jobContext); > } catch (InterruptedException e) { > throw new IOException("Could not get Splits.", e); > } > {code} > There may be other places where IOE is thrown. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
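The fix the comment asks for — restore the interrupt status instead of swallowing it — can be sketched as below. The method name is illustrative, not Flink's; the alternative mentioned in the issue (throwing `java.io.InterruptedIOException`) would replace the `return false`:

```java
public class InterruptSketch {
    // Sleeps briefly; on interrupt, restores the thread's interrupt flag
    // instead of silently ignoring it, so callers up the stack can still
    // observe the interruption.
    public static boolean pause(long millis) {
        try {
            Thread.sleep(millis);
            return true;
        } catch (InterruptedException e1) {
            Thread.currentThread().interrupt(); // restore interrupt status
            return false;
        }
    }
}
```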
[jira] [Comment Edited] (FLINK-7795) Utilize error-prone to discover common coding mistakes
[ https://issues.apache.org/jira/browse/FLINK-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345955#comment-16345955 ] Ted Yu edited comment on FLINK-7795 at 8/6/18 4:54 AM: --- error-prone has JDK 8 dependency . was (Author: yuzhih...@gmail.com): error-prone has JDK 8 dependency. > Utilize error-prone to discover common coding mistakes > -- > > Key: FLINK-7795 > URL: https://issues.apache.org/jira/browse/FLINK-7795 > Project: Flink > Issue Type: Improvement > Components: Build System >Reporter: Ted Yu >Priority: Major > > http://errorprone.info/ is a tool which detects common coding mistakes. > We should incorporate into Flink build process. > Here are the dependencies: > {code} > > com.google.errorprone > error_prone_annotation > ${error-prone.version} > provided > > > > com.google.auto.service > auto-service > 1.0-rc3 > true > > > com.google.errorprone > error_prone_check_api > ${error-prone.version} > provided > > > com.google.code.findbugs > jsr305 > > > > > com.google.errorprone > javac > 9-dev-r4023-3 > provided > > > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
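The dependency list quoted in the issue body lost its XML markup in the mail archive. A best-effort reconstruction of the pom.xml fragment — element names inferred from standard Maven conventions, values taken from the surviving text — might look like:

```xml
<dependency>
  <groupId>com.google.errorprone</groupId>
  <artifactId>error_prone_annotation</artifactId>
  <version>${error-prone.version}</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>com.google.auto.service</groupId>
  <artifactId>auto-service</artifactId>
  <version>1.0-rc3</version>
  <optional>true</optional>
</dependency>
<dependency>
  <groupId>com.google.errorprone</groupId>
  <artifactId>error_prone_check_api</artifactId>
  <version>${error-prone.version}</version>
  <scope>provided</scope>
  <exclusions>
    <exclusion>
      <groupId>com.google.code.findbugs</groupId>
      <artifactId>jsr305</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>com.google.errorprone</groupId>
  <artifactId>javac</artifactId>
  <version>9-dev-r4023-3</version>
  <scope>provided</scope>
</dependency>
```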
[jira] [Comment Edited] (FLINK-7642) Upgrade maven surefire plugin to 2.21.0
[ https://issues.apache.org/jira/browse/FLINK-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433258#comment-16433258 ] Ted Yu edited comment on FLINK-7642 at 8/2/18 8:28 PM: --- SUREFIRE-1439 is in 2.21.0 which is needed for compiling with Java 10 . was (Author: yuzhih...@gmail.com): SUREFIRE-1439 is in 2.21.0 which is needed for compiling with Java 10 > Upgrade maven surefire plugin to 2.21.0 > --- > > Key: FLINK-7642 > URL: https://issues.apache.org/jira/browse/FLINK-7642 > Project: Flink > Issue Type: Improvement > Components: Build System >Reporter: Ted Yu >Assignee: vinoyang >Priority: Major > > Surefire 2.19 release introduced more useful test filters which would let us > run a subset of the test. > This issue is for upgrading maven surefire plugin to 2.21.0 which contains > SUREFIRE-1422 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9363) Bump up the Jackson version
[ https://issues.apache.org/jira/browse/FLINK-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9363: -- Description: CVE's for Jackson: CVE-2017-17485 CVE-2018-5968 CVE-2018-7489 We can upgrade to 2.9.5 was: CVE's for Jackson: CVE-2017-17485 CVE-2018-5968 CVE-2018-7489 We can upgrade to 2.9.5 > Bump up the Jackson version > --- > > Key: FLINK-9363 > URL: https://issues.apache.org/jira/browse/FLINK-9363 > Project: Flink > Issue Type: Improvement >Reporter: Ted Yu >Assignee: vinoyang >Priority: Major > Labels: security > > CVE's for Jackson: > CVE-2017-17485 > CVE-2018-5968 > CVE-2018-7489 > We can upgrade to 2.9.5 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (FLINK-9340) ScheduleOrUpdateConsumersTest may fail with Address already in use
[ https://issues.apache.org/jira/browse/FLINK-9340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16507191#comment-16507191 ] Ted Yu edited comment on FLINK-9340 at 8/1/18 5:07 PM: --- I wonder if it is easier to reproduce the error when running LegacyScheduleOrUpdateConsumersTest concurrently with this test . was (Author: yuzhih...@gmail.com): I wonder if it is easier to reproduce the error when running LegacyScheduleOrUpdateConsumersTest concurrently with this test. > ScheduleOrUpdateConsumersTest may fail with Address already in use > -- > > Key: FLINK-9340 > URL: https://issues.apache.org/jira/browse/FLINK-9340 > Project: Flink > Issue Type: Test >Reporter: Ted Yu >Priority: Minor > Labels: runtime > > When ScheduleOrUpdateConsumersTest is run in the test suite, I saw: > {code} > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 8.034 sec <<< > FAILURE! - in > org.apache.flink.runtime.jobmanager.scheduler.ScheduleOrUpdateConsumersTest > org.apache.flink.runtime.jobmanager.scheduler.ScheduleOrUpdateConsumersTest > Time elapsed: 8.034 sec <<< ERROR! 
> java.net.BindException: Address already in use > at sun.nio.ch.Net.bind0(Native Method) > at sun.nio.ch.Net.bind(Net.java:433) > at sun.nio.ch.Net.bind(Net.java:425) > at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) > at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) > at > org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125) > at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485) > at > org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1081) > at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:502) > at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:487) > at > org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:904) > at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198) > at > org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348) > at > org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357) > at > org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) > at > org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) > at > org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) > {code} > Seems there was address / port conflict. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
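Port-conflict failures like the `BindException` above are commonly avoided in tests by binding to an ephemeral port (port 0) and reading back the OS-assigned port, rather than hard-coding one that a concurrently running test may already hold. A JDK-only sketch of the idea (not Flink's actual test setup):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class EphemeralPortSketch {
    // Asks the OS for any free port; parallel test runs cannot collide
    // on the same hard-coded address this way.
    public static int bindEphemeral() {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort(); // OS-assigned free port
        } catch (IOException e) {
            return -1;
        }
    }
}
```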
[jira] [Updated] (FLINK-8037) Missing cast in integer arithmetic in TransactionalIdsGenerator#generateIdsToAbort
[ https://issues.apache.org/jira/browse/FLINK-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-8037: -- Labels: kafka kafka-connect (was: kafka-connect) > Missing cast in integer arithmetic in > TransactionalIdsGenerator#generateIdsToAbort > -- > > Key: FLINK-8037 > URL: https://issues.apache.org/jira/browse/FLINK-8037 > Project: Flink > Issue Type: Bug >Reporter: Ted Yu >Assignee: Greg Hogan >Priority: Minor > Labels: kafka, kafka-connect > > {code} > public Set generateIdsToAbort() { > Set idsToAbort = new HashSet<>(); > for (int i = 0; i < safeScaleDownFactor; i++) { > idsToAbort.addAll(generateIdsToUse(i * poolSize * > totalNumberOfSubtasks)); > {code} > The operands are integers where generateIdsToUse() expects long parameter. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
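The bug pattern here is that `i * poolSize * totalNumberOfSubtasks` is evaluated entirely in `int` arithmetic and only then widened to the method's `long` parameter, so any overflow has already happened. Casting one operand to `long` first keeps the whole product in `long` arithmetic. A self-contained illustration (the names are generic, not Flink's actual fields):

```java
public class IntOverflowSketch {
    // int arithmetic wraps around BEFORE the widening to long happens
    public static long intProduct(int i, int poolSize, int subtasks) {
        return i * poolSize * subtasks; // may silently overflow int
    }

    // casting the first operand forces the entire product into long arithmetic
    public static long longProduct(int i, int poolSize, int subtasks) {
        return (long) i * poolSize * subtasks;
    }
}
```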
[jira] [Comment Edited] (FLINK-9150) Prepare for Java 10
[ https://issues.apache.org/jira/browse/FLINK-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16473198#comment-16473198 ] Ted Yu edited comment on FLINK-9150 at 7/31/18 1:49 AM: Similar error is encountered when building against jdk 11. was (Author: yuzhih...@gmail.com): Similar error is encountered when building against jdk 11 . > Prepare for Java 10 > --- > > Key: FLINK-9150 > URL: https://issues.apache.org/jira/browse/FLINK-9150 > Project: Flink > Issue Type: Task > Components: Build System >Reporter: Ted Yu >Priority: Major > > Java 9 is not a LTS release. > When compiling with Java 10, I see the following compilation error: > {code} > [ERROR] Failed to execute goal on project flink-shaded-hadoop2: Could not > resolve dependencies for project > org.apache.flink:flink-shaded-hadoop2:jar:1.6-SNAPSHOT: Could not find > artifact jdk.tools:jdk.tools:jar:1.6 at specified path > /a/jdk-10/../lib/tools.jar -> [Help 1] > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (FLINK-7588) Document RocksDB tuning for spinning disks
[ https://issues.apache.org/jira/browse/FLINK-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258309#comment-16258309 ] Ted Yu edited comment on FLINK-7588 at 7/29/18 1:33 AM: bq. Be careful about whether you have enough memory to keep all bloom filters Other than the above being tricky, the other guidelines are actionable. was (Author: yuzhih...@gmail.com): bq. Be careful about whether you have enough memory to keep all bloom filters Other than the above being tricky, the other guidelines are actionable . > Document RocksDB tuning for spinning disks > -- > > Key: FLINK-7588 > URL: https://issues.apache.org/jira/browse/FLINK-7588 > Project: Flink > Issue Type: Improvement > Components: Documentation >Reporter: Ted Yu >Priority: Major > Labels: performance > > In docs/ops/state/large_state_tuning.md , it was mentioned that: > bq. the default configuration is tailored towards SSDs and performs > suboptimal on spinning disks > We should add recommendation targeting spinning disks: > https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide#difference-of-spinning-disk -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (FLINK-9735) Potential resource leak in RocksDBStateBackend#getDbOptions
[ https://issues.apache.org/jira/browse/FLINK-9735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559975#comment-16559975 ] Ted Yu commented on FLINK-9735: --- Thanks, Vino. > Potential resource leak in RocksDBStateBackend#getDbOptions > --- > > Key: FLINK-9735 > URL: https://issues.apache.org/jira/browse/FLINK-9735 > Project: Flink > Issue Type: Bug >Reporter: Ted Yu >Assignee: vinoyang >Priority: Minor > > Here is related code: > {code} > if (optionsFactory != null) { > opt = optionsFactory.createDBOptions(opt); > } > {code} > opt, an DBOptions instance, should be closed before being rewritten. > getColumnOptions has similar issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
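The fix discussed here — close the original options object before adopting the factory's replacement — can be sketched with a generic `AutoCloseable` stand-in. RocksDB's `DBOptions` is likewise `AutoCloseable` (it wraps a native handle); the classes below are illustrative only, not the RocksDB API:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CloseBeforeReassignSketch {
    static final AtomicInteger openHandles = new AtomicInteger();

    // Stand-in for a native-resource-backed options object.
    static class Options implements AutoCloseable {
        Options() { openHandles.incrementAndGet(); }
        @Override public void close() { openHandles.decrementAndGet(); }
    }

    interface OptionsFactory {
        Options createOptions(Options defaults);
    }

    // Closes the default instance before overwriting the reference with the
    // factory's result; otherwise the replaced instance leaks.
    public static Options build(OptionsFactory factory) {
        Options opt = new Options();
        if (factory != null) {
            Options created = factory.createOptions(opt);
            if (created != opt) {
                opt.close(); // release the instance being replaced
                opt = created;
            }
        }
        return opt;
    }
}
```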
[jira] [Comment Edited] (FLINK-7795) Utilize error-prone to discover common coding mistakes
[ https://issues.apache.org/jira/browse/FLINK-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345955#comment-16345955 ] Ted Yu edited comment on FLINK-7795 at 7/27/18 4:34 PM: error-prone has JDK 8 dependency. was (Author: yuzhih...@gmail.com): error-prone has JDK 8 dependency . > Utilize error-prone to discover common coding mistakes > -- > > Key: FLINK-7795 > URL: https://issues.apache.org/jira/browse/FLINK-7795 > Project: Flink > Issue Type: Improvement > Components: Build System >Reporter: Ted Yu >Priority: Major > > http://errorprone.info/ is a tool which detects common coding mistakes. > We should incorporate into Flink build process. > Here are the dependencies: > {code} > > com.google.errorprone > error_prone_annotation > ${error-prone.version} > provided > > > > com.google.auto.service > auto-service > 1.0-rc3 > true > > > com.google.errorprone > error_prone_check_api > ${error-prone.version} > provided > > > com.google.code.findbugs > jsr305 > > > > > com.google.errorprone > javac > 9-dev-r4023-3 > provided > > > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9824) Support IPv6 literal
[ https://issues.apache.org/jira/browse/FLINK-9824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9824: -- Component/s: Network > Support IPv6 literal > > > Key: FLINK-9824 > URL: https://issues.apache.org/jira/browse/FLINK-9824 > Project: Flink > Issue Type: Bug > Components: Network >Reporter: Ted Yu >Assignee: vinoyang >Priority: Minor > > Currently we use colon as separator when parsing host and port. > We should support the usage of IPv6 literals in parsing. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9849) Create hbase connector for hbase version to 2.0.1
[ https://issues.apache.org/jira/browse/FLINK-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9849: -- Summary: Create hbase connector for hbase version to 2.0.1 (was: Upgrade hbase version to 2.0.1 for hbase connector) > Create hbase connector for hbase version to 2.0.1 > - > > Key: FLINK-9849 > URL: https://issues.apache.org/jira/browse/FLINK-9849 > Project: Flink > Issue Type: Improvement >Reporter: Ted Yu >Assignee: zhangminglei >Priority: Major > Labels: pull-request-available > Attachments: hbase-2.1.0.dep > > > Currently hbase 1.4.3 is used for hbase connector. > We should upgrade to 2.0.1 which was recently released. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9825) Upgrade checkstyle version to 8.6
[ https://issues.apache.org/jira/browse/FLINK-9825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9825: -- Component/s: Build System > Upgrade checkstyle version to 8.6 > - > > Key: FLINK-9825 > URL: https://issues.apache.org/jira/browse/FLINK-9825 > Project: Flink > Issue Type: Improvement > Components: Build System >Reporter: Ted Yu >Assignee: zhangminglei >Priority: Minor > > We should upgrade checkstyle version to 8.6+ so that we can use the "match > violation message to this regex" feature for suppression. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
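The 8.6 feature referenced in this issue is, as I understand it, the `message` attribute on suppression entries, which suppresses only violations whose message matches a regex. A hedged sketch of a suppressions.xml using it — the check name and pattern below are purely illustrative:

```xml
<suppressions>
  <!-- suppress only violations whose message matches this regex -->
  <suppress checks="JavadocMethod" message=".*hypothetical pattern.*"/>
</suppressions>
```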
[jira] [Comment Edited] (FLINK-7525) Add config option to disable Cancel functionality on UI
[ https://issues.apache.org/jira/browse/FLINK-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16441630#comment-16441630 ] Ted Yu edited comment on FLINK-7525 at 7/26/18 1:21 AM: FLINK-4319 has been resolved. was (Author: yuzhih...@gmail.com): Hopefully FLIP-6 would be released soon . > Add config option to disable Cancel functionality on UI > --- > > Key: FLINK-7525 > URL: https://issues.apache.org/jira/browse/FLINK-7525 > Project: Flink > Issue Type: Improvement > Components: Web Client, Webfrontend >Reporter: Ted Yu >Priority: Major > > In this email thread > http://search-hadoop.com/m/Flink/VkLeQlf0QOnc7YA?subj=Security+Control+of+running+Flink+Jobs+on+Flink+UI > , Raja was asking for a way to control how users cancel Job(s). > Robert proposed adding a config option which disables the Cancel > functionality. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (FLINK-6105) Properly handle InterruptedException in HadoopInputFormatBase
[ https://issues.apache.org/jira/browse/FLINK-6105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307281#comment-16307281 ] Ted Yu edited comment on FLINK-6105 at 7/26/18 1:20 AM: In flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/RollingSink.java : {code} try { Thread.sleep(500); } catch (InterruptedException e1) { // ignore it } {code} Interrupt status should be restored, or throw InterruptedIOException . was (Author: yuzhih...@gmail.com): In flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/RollingSink.java : {code} try { Thread.sleep(500); } catch (InterruptedException e1) { // ignore it } {code} Interrupt status should be restored, or throw InterruptedIOException . > Properly handle InterruptedException in HadoopInputFormatBase > - > > Key: FLINK-6105 > URL: https://issues.apache.org/jira/browse/FLINK-6105 > Project: Flink > Issue Type: Bug > Components: DataStream API >Reporter: Ted Yu >Assignee: zhangminglei >Priority: Major > > When catching InterruptedException, we should throw InterruptedIOException > instead of IOException. > The following example is from HadoopInputFormatBase : > {code} > try { > splits = this.mapreduceInputFormat.getSplits(jobContext); > } catch (InterruptedException e) { > throw new IOException("Could not get Splits.", e); > } > {code} > There may be other places where IOE is thrown. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (FLINK-9924) Upgrade zookeeper to 3.4.13
Ted Yu created FLINK-9924: - Summary: Upgrade zookeeper to 3.4.13 Key: FLINK-9924 URL: https://issues.apache.org/jira/browse/FLINK-9924 Project: Flink Issue Type: Task Reporter: Ted Yu zookeeper 3.4.13 is being released. ZOOKEEPER-2959 fixes data loss when observer is used ZOOKEEPER-2184 allows ZooKeeper Java clients to work in dynamic IP (container / cloud) environment -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9048) LocalFlinkMiniClusterITCase#testLocalFlinkMiniClusterWithMultipleTaskManagers sometimes fails
[ https://issues.apache.org/jira/browse/FLINK-9048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9048: -- Description: As of commit e0bc37bef69f5376d03214578e9b95816add661b, I got the following : {code} testLocalFlinkMiniClusterWithMultipleTaskManagers(org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase) Time elapsed: 41.681 sec <<< FAILURE! java.lang.AssertionError: Thread Thread[ForkJoinPool.commonPool-worker-25,5,main] was started by the mini cluster, but not shut down at org.junit.Assert.fail(Assert.java:88) at org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase.testLocalFlinkMiniClusterWithMultipleTaskManagers(LocalFlinkMiniClusterITCase.java:174) {code} was: As of commit e0bc37bef69f5376d03214578e9b95816add661b, I got the following : {code} testLocalFlinkMiniClusterWithMultipleTaskManagers(org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase) Time elapsed: 41.681 sec <<< FAILURE! java.lang.AssertionError: Thread Thread[ForkJoinPool.commonPool-worker-25,5,main] was started by the mini cluster, but not shut down at org.junit.Assert.fail(Assert.java:88) at org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase.testLocalFlinkMiniClusterWithMultipleTaskManagers(LocalFlinkMiniClusterITCase.java:174) {code} > LocalFlinkMiniClusterITCase#testLocalFlinkMiniClusterWithMultipleTaskManagers > sometimes fails > - > > Key: FLINK-9048 > URL: https://issues.apache.org/jira/browse/FLINK-9048 > Project: Flink > Issue Type: Test >Reporter: Ted Yu >Priority: Minor > Labels: local-job-runner > > As of commit e0bc37bef69f5376d03214578e9b95816add661b, I got the following : > {code} > testLocalFlinkMiniClusterWithMultipleTaskManagers(org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase) > Time elapsed: 41.681 sec <<< FAILURE! 
> java.lang.AssertionError: Thread > Thread[ForkJoinPool.commonPool-worker-25,5,main] was started by the mini > cluster, but not shut down > at org.junit.Assert.fail(Assert.java:88) > at > org.apache.flink.test.runtime.minicluster.LocalFlinkMiniClusterITCase.testLocalFlinkMiniClusterWithMultipleTaskManagers(LocalFlinkMiniClusterITCase.java:174) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (FLINK-7795) Utilize error-prone to discover common coding mistakes
[ https://issues.apache.org/jira/browse/FLINK-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345955#comment-16345955 ] Ted Yu edited comment on FLINK-7795 at 7/21/18 9:26 PM: error-prone has JDK 8 dependency . was (Author: yuzhih...@gmail.com): error-prone has JDK 8 dependency. > Utilize error-prone to discover common coding mistakes > -- > > Key: FLINK-7795 > URL: https://issues.apache.org/jira/browse/FLINK-7795 > Project: Flink > Issue Type: Improvement > Components: Build System >Reporter: Ted Yu >Priority: Major > > http://errorprone.info/ is a tool which detects common coding mistakes. > We should incorporate into Flink build process. > Here are the dependencies: > {code} > > com.google.errorprone > error_prone_annotation > ${error-prone.version} > provided > > > > com.google.auto.service > auto-service > 1.0-rc3 > true > > > com.google.errorprone > error_prone_check_api > ${error-prone.version} > provided > > > com.google.code.findbugs > jsr305 > > > > > com.google.errorprone > javac > 9-dev-r4023-3 > provided > > > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (FLINK-9150) Prepare for Java 10
[ https://issues.apache.org/jira/browse/FLINK-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16473198#comment-16473198 ] Ted Yu edited comment on FLINK-9150 at 7/21/18 9:25 PM: Similar error is encountered when building against jdk 11 . was (Author: yuzhih...@gmail.com): Similar error is encountered when building against jdk 11. > Prepare for Java 10 > --- > > Key: FLINK-9150 > URL: https://issues.apache.org/jira/browse/FLINK-9150 > Project: Flink > Issue Type: Task > Components: Build System >Reporter: Ted Yu >Priority: Major > > Java 9 is not a LTS release. > When compiling with Java 10, I see the following compilation error: > {code} > [ERROR] Failed to execute goal on project flink-shaded-hadoop2: Could not > resolve dependencies for project > org.apache.flink:flink-shaded-hadoop2:jar:1.6-SNAPSHOT: Could not find > artifact jdk.tools:jdk.tools:jar:1.6 at specified path > /a/jdk-10/../lib/tools.jar -> [Help 1] > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (FLINK-9849) Upgrade hbase version to 2.0.1 for hbase connector
[ https://issues.apache.org/jira/browse/FLINK-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16550141#comment-16550141 ] Ted Yu commented on FLINK-9849: --- I generated the dependency tree where I don't see SNAPSHOT . Here is some occurrence of glassfish dependency: {code} [INFO] +- org.glassfish:javax.el:jar:3.0.1-b08:compile [INFO] | | \- org.glassfish:javax.el:jar:3.0.1-b08:compile [INFO] | | \- org.glassfish:javax.el:jar:3.0.1-b08:compile [INFO] | | \- org.glassfish:javax.el:jar:3.0.1-b08:compile {code} > Upgrade hbase version to 2.0.1 for hbase connector > -- > > Key: FLINK-9849 > URL: https://issues.apache.org/jira/browse/FLINK-9849 > Project: Flink > Issue Type: Improvement >Reporter: Ted Yu >Assignee: zhangminglei >Priority: Major > Labels: pull-request-available > Attachments: hbase-2.1.0.dep > > > Currently hbase 1.4.3 is used for hbase connector. > We should upgrade to 2.0.1 which was recently released. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9849) Upgrade hbase version to 2.0.1 for hbase connector
[ https://issues.apache.org/jira/browse/FLINK-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9849: -- Attachment: hbase-2.1.0.dep > Upgrade hbase version to 2.0.1 for hbase connector > -- > > Key: FLINK-9849 > URL: https://issues.apache.org/jira/browse/FLINK-9849 > Project: Flink > Issue Type: Improvement >Reporter: Ted Yu >Assignee: zhangminglei >Priority: Major > Labels: pull-request-available > Attachments: hbase-2.1.0.dep > > > Currently hbase 1.4.3 is used for hbase connector. > We should upgrade to 2.0.1 which was recently released. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (FLINK-9236) Use Apache Parent POM 19
[ https://issues.apache.org/jira/browse/FLINK-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9236: -- Description: Flink is still using Apache Parent POM 18. Apache Parent POM 19 is out. This will also fix Javadoc generation with JDK 10+ was: Flink is still using Apache Parent POM 18. Apache Parent POM 19 is out. This will also fix Javadoc generation with JDK 10+ > Use Apache Parent POM 19 > > > Key: FLINK-9236 > URL: https://issues.apache.org/jira/browse/FLINK-9236 > Project: Flink > Issue Type: Improvement > Components: Build System >Reporter: Ted Yu >Assignee: Stephen Jason >Priority: Major > > Flink is still using Apache Parent POM 18. Apache Parent POM 19 is out. > This will also fix Javadoc generation with JDK 10+ -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (FLINK-9735) Potential resource leak in RocksDBStateBackend#getDbOptions
[ https://issues.apache.org/jira/browse/FLINK-9735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16540754#comment-16540754 ] Ted Yu edited comment on FLINK-9735 at 7/19/18 12:37 AM: - Short term, we should fix the leaked DBOptions instance by releasing it. was (Author: yuzhih...@gmail.com): Short term, we should fix the leaked DBOptions instance. > Potential resource leak in RocksDBStateBackend#getDbOptions > --- > > Key: FLINK-9735 > URL: https://issues.apache.org/jira/browse/FLINK-9735 > Project: Flink > Issue Type: Bug >Reporter: Ted Yu >Assignee: vinoyang >Priority: Minor > > Here is related code: > {code} > if (optionsFactory != null) { > opt = optionsFactory.createDBOptions(opt); > } > {code} > opt, an DBOptions instance, should be closed before being rewritten. > getColumnOptions has similar issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (FLINK-9880) Incorrect argument order calling BucketerContext#update
Ted Yu created FLINK-9880: - Summary: Incorrect argument order calling BucketerContext#update Key: FLINK-9880 URL: https://issues.apache.org/jira/browse/FLINK-9880 Project: Flink Issue Type: Bug Reporter: Ted Yu In StreamingFileSink.java : {code} bucketerContext.update(context.timestamp(), currentProcessingTime, context.currentWatermark()); {code} However, the method update is declared as : {code} void update(@Nullable Long elementTimestamp, long currentWatermark, long currentProcessingTime) { {code} The second and third parameters seem to be swapped. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
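What makes this class of bug easy to introduce is that the swapped arguments share the `long` type, so the call site compiles cleanly. A tiny illustration — the method below only mimics the shape of `update`, it is not Flink's code:

```java
public class ArgOrderSketch {
    // Mimics update(elementTimestamp, currentWatermark, currentProcessingTime):
    // the last two parameters are both long, so the compiler cannot flag
    // a call site that passes them in the wrong order.
    public static String describe(Long elementTimestamp, long watermark, long processingTime) {
        return "ts=" + elementTimestamp + " wm=" + watermark + " pt=" + processingTime;
    }
}
```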
[jira] [Created] (FLINK-9849) Upgrade hbase version to 2.0.1 for hbase connector
Ted Yu created FLINK-9849: - Summary: Upgrade hbase version to 2.0.1 for hbase connector Key: FLINK-9849 URL: https://issues.apache.org/jira/browse/FLINK-9849 Project: Flink Issue Type: Improvement Reporter: Ted Yu Currently hbase 1.4.3 is used for hbase connector. We should upgrade to 2.0.1 which was recently released. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (FLINK-9825) Upgrade checkstyle version to 8.6
Ted Yu created FLINK-9825: - Summary: Upgrade checkstyle version to 8.6 Key: FLINK-9825 URL: https://issues.apache.org/jira/browse/FLINK-9825 Project: Flink Issue Type: Improvement Reporter: Ted Yu We should upgrade checkstyle version to 8.6+ so that we can use the "match violation message to this regex" feature for suppression. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (FLINK-9824) Support IPv6 literal
Ted Yu created FLINK-9824: - Summary: Support IPv6 literal Key: FLINK-9824 URL: https://issues.apache.org/jira/browse/FLINK-9824 Project: Flink Issue Type: Bug Reporter: Ted Yu Currently we use a colon as the separator when parsing host and port, which breaks for IPv6 literals since they contain colons themselves. We should support IPv6 literals in parsing. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
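One common approach is to accept the bracketed IPv6 form from RFC 3986 (e.g. `[::1]:8081`) alongside plain `host:port`. A minimal sketch follows; the names here are illustrative, not Flink's actual parsing code.

```java
// Illustrative host/port splitter that accepts bracketed IPv6 literals
// (e.g. "[::1]:8081") alongside plain "host:port".
public class HostPortParser {

    static String[] parse(String address) {
        if (address.startsWith("[")) {
            // IPv6 literal: the host is everything inside the brackets
            int end = address.indexOf(']');
            if (end < 0 || end + 2 > address.length() || address.charAt(end + 1) != ':') {
                throw new IllegalArgumentException("Malformed IPv6 address: " + address);
            }
            return new String[] { address.substring(1, end), address.substring(end + 2) };
        }
        // IPv4 / hostname: split on the last colon
        int colon = address.lastIndexOf(':');
        if (colon < 0) {
            throw new IllegalArgumentException("Missing port in: " + address);
        }
        return new String[] { address.substring(0, colon), address.substring(colon + 1) };
    }

    public static void main(String[] args) {
        String[] v6 = parse("[2001:db8::1]:6123");
        System.out.println(v6[0] + " " + v6[1]); // 2001:db8::1 6123
        String[] v4 = parse("localhost:8081");
        System.out.println(v4[0] + " " + v4[1]); // localhost 8081
    }
}
```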
[jira] [Updated] (FLINK-9675) Avoid FileInputStream/FileOutputStream
[ https://issues.apache.org/jira/browse/FLINK-9675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9675: -- Description: They rely on finalizers (before Java 11), which create unnecessary GC load. The alternatives, Files.newInputStream, are as easy to use and don't have this issue. was:They rely on finalizers (before Java 11), which create unnecessary GC load. The alternatives, Files.newInputStream, are as easy to use and don't have this issue. > Avoid FileInputStream/FileOutputStream > -- > > Key: FLINK-9675 > URL: https://issues.apache.org/jira/browse/FLINK-9675 > Project: Flink > Issue Type: Improvement >Reporter: Ted Yu >Assignee: zhangminglei >Priority: Minor > Labels: filesystem > > They rely on finalizers (before Java 11), which create unnecessary GC load. > The alternatives, Files.newInputStream, are as easy to use and don't have > this issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
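The replacement the issue asks for is a drop-in change; a small self-contained round-trip sketch:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Files.newInputStream / Files.newOutputStream instead of
// FileInputStream / FileOutputStream, avoiding the finalizer-backed classes.
public class StreamDemo {

    static int roundTrip() throws IOException {
        Path tmp = Files.createTempFile("flink-demo", ".bin");
        try (OutputStream out = Files.newOutputStream(tmp)) { // not: new FileOutputStream(...)
            out.write(new byte[] {1, 2, 3});
        }
        try (InputStream in = Files.newInputStream(tmp)) {    // not: new FileInputStream(...)
            return in.read();
        } finally {
            Files.delete(tmp);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip()); // 1
    }
}
```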
[jira] [Commented] (FLINK-9735) Potential resource leak in RocksDBStateBackend#getDbOptions
[ https://issues.apache.org/jira/browse/FLINK-9735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16540754#comment-16540754 ] Ted Yu commented on FLINK-9735: --- Short term, we should fix the leaked DBOptions instance. > Potential resource leak in RocksDBStateBackend#getDbOptions > --- > > Key: FLINK-9735 > URL: https://issues.apache.org/jira/browse/FLINK-9735 > Project: Flink > Issue Type: Bug >Reporter: Ted Yu >Assignee: vinoyang >Priority: Minor > > Here is the related code: > {code} > if (optionsFactory != null) { > opt = optionsFactory.createDBOptions(opt); > } > {code} > opt, a DBOptions instance, should be closed before being overwritten. > getColumnOptions has a similar issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
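A minimal sketch of the short-term fix, with a plain AutoCloseable standing in for RocksDB's native DBOptions (the `Options` and `applyFactory` names are illustrative, not Flink's actual code): when the factory returns a different instance, the original one is closed instead of leaked.

```java
import java.util.function.UnaryOperator;

public class OptionsLeakFix {

    // Stand-in for RocksDB's DBOptions, which wraps a native resource.
    static class Options implements AutoCloseable {
        boolean closed;
        @Override
        public void close() { closed = true; }
    }

    static Options applyFactory(Options opt, UnaryOperator<Options> factory) {
        if (factory != null) {
            Options created = factory.apply(opt);
            if (created != opt) {
                opt.close(); // release the instance that would otherwise leak
            }
            return created;
        }
        return opt;
    }

    public static void main(String[] args) {
        Options original = new Options();
        Options replaced = applyFactory(original, o -> new Options());
        System.out.println(original.closed + " " + replaced.closed); // true false
    }
}
```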
[jira] [Comment Edited] (FLINK-6105) Properly handle InterruptedException in HadoopInputFormatBase
[ https://issues.apache.org/jira/browse/FLINK-6105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16307281#comment-16307281 ] Ted Yu edited comment on FLINK-6105 at 7/10/18 11:55 AM: - In flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/RollingSink.java : {code} try { Thread.sleep(500); } catch (InterruptedException e1) { // ignore it } {code} Interrupt status should be restored, or throw InterruptedIOException . was (Author: yuzhih...@gmail.com): In flink-connectors/flink-connector-filesystem/src/main/java/org/apache/flink/streaming/connectors/fs/RollingSink.java : {code} try { Thread.sleep(500); } catch (InterruptedException e1) { // ignore it } {code} Interrupt status should be restored, or throw InterruptedIOException . > Properly handle InterruptedException in HadoopInputFormatBase > - > > Key: FLINK-6105 > URL: https://issues.apache.org/jira/browse/FLINK-6105 > Project: Flink > Issue Type: Bug > Components: DataStream API >Reporter: Ted Yu >Assignee: zhangminglei >Priority: Major > > When catching InterruptedException, we should throw InterruptedIOException > instead of IOException. > The following example is from HadoopInputFormatBase : > {code} > try { > splits = this.mapreduceInputFormat.getSplits(jobContext); > } catch (InterruptedException e) { > throw new IOException("Could not get Splits.", e); > } > {code} > There may be other places where IOE is thrown. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
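The recommended handling from the comment, sketched minimally: restore the interrupt status rather than swallowing the exception (the alternative in I/O code being to throw InterruptedIOException).

```java
// Restore the interrupt status instead of swallowing InterruptedException.
public class InterruptDemo {

    static void sleepQuietly() {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e1) {
            // do NOT ignore: re-set the flag so callers can observe the interrupt
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        Thread.currentThread().interrupt(); // sleep() will now throw immediately
        sleepQuietly();
        System.out.println(Thread.currentThread().isInterrupted()); // true
    }
}
```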
[jira] [Updated] (FLINK-9340) ScheduleOrUpdateConsumersTest may fail with Address already in use
[ https://issues.apache.org/jira/browse/FLINK-9340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9340: -- Labels: runtime (was: ) > ScheduleOrUpdateConsumersTest may fail with Address already in use > -- > > Key: FLINK-9340 > URL: https://issues.apache.org/jira/browse/FLINK-9340 > Project: Flink > Issue Type: Test >Reporter: Ted Yu >Priority: Minor > Labels: runtime > > When ScheduleOrUpdateConsumersTest is run in the test suite, I saw: > {code} > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 8.034 sec <<< > FAILURE! - in > org.apache.flink.runtime.jobmanager.scheduler.ScheduleOrUpdateConsumersTest > org.apache.flink.runtime.jobmanager.scheduler.ScheduleOrUpdateConsumersTest > Time elapsed: 8.034 sec <<< ERROR! > java.net.BindException: Address already in use > at sun.nio.ch.Net.bind0(Native Method) > at sun.nio.ch.Net.bind(Net.java:433) > at sun.nio.ch.Net.bind(Net.java:425) > at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) > at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) > at > org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125) > at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485) > at > org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1081) > at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:502) > at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:487) > at > org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:904) > at > org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198) > at > 
org.apache.flink.shaded.netty4.io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348) > at > org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357) > at > org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) > at > org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) > at > org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) > {code} > Seems there was address / port conflict. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
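One common mitigation for this kind of flaky failure (a sketch, not necessarily how the test was ultimately fixed) is to bind to port 0 so the OS assigns a free ephemeral port, rather than a fixed port another test or a lingering TIME_WAIT socket may still hold:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class EphemeralPortDemo {

    static int bindEphemeral() throws IOException {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress(0)); // port 0 = let the OS choose a free port
            return ((InetSocketAddress) server.getLocalAddress()).getPort();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(bindEphemeral() > 0); // true
    }
}
```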
[jira] [Updated] (FLINK-9675) Avoid FileInputStream/FileOutputStream
[ https://issues.apache.org/jira/browse/FLINK-9675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated FLINK-9675: -- Labels: filesystem (was: ) > Avoid FileInputStream/FileOutputStream > -- > > Key: FLINK-9675 > URL: https://issues.apache.org/jira/browse/FLINK-9675 > Project: Flink > Issue Type: Improvement >Reporter: Ted Yu >Assignee: zhangminglei >Priority: Minor > Labels: filesystem > > They rely on finalizers (before Java 11), which create unnecessary GC load. > The alternatives, Files.newInputStream, are as easy to use and don't have > this issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (FLINK-9736) Potential null reference in KeyGroupPartitionedPriorityQueue#poll()
[ https://issues.apache.org/jira/browse/FLINK-9736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533106#comment-16533106 ] Ted Yu commented on FLINK-9736: --- bq. we never call `poll()` on it. What happens to the {{peek}} call when heapOfKeyGroupHeaps is empty ? > Potential null reference in KeyGroupPartitionedPriorityQueue#poll() > --- > > Key: FLINK-9736 > URL: https://issues.apache.org/jira/browse/FLINK-9736 > Project: Flink > Issue Type: Bug >Reporter: Ted Yu >Assignee: vinoyang >Priority: Minor > > {code} > final PQ headList = heapOfkeyGroupedHeaps.peek(); > final T head = headList.poll(); > {code} > {{peek}} call may return null. > The return value should be checked before de-referencing. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
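The guard the issue asks for can be sketched with plain queues standing in for the key-group heaps (illustrative names, not Flink's actual structures): check the result of `peek()` before dereferencing it.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class PollDemo {

    static Integer safePollHead(Queue<Queue<Integer>> heapOfHeaps) {
        Queue<Integer> headList = heapOfHeaps.peek();
        // peek() returns null when the outer structure is empty
        return headList == null ? null : headList.poll();
    }

    public static void main(String[] args) {
        Queue<Queue<Integer>> empty = new ArrayDeque<>();
        System.out.println(safePollHead(empty)); // null, instead of a NullPointerException

        Queue<Queue<Integer>> one = new ArrayDeque<>();
        Queue<Integer> inner = new ArrayDeque<>();
        inner.add(7);
        one.add(inner);
        System.out.println(safePollHead(one)); // 7
    }
}
```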
[jira] [Comment Edited] (FLINK-9735) Potential resource leak in RocksDBStateBackend#getDbOptions
[ https://issues.apache.org/jira/browse/FLINK-9735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533101#comment-16533101 ] Ted Yu edited comment on FLINK-9735 at 7/4/18 11:15 PM: One improvement is to differentiate the return type from the parameter type. e.g. pass DBOptionsBuilder instance which can be enhanced by the OptionsFactory. Before the method returns, {{build}} method is called on the builder. This way, there would be no ambiguity. was (Author: yuzhih...@gmail.com): One improvement is to differentiate the return type from the parameter type. e.g. pass DBOptionsBuilder which can be enhanced by the OptionsFactory. Before the method returns, {{build}} method is called on the builder. This way, there would be no ambiguity. > Potential resource leak in RocksDBStateBackend#getDbOptions > --- > > Key: FLINK-9735 > URL: https://issues.apache.org/jira/browse/FLINK-9735 > Project: Flink > Issue Type: Bug >Reporter: Ted Yu >Assignee: vinoyang >Priority: Minor > > Here is related code: > {code} > if (optionsFactory != null) { > opt = optionsFactory.createDBOptions(opt); > } > {code} > opt, an DBOptions instance, should be closed before being rewritten. > getColumnOptions has similar issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (FLINK-9735) Potential resource leak in RocksDBStateBackend#getDbOptions
[ https://issues.apache.org/jira/browse/FLINK-9735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533101#comment-16533101 ] Ted Yu commented on FLINK-9735: --- One improvement is to differentiate the return type from the parameter type. e.g. pass DBOptionsBuilder which can be enhanced by the OptionsFactory. Before the method returns, {{build}} method is called on the builder. This way, there would be no ambiguity. > Potential resource leak in RocksDBStateBackend#getDbOptions > --- > > Key: FLINK-9735 > URL: https://issues.apache.org/jira/browse/FLINK-9735 > Project: Flink > Issue Type: Bug >Reporter: Ted Yu >Assignee: vinoyang >Priority: Minor > > Here is related code: > {code} > if (optionsFactory != null) { > opt = optionsFactory.createDBOptions(opt); > } > {code} > opt, an DBOptions instance, should be closed before being rewritten. > getColumnOptions has similar issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
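The builder-based API suggested in the comment can be sketched as follows; `DBOptionsBuilder` and these method names are hypothetical, not RocksDB's actual API. The factory can only enhance the builder, and `build` is called exactly once before the method returns, so there is no ambiguity about which instance the caller owns.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

public class BuilderDemo {

    // Hypothetical builder; a String stands in for the built DBOptions.
    static class DBOptionsBuilder {
        final List<String> settings = new ArrayList<>();
        DBOptionsBuilder set(String setting) { settings.add(setting); return this; }
        String build() { return String.join(",", settings); }
    }

    static String getDbOptions(UnaryOperator<DBOptionsBuilder> factory) {
        DBOptionsBuilder builder = new DBOptionsBuilder().set("base");
        if (factory != null) {
            builder = factory.apply(builder); // factory can only enhance the builder
        }
        return builder.build(); // exactly one options object is created
    }

    public static void main(String[] args) {
        System.out.println(getDbOptions(b -> b.set("custom"))); // base,custom
    }
}
```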
[jira] [Commented] (FLINK-9735) Potential resource leak in RocksDBStateBackend#getDbOptions
[ https://issues.apache.org/jira/browse/FLINK-9735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532961#comment-16532961 ] Ted Yu commented on FLINK-9735: --- I currently don't see what is wrong with the code in {{RocksDBResource}} - it only cares about SSD optimization. Even if you fix it to align with javadoc, I wonder why opt is assigned to again - if new attributes are added on top of existing opt, there is no need to assign again with the same reference. > Potential resource leak in RocksDBStateBackend#getDbOptions > --- > > Key: FLINK-9735 > URL: https://issues.apache.org/jira/browse/FLINK-9735 > Project: Flink > Issue Type: Bug >Reporter: Ted Yu >Assignee: vinoyang >Priority: Minor > > Here is related code: > {code} > if (optionsFactory != null) { > opt = optionsFactory.createDBOptions(opt); > } > {code} > opt, an DBOptions instance, should be closed before being rewritten. > getColumnOptions has similar issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)