[jira] [Created] (FLINK-13952) PartitionableTableSink can not work with OverwritableTableSink
Jingsong Lee created FLINK-13952: Summary: PartitionableTableSink can not work with OverwritableTableSink Key: FLINK-13952 URL: https://issues.apache.org/jira/browse/FLINK-13952 Project: Flink Issue Type: Bug Components: Table SQL / Planner Reporter: Jingsong Lee Fix For: 1.10.0
{code:java}
tableSink match {
  case partitionableSink: PartitionableTableSink
      if partitionableSink.getPartitionFieldNames != null
        && partitionableSink.getPartitionFieldNames.nonEmpty =>
    partitionableSink.setStaticPartition(insertOptions.staticPartitions)
  case overwritableTableSink: OverwritableTableSink =>
    overwritableTableSink.setOverwrite(insertOptions.overwrite)
}
{code}
This code appears in TableEnvImpl and PlannerBase. Because only the first matching case of the match runs, setOverwrite will not be invoked when there are static partition columns. -- This message was sent by Atlassian Jira (v8.3.2#803003)
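The root cause is that a Scala `match` runs only the first case whose pattern and guard succeed, so a sink implementing both interfaces never reaches the second case. A minimal, framework-free Java sketch of the same pattern and its fix (hypothetical `Partitionable`/`Overwritable` stand-ins, not Flink's actual interfaces):

```java
// Minimal illustration of the FLINK-13952 pattern (hypothetical types,
// not Flink's real PartitionableTableSink/OverwritableTableSink).
interface Partitionable { void setStaticPartition(String p); }
interface Overwritable { void setOverwrite(boolean o); }

class BothSink implements Partitionable, Overwritable {
    String partition;
    Boolean overwrite;
    public void setStaticPartition(String p) { this.partition = p; }
    public void setOverwrite(boolean o) { this.overwrite = o; }
}

public class MatchDemo {
    // Buggy: mirrors the Scala match -- only the first matching branch runs.
    static void configureBuggy(Object sink) {
        if (sink instanceof Partitionable) {
            ((Partitionable) sink).setStaticPartition("dt=2019-09-03");
        } else if (sink instanceof Overwritable) {
            ((Overwritable) sink).setOverwrite(true);
        }
    }

    // Fixed: each capability is checked independently.
    static void configureFixed(Object sink) {
        if (sink instanceof Partitionable) {
            ((Partitionable) sink).setStaticPartition("dt=2019-09-03");
        }
        if (sink instanceof Overwritable) {
            ((Overwritable) sink).setOverwrite(true);
        }
    }

    public static void main(String[] args) {
        BothSink buggy = new BothSink();
        configureBuggy(buggy);
        System.out.println("buggy overwrite set: " + (buggy.overwrite != null));

        BothSink fixed = new BothSink();
        configureFixed(fixed);
        System.out.println("fixed overwrite set: " + (fixed.overwrite != null));
    }
}
```

Turning the else-if chain into independent `if` checks lets a sink that implements both capabilities receive both calls.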
[jira] [Assigned] (FLINK-13949) Delete deduplicating JobVertexDetailsInfo.VertexTaskDetail
[ https://issues.apache.org/jira/browse/FLINK-13949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Till Rohrmann reassigned FLINK-13949: - Assignee: lining > Delete deduplicating JobVertexDetailsInfo.VertexTaskDetail > -- > > Key: FLINK-13949 > URL: https://issues.apache.org/jira/browse/FLINK-13949 > Project: Flink > Issue Type: Improvement > Components: Runtime / REST >Reporter: lining >Assignee: lining >Priority: Major > > As there is SubtaskExecutionAttemptDetailsInfo for subtasks, we can use it > to replace JobVertexDetailsInfo.VertexTaskDetail.
[jira] [Commented] (FLINK-13949) Delete deduplicating JobVertexDetailsInfo.VertexTaskDetail
[ https://issues.apache.org/jira/browse/FLINK-13949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921982#comment-16921982 ] Till Rohrmann commented on FLINK-13949: --- [~Zentol] is this a backwards-compatible change? > Delete deduplicating JobVertexDetailsInfo.VertexTaskDetail > -- > > Key: FLINK-13949 > URL: https://issues.apache.org/jira/browse/FLINK-13949 > Project: Flink > Issue Type: Improvement > Components: Runtime / REST >Reporter: lining >Assignee: lining >Priority: Major > > As there is SubtaskExecutionAttemptDetailsInfo for subtasks, we can use it > to replace JobVertexDetailsInfo.VertexTaskDetail.
[GitHub] [flink] flinkbot edited a comment on issue #9580: [FLINK-13930][hive] Support Hive version 3.1.x
flinkbot edited a comment on issue #9580: [FLINK-13930][hive] Support Hive version 3.1.x URL: https://github.com/apache/flink/pull/9580#issuecomment-526707203 ## CI report: * 9fcdfdece0af746c7d88a42cb512b2c44c75039c : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125274599) * 334ade297027c3ed1d7ad7666e4b957206ea0c33 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125306507) * b8dc475b7766026ebda4f778209616357a42c98f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125307444) * 4cee29f4fd9335303f38e8b7e33fe98f75076c5c : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125792172) * 74c064258f761162f91d4ec8364053b74d2ce48e : UNKNOWN This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] TengHu commented on issue #8885: [FLINK-12855] [streaming-java][window-assigners] Add functionality that staggers panes on partitions to distribute workload.
TengHu commented on issue #8885: [FLINK-12855] [streaming-java][window-assigners] Add functionality that staggers panes on partitions to distribute workload. URL: https://github.com/apache/flink/pull/8885#issuecomment-527762679 Congrats on the 1.9 release, do you guys have time to take a look at this? @rmetzger @zentol @twalthr @StefanRRichter Thanks
[GitHub] [flink] flinkbot edited a comment on issue #9606: [FLINK-13677][docs-zh] Translate "Monitoring Back Pressure" page into Chinese
flinkbot edited a comment on issue #9606: [FLINK-13677][docs-zh] Translate "Monitoring Back Pressure" page into Chinese URL: https://github.com/apache/flink/pull/9606#issuecomment-527740326 ## CI report: * 17b141e4c01648d012fa722cc927803785e4204b : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125834960)
[GitHub] [flink] flinkbot edited a comment on issue #8885: [FLINK-12855] [streaming-java][window-assigners] Add functionality that staggers panes on partitions to distribute workload.
flinkbot edited a comment on issue #8885: [FLINK-12855] [streaming-java][window-assigners] Add functionality that staggers panes on partitions to distribute workload. URL: https://github.com/apache/flink/pull/8885#issuecomment-513164005 ## CI report: * 19a0f944f4c8b2177afe5b41df587b89daa0d008 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119766396)
[GitHub] [flink] flinkbot edited a comment on issue #9580: [FLINK-13930][hive] Support Hive version 3.1.x
flinkbot edited a comment on issue #9580: [FLINK-13930][hive] Support Hive version 3.1.x URL: https://github.com/apache/flink/pull/9580#issuecomment-526707203 ## CI report: * 9fcdfdece0af746c7d88a42cb512b2c44c75039c : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125274599) * 334ade297027c3ed1d7ad7666e4b957206ea0c33 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125306507) * b8dc475b7766026ebda4f778209616357a42c98f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125307444) * 4cee29f4fd9335303f38e8b7e33fe98f75076c5c : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125792172) * 74c064258f761162f91d4ec8364053b74d2ce48e : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/125840319)
[jira] [Updated] (FLINK-13902) Can not use index to convert FieldReferenceExpression to RexNode
[ https://issues.apache.org/jira/browse/FLINK-13902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingsong Lee updated FLINK-13902: - Parent: FLINK-13773 Issue Type: Sub-task (was: Bug) > Can not use index to convert FieldReferenceExpression to RexNode > > > Key: FLINK-13902 > URL: https://issues.apache.org/jira/browse/FLINK-13902 > Project: Flink > Issue Type: Sub-task > Components: Table SQL / Planner >Reporter: Jingsong Lee >Priority: Major > > Currently, we cannot use inputCount+inputIndex+FieldIndex to construct a Calcite rex input > reference. > See QueryOperationConverter.SingleRelVisitor.visit(AggregateQueryOperation). > Calcite shuffles the output order of groupings (see RelBuilder.aggregate, > which uses an ImmutableBitSet to store groupings), so the output field order is > changed too. This leads to the output field order of > AggregateOperationFactory differing from Calcite's output order.
[GitHub] [flink] flinkbot edited a comment on issue #9595: [FLINK-13586] Make ClosureCleaner.clean() backwards compatible with 1.8.0
flinkbot edited a comment on issue #9595: [FLINK-13586] Make ClosureCleaner.clean() backwards compatible with 1.8.0 URL: https://github.com/apache/flink/pull/9595#issuecomment-527065432 ## CI report: * adf5bc3cb159606d5ba0c659dfb5d0b68b814c3c : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125404540) * a08fa2a229214645b5e8b1ba9a260f427376987f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125567671) * 148827dece734ce314daf5114be1f72607db : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125588673) * 655c78f31b45ec3a96e53e269fca64b4682025ff : UNKNOWN
[jira] [Commented] (FLINK-13417) Bump Zookeeper to 3.5.5
[ https://issues.apache.org/jira/browse/FLINK-13417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922289#comment-16922289 ] Till Rohrmann commented on FLINK-13417: --- Ok, so you are saying that the upgrade from ZooKeeper 3.4 to 3.5 won't cause any incompatibilities. Only if we start using ZooKeeper 3.5 features such as the {{CreateMode.CONTAINER}} our users would need to upgrade to ZooKeeper 3.5 as well. Is this correct? > Bump Zookeeper to 3.5.5 > --- > > Key: FLINK-13417 > URL: https://issues.apache.org/jira/browse/FLINK-13417 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Affects Versions: 1.9.0 >Reporter: Konstantin Knauf >Priority: Blocker > Fix For: 1.10.0 > > > User might want to secure their Zookeeper connection via SSL. > This requires a Zookeeper version >= 3.5.1. We might as well try to bump it > to 3.5.5, which is the latest version.
[jira] [Assigned] (FLINK-13936) NOTICE-binary is outdated
[ https://issues.apache.org/jira/browse/FLINK-13936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler reassigned FLINK-13936: Assignee: Chesnay Schepler > NOTICE-binary is outdated > - > > Key: FLINK-13936 > URL: https://issues.apache.org/jira/browse/FLINK-13936 > Project: Flink > Issue Type: Bug > Components: Build System >Affects Versions: 1.10.0 >Reporter: Chesnay Schepler >Assignee: Chesnay Schepler >Priority: Blocker > Fix For: 1.10.0 > > > The NOTICE-binary wasn't updated for the click-event example, the state > processing API and changes to the table API packaging.
[jira] [Commented] (FLINK-13952) PartitionableTableSink can not work with OverwritableTableSink
[ https://issues.apache.org/jira/browse/FLINK-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922341#comment-16922341 ] Jingsong Lee commented on FLINK-13952: -- Feel free to fix it. I can review your PR. > PartitionableTableSink can not work with OverwritableTableSink > -- > > Key: FLINK-13952 > URL: https://issues.apache.org/jira/browse/FLINK-13952 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Reporter: Jingsong Lee >Priority: Major > Fix For: 1.10.0 > > > {code:java} > tableSink match { > case partitionableSink: PartitionableTableSink > if partitionableSink.getPartitionFieldNames != null > && partitionableSink.getPartitionFieldNames.nonEmpty => > partitionableSink.setStaticPartition(insertOptions.staticPartitions) > case overwritableTableSink: OverwritableTableSink => > overwritableTableSink.setOverwrite(insertOptions.overwrite) > {code} > Code in TableEnvImpl and PlannerBase > overwrite will not be invoked when there are static partition columns.
[GitHub] [flink] asfgit merged pull request #9595: [FLINK-13586] Make ClosureCleaner.clean() backwards compatible with 1.8.0
asfgit merged pull request #9595: [FLINK-13586] Make ClosureCleaner.clean() backwards compatible with 1.8.0 URL: https://github.com/apache/flink/pull/9595
[GitHub] [flink] TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#discussion_r320689758 ## File path: flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java ## @@ -103,55 +94,21 @@ * if that is not possible. * * @param flinkConfig The config used to obtain the job-manager's address, and used to configure the optimizer. -* -* @throws Exception we cannot create the high availability services */ - public ClusterClient(Configuration flinkConfig) throws Exception { - this( - flinkConfig, - HighAvailabilityServicesUtils.createHighAvailabilityServices( - flinkConfig, - Executors.directExecutor(), - HighAvailabilityServicesUtils.AddressResolution.TRY_ADDRESS_RESOLUTION), - false); - } - - /** -* Creates a instance that submits the programs to the JobManager defined in the -* configuration. This method will try to resolve the JobManager hostname and throw an exception -* if that is not possible. -* -* @param flinkConfig The config used to obtain the job-manager's address, and used to configure the optimizer. -* @param highAvailabilityServices HighAvailabilityServices to use for leader retrieval -* @param sharedHaServices true if the HighAvailabilityServices are shared and must not be shut down -*/ - public ClusterClient( - Configuration flinkConfig, - HighAvailabilityServices highAvailabilityServices, - boolean sharedHaServices) { + public ClusterClient(Configuration flinkConfig) { this.flinkConfig = Preconditions.checkNotNull(flinkConfig); this.compiler = new Optimizer(new DataStatistics(), new DefaultCostEstimator(), flinkConfig); - this.timeout = AkkaUtils.getClientTimeout(flinkConfig); - - this.highAvailabilityServices = Preconditions.checkNotNull(highAvailabilityServices); - this.sharedHaServices = sharedHaServices; } // // Startup & Shutdown // /** -* Shuts down the client. 
This stops the internal actor system and actors. +* Shuts down the client. This stops possible internal services. */ - public void shutdown() throws Exception { - synchronized (this) { - if (!sharedHaServices && highAvailabilityServices != null) { - highAvailabilityServices.close(); - } - } - } + public abstract void shutdown() throws Exception; Review comment: Makes sense; we can change it into ``` /** * User overridable hook to close the client, possibly closes internal services. */ public void close() throws Exception { } ```
[GitHub] [flink] TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#discussion_r320690369 ## File path: flink-clients/src/main/java/org/apache/flink/client/program/MiniClusterClient.java ## @@ -51,14 +51,13 @@ private final MiniCluster miniCluster; public MiniClusterClient(@Nonnull Configuration configuration, @Nonnull MiniCluster miniCluster) { - super(configuration, miniCluster.getHighAvailabilityServices(), true); - + super(configuration); this.miniCluster = miniCluster; } @Override public void shutdown() throws Exception { - super.shutdown(); + // no op Review comment: Let's do it in a follow-up issue that moves toward a `ClusterClient` interface (instead of an abstract class).
[GitHub] [flink] TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#discussion_r320690390 ## File path: flink-clients/src/main/java/org/apache/flink/client/program/MiniClusterClient.java ## @@ -51,14 +51,13 @@ private final MiniCluster miniCluster; public MiniClusterClient(@Nonnull Configuration configuration, @Nonnull MiniCluster miniCluster) { - super(configuration, miniCluster.getHighAvailabilityServices(), true); - + super(configuration); this.miniCluster = miniCluster; } @Override public void shutdown() throws Exception { - super.shutdown(); + // no op Review comment: Reverted.
[GitHub] [flink] TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#discussion_r320694943 ## File path: flink-clients/src/main/java/org/apache/flink/client/program/ClusterClient.java ## @@ -103,55 +94,21 @@ * if that is not possible. * * @param flinkConfig The config used to obtain the job-manager's address, and used to configure the optimizer. -* -* @throws Exception we cannot create the high availability services */ - public ClusterClient(Configuration flinkConfig) throws Exception { - this( - flinkConfig, - HighAvailabilityServicesUtils.createHighAvailabilityServices( - flinkConfig, - Executors.directExecutor(), - HighAvailabilityServicesUtils.AddressResolution.TRY_ADDRESS_RESOLUTION), - false); - } - - /** -* Creates a instance that submits the programs to the JobManager defined in the -* configuration. This method will try to resolve the JobManager hostname and throw an exception -* if that is not possible. -* -* @param flinkConfig The config used to obtain the job-manager's address, and used to configure the optimizer. -* @param highAvailabilityServices HighAvailabilityServices to use for leader retrieval -* @param sharedHaServices true if the HighAvailabilityServices are shared and must not be shut down -*/ - public ClusterClient( - Configuration flinkConfig, - HighAvailabilityServices highAvailabilityServices, - boolean sharedHaServices) { + public ClusterClient(Configuration flinkConfig) { this.flinkConfig = Preconditions.checkNotNull(flinkConfig); this.compiler = new Optimizer(new DataStatistics(), new DefaultCostEstimator(), flinkConfig); - this.timeout = AkkaUtils.getClientTimeout(flinkConfig); - - this.highAvailabilityServices = Preconditions.checkNotNull(highAvailabilityServices); - this.sharedHaServices = sharedHaServices; } // // Startup & Shutdown // /** -* Shuts down the client. 
This stops the internal actor system and actors. +* Shuts down the client. This stops possible internal services. */ - public void shutdown() throws Exception { - synchronized (this) { - if (!sharedHaServices && highAvailabilityServices != null) { - highAvailabilityServices.close(); - } - } - } + public abstract void shutdown() throws Exception; Review comment: Having investigated the usage, I'd like to introduce an empty impl in this pass and open a separate JIRA to rename `shutdown` to `close`, following the `AutoCloseable` convention. We won't lose any functionality then.
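The rename proposed in this thread could look roughly like the following sketch; the class shapes are simplified assumptions for illustration, not Flink's actual `ClusterClient`:

```java
// Hedged sketch of the review proposal: ClusterClient as an AutoCloseable
// with an empty, user-overridable close() hook. Names and shapes here are
// simplified assumptions, not Flink's real classes.
abstract class ClusterClient implements AutoCloseable {
    /** User overridable hook to close the client, possibly closing internal services. */
    @Override
    public void close() throws Exception {
        // no-op by default; subclasses that own resources override this
    }
}

class MiniClusterClient extends ClusterClient {
    // Shares its MiniCluster with the test harness, so nothing to close here.
}

public class CloseDemo {
    public static void main(String[] args) throws Exception {
        // Following the AutoCloseable convention makes try-with-resources work
        // uniformly for every client subtype.
        try (ClusterClient client = new MiniClusterClient()) {
            System.out.println("client in use");
        }
        System.out.println("client closed");
    }
}
```

The empty default keeps subclasses such as a MiniCluster-backed client from having to shut down services they do not own.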
[GitHub] [flink] flinkbot edited a comment on issue #9601: [FLINK-13894][web]Web Ui add log url for subtask of vertex
flinkbot edited a comment on issue #9601: [FLINK-13894][web]Web Ui add log url for subtask of vertex URL: https://github.com/apache/flink/pull/9601#issuecomment-527404637 ## CI report: * 13d895349390698404a375fb4362a41b736ab0c6 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125577621) * 5a1866be2d9765b0e941a531d5e9dda8616fecf4 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/125842787)
[jira] [Commented] (FLINK-13953) Facilitate enabling new Scheduler in MiniCluster Tests
[ https://issues.apache.org/jira/browse/FLINK-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922240#comment-16922240 ] Gary Yao commented on FLINK-13953: -- There exists a prototype by [~till.rohrmann] https://github.com/tillrohrmann/flink/tree/introduceSchedulerSwitch Another possibility that I can think of is to use JUnit's parameterized tests. > Facilitate enabling new Scheduler in MiniCluster Tests > -- > > Key: FLINK-13953 > URL: https://issues.apache.org/jira/browse/FLINK-13953 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Coordination, Tests >Reporter: Gary Yao >Assignee: Gary Yao >Priority: Major > > Currently, tests using the {{MiniCluster}} use the legacy scheduler by > default. Once the new scheduler is implemented, we should run tests with the > new scheduler enabled. However, it is not expected that all tests will pass > immediately. Therefore, it should be possible to enable the new scheduler for > a subset of tests. > *Acceptance Criteria* > * Tests using {{MiniCluster}} are run on a per-commit basis (on Travis) > against the new scheduler and also the legacy scheduler
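The parameterized-test idea boils down to running the same test body once per scheduler flag. A minimal, framework-free sketch (in real Flink tests this would be a JUnit `@Parameterized` runner feeding the MiniCluster `Configuration`; the flag values below are illustrative assumptions, not actual Flink option values):

```java
// Framework-free sketch of parameterizing MiniCluster tests over a scheduler
// flag. The values "legacy" and "ng" are illustrative assumptions.
public class SchedulerSwitchDemo {
    // Stand-in for a test body that receives the scheduler choice. A real
    // test would start a MiniCluster configured with this flag and submit a job.
    static boolean runTestWith(String scheduler) {
        System.out.println("ran test with scheduler=" + scheduler);
        return true;
    }

    public static void main(String[] args) {
        // Run every test body once per scheduler variant, per commit.
        for (String scheduler : new String[] {"legacy", "ng"}) {
            if (!runTestWith(scheduler)) {
                throw new AssertionError("test failed for scheduler " + scheduler);
            }
        }
        System.out.println("all scheduler variants passed");
    }
}
```

With JUnit's `Parameterized` runner, the loop becomes a `@Parameters` method returning the flag values and a constructor argument carrying the choice into each test instance, which also lets individual tests be excluded per scheduler while the new scheduler matures.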
[jira] [Updated] (FLINK-13946) Remove deactivated JobSession-related code.
[ https://issues.apache.org/jira/browse/FLINK-13946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-13946: --- Labels: pull-request-available (was: ) > Remove deactivated JobSession-related code. > --- > > Key: FLINK-13946 > URL: https://issues.apache.org/jira/browse/FLINK-13946 > Project: Flink > Issue Type: Improvement > Components: Client / Job Submission >Affects Versions: 1.9.0 >Reporter: Kostas Kloudas >Assignee: Kostas Kloudas >Priority: Major > Labels: pull-request-available > > This issue refers to removing the code related to job session as described in > [FLINK-2097|https://issues.apache.org/jira/browse/FLINK-2097]. The feature > is deactivated, as pointed by the comment > [here|https://github.com/apache/flink/blob/master/flink-java/src/main/java/org/apache/flink/api/java/ExecutionEnvironment.java#L285] > and it complicates the code paths related to job submission, namely the > lifecycle of the Remote and LocalExecutors.
[GitHub] [flink] flinkbot edited a comment on issue #9580: [FLINK-13930][hive] Support Hive version 3.1.x
flinkbot edited a comment on issue #9580: [FLINK-13930][hive] Support Hive version 3.1.x URL: https://github.com/apache/flink/pull/9580#issuecomment-526707203 ## CI report: * 9fcdfdece0af746c7d88a42cb512b2c44c75039c : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125274599) * 334ade297027c3ed1d7ad7666e4b957206ea0c33 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125306507) * b8dc475b7766026ebda4f778209616357a42c98f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125307444) * 4cee29f4fd9335303f38e8b7e33fe98f75076c5c : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125792172) * 74c064258f761162f91d4ec8364053b74d2ce48e : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125840319)
[GitHub] [flink] kl0u opened a new pull request #9607: [FLINK-13946] Remove job session related code from ExecutionEnvironment
kl0u opened a new pull request #9607: [FLINK-13946] Remove job session related code from ExecutionEnvironment URL: https://github.com/apache/flink/pull/9607 ## What is the purpose of the change This PR removes code related to JobSessions from the `ExecutionEnvironment` and the `PlanExecutor`s. This code was added in the context of [FLINK-2097](https://issues.apache.org/jira/browse/FLINK-2097) but it was never activated, as illustrated by the comment at [ExecutionEnvironment.java#L285](https://github.com/apache/flink/blob/master/flink-java/src/main/java/org/apache/flink/api/java/ExecutionEnvironment.java#L285). The work in this PR is part of the preparation for the upcoming re-design of the whole Client/Executor API. ## Brief change log The changes in the subclasses of the `ExecutionEnvironment` remove methods that were setting session-related parameters and reflect the simplification of the `PlanExecutor` _lifecycle_ explained below (for `Local` and `RemoteEnvironment`). The changes to the `PlanExecutors` have to do with the executor's lifecycle. Now the executor itself controls its lifecycle (`start()` and `stop()` are `private`) and we instantiate an executor for each call to `executePlan()`. This allows us to get rid of the reapers from the `Local` and `RemoteEnvironments` and the `lock` that protected concurrent access to the executor's state. The lifecycle is more explicit now and aligned with the current use of the `ExecutionEnvironment`. If in the future we choose to change this and decide to re-use execution environments, then we can add this functionality back, potentially under a different design/architecture. ## Verifying this change This change is a code cleanup so it is covered by existing tests.
## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (yes / **no**) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**) - The serializers: (yes / **no** / don't know) - The runtime per-record code paths (performance sensitive): (yes / **no** / don't know) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know) - The S3 file system connector: (yes / **no** / don't know) ## Documentation - Does this pull request introduce a new feature? (yes / **no**) - If yes, how is the feature documented? (**not applicable** / docs / JavaDocs / not documented) Please have a look at this one @aljoscha and @tillrohrmann.
[jira] [Commented] (FLINK-13940) S3RecoverableWriter causes job to get stuck in recovery
[ https://issues.apache.org/jira/browse/FLINK-13940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922250#comment-16922250 ] Jimmy Weibel Rasmussen commented on FLINK-13940: Thank you Kostas! Thanks for explaining the problems with the state backend solution. > S3RecoverableWriter causes job to get stuck in recovery > --- > > Key: FLINK-13940 > URL: https://issues.apache.org/jira/browse/FLINK-13940 > Project: Flink > Issue Type: Bug > Components: Connectors / FileSystem >Affects Versions: 1.8.0, 1.8.1, 1.9.0 >Reporter: Jimmy Weibel Rasmussen >Assignee: Kostas Kloudas >Priority: Major > Fix For: 1.10.0, 1.9.1 > > > > The cleaning up of tmp files in S3 introduced by this ticket/PR: > https://issues.apache.org/jira/browse/FLINK-10963 > is preventing the flink job from being able to recover under some > circumstances. > > This is what seems to be happening: > When the jobs tries to recover, it will call initializeState() on all > operators, which results in the Bucket.restoreInProgressFile method being > called. > This will download the part_tmp file mentioned in the checkpoint that we're > restoring from, and finally it will call fsWriter.cleanupRecoverableState > which deletes the part_tmp file in S3. > Now the open() method is called on all operators. If the open() call fails > for one of the operators (this might happen if the issue that caused the job > to fail and restart is still unresolved), the job will fail again and try to > restart from the same checkpoint as before. This time however, downloading > the part_tmp file mentioned in the checkpoint fails because it was deleted > during the last recover attempt. > The bug is critical because it results in data loss. > > > > I discovered the bug because I have a flink job with a RabbitMQ source and a > StreamingFileSink that writes to S3 (and therefore uses the > S3RecoverableWriter). 
> Occasionally I have some RabbitMQ connection issues which causes the job to > fail and restart, sometimes the first few restart attempts fail because > rabbitmq is unreachable when flink tries to reconnect. > > This is what I was seeing: > RabbitMQ goes down > Job fails because of a RabbitMQ ConsumerCancelledException > Job attempts to restart but fails with a Rabbitmq connection exception (x > number of times) > RabbitMQ is back up > Job attempts to restart but fails with a FileNotFoundException due to some > _part_tmp file missing in S3. > > The job will be unable to restart and the only option is to cancel and restart > the job (and lose all state)
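The failure sequence described above can be reproduced in miniature: if restoring from a checkpoint also deletes the in-progress file it just read, a second restore attempt from the same checkpoint must fail. A toy Java sketch (illustrative names only, not Flink's RecoverableWriter API):

```java
import java.util.HashMap;
import java.util.Map;

// Toy reproduction of the FLINK-13940 recovery ordering: restoring deletes
// the tmp object it just read, so a second restore from the SAME checkpoint
// cannot find it. All names are illustrative, not Flink's actual API.
public class RestoreDemo {
    static Map<String, String> s3 = new HashMap<>();

    static String restoreInProgressFile(String key) {
        String data = s3.get(key);
        if (data == null) {
            throw new IllegalStateException("FileNotFound: " + key);
        }
        // Mirrors cleanupRecoverableState(): the tmp object is deleted
        // as part of the restore itself.
        s3.remove(key);
        return data;
    }

    public static void main(String[] args) {
        s3.put("bucket/part-0.tmp", "in-progress data");

        // Attempt 1: restore succeeds, but open() then fails (source still down).
        restoreInProgressFile("bucket/part-0.tmp");
        System.out.println("attempt 1: restored, then open() failed");

        // Attempt 2: same checkpoint, but the tmp file is already gone.
        try {
            restoreInProgressFile("bucket/part-0.tmp");
        } catch (IllegalStateException e) {
            System.out.println("attempt 2: " + e.getMessage());
        }
    }
}
```

Deferring the cleanup until the restored job has actually resumed (or tying it to the next successful checkpoint) avoids losing the only copy of the in-progress data.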
[GitHub] [flink] flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file replication config for yarn configuration
flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file replication config for yarn configuration URL: https://github.com/apache/flink/pull/8303#issuecomment-511684151 ## CI report: * 6a7ca58b4a04f6dce250045e021702e67e82b893 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119421914) * 4d38a8df0d59734c4b2386689a2f17b9f2b44b12 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119441376) * 9c14836f8639e98d58cf7bb32e38b938b3843994 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119577044) * 76186776c5620598a19234245bbd05dfdfb1c62c : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/120113740) * 628ca7b316ad3968c90192a47a84dd01f26e2578 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/122381349) * d204a725ff3c8a046cbd1b84e34d9e3ae8aafeac : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/123620485) * 143efadbdb6c4681569d5b412a175edfb1633b85 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/123637809) * b78b64a82ed2a9a92886095ec42f06d5082ad830 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/123671219) * 5145a0b9d6b320456bb971d96b9cc47707c8fd28 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125476639) * 0d4d944c28c59ca1caa6c453c347ec786b40d245 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125762588) * 91552c3804f5e96cc573e6ed48756f2b54c037d4 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125844084)
[jira] [Commented] (FLINK-13952) PartitionableTableSink can not work with OverwritableTableSink
[ https://issues.apache.org/jira/browse/FLINK-13952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922319#comment-16922319 ] Rui Li commented on FLINK-13952: [~lzljs3620320] Thanks for reporting the issue. Let me know if you want to work on this; otherwise I'll submit a PR for it. > PartitionableTableSink can not work with OverwritableTableSink > -- > > Key: FLINK-13952 > URL: https://issues.apache.org/jira/browse/FLINK-13952 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Reporter: Jingsong Lee >Priority: Major > Fix For: 1.10.0 > > > {code:java} > tableSink match { > case partitionableSink: PartitionableTableSink > if partitionableSink.getPartitionFieldNames != null > && partitionableSink.getPartitionFieldNames.nonEmpty => > partitionableSink.setStaticPartition(insertOptions.staticPartitions) > case overwritableTableSink: OverwritableTableSink => > overwritableTableSink.setOverwrite(insertOptions.overwrite) > {code} > This code appears in TableEnvImpl and PlannerBase; setOverwrite will not be invoked when there are static partition columns, because the match expression runs only its first matching case. -- This message was sent by Atlassian Jira (v8.3.2#803003)
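The root cause is that a Scala `match` runs only the first case whose pattern and guard match, so a sink implementing both interfaces never has `setOverwrite` called once the partition case fires. A minimal, self-contained sketch of the bug and one possible fix, expressed as the equivalent Java `if`/`else if` chain (the interface and class names below are illustrative stand-ins, not the actual Table API types):

```java
import java.util.Collections;
import java.util.Map;

// Stand-in interfaces; names mirror the Flink ones for readability only.
interface PartitionableSink { void setStaticPartition(Map<String, String> parts); }
interface OverwritableSink { void setOverwrite(boolean overwrite); }

class HiveLikeSink implements PartitionableSink, OverwritableSink {
    Map<String, String> staticPartitions = Collections.emptyMap();
    boolean overwrite = false;
    public void setStaticPartition(Map<String, String> parts) { staticPartitions = parts; }
    public void setOverwrite(boolean overwrite) { this.overwrite = overwrite; }
}

class SinkConfigurer {
    // Buggy shape: mirrors the Scala `match`, where only the first matching
    // branch runs, so a sink implementing both interfaces is never marked
    // as overwrite once static partitions are present.
    static void configureBuggy(Object sink, Map<String, String> parts, boolean ow) {
        if (sink instanceof PartitionableSink && !parts.isEmpty()) {
            ((PartitionableSink) sink).setStaticPartition(parts);
        } else if (sink instanceof OverwritableSink) {
            ((OverwritableSink) sink).setOverwrite(ow);
        }
    }

    // Fix: check the two capabilities independently.
    static void configureFixed(Object sink, Map<String, String> parts, boolean ow) {
        if (sink instanceof PartitionableSink && !parts.isEmpty()) {
            ((PartitionableSink) sink).setStaticPartition(parts);
        }
        if (sink instanceof OverwritableSink) {
            ((OverwritableSink) sink).setOverwrite(ow);
        }
    }
}
```

With `configureBuggy`, a sink that is both partitionable and overwritable keeps `overwrite == false` whenever static partitions are set; `configureFixed` applies both settings.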
[jira] [Closed] (FLINK-13586) Method ClosureCleaner.clean broke backward compatibility between 1.8.0 and 1.8.1
[ https://issues.apache.org/jira/browse/FLINK-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aljoscha Krettek closed FLINK-13586. Resolution: Fixed Resolved on release-1.8 in 655c78f31b45ec3a96e53e269fca64b4682025ff > Method ClosureCleaner.clean broke backward compatibility between 1.8.0 and > 1.8.1 > > > Key: FLINK-13586 > URL: https://issues.apache.org/jira/browse/FLINK-13586 > Project: Flink > Issue Type: Bug > Components: API / DataStream >Affects Versions: 1.8.1, 1.9.0, 1.10.0 >Reporter: Gaël Renoux >Assignee: Aljoscha Krettek >Priority: Major > Labels: pull-request-available > Fix For: 1.8.2 > > Time Spent: 20m > Remaining Estimate: 0h > > Method clean in org.apache.flink.api.java.ClosureCleaner received a new > parameter in Flink 1.8.1. This class is noted as internal, but is used in the > Kafka connectors (in > org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase). > The Kafka connectors library is not provided by the server, and must be set > up as a dependency with compile scope (see > https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/kafka.html#usage, > or the Maven project template). Any project using those connectors and built > with 1.8.0 cannot be deployed on a 1.8.1 Flink server, because it would > target the old method. > => This method needs a fallback with the original two arguments (setting a > default value of RECURSIVE for the level argument). -- This message was sent by Atlassian Jira (v8.3.2#803003)
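The fallback described above has to be a real overload rather than a default parameter, because only a separate two-argument method restores the signature that 1.8.0-compiled jars link against. A simplified, hypothetical sketch (the enum and return value exist only to make the behavior observable; they are not the real ClosureCleaner internals):

```java
// Illustrative stand-in for the cleaner level argument added in 1.8.1.
enum CleanLevel { NONE, TOP_LEVEL, RECURSIVE }

class ClosureCleanerSketch {
    // Three-argument signature introduced in 1.8.1 (cleaning logic elided;
    // the level is returned only so the sketch can be exercised).
    static CleanLevel clean(Object func, CleanLevel level, boolean checkSerializable) {
        return level;
    }

    // Restored two-argument overload: code compiled against the 1.8.0
    // signature keeps linking, and cleaning defaults to RECURSIVE.
    static CleanLevel clean(Object func, boolean checkSerializable) {
        return clean(func, CleanLevel.RECURSIVE, checkSerializable);
    }
}
```

Old callers invoke `clean(func, checkSerializable)` unchanged and get the pre-1.8.1 recursive behavior; new callers can still pass an explicit level.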
[GitHub] [flink] flinkbot edited a comment on issue #9591: [FLINK-13937][docs] Fix the error of the hive connector dependency ve…
flinkbot edited a comment on issue #9591: [FLINK-13937][docs] Fix the error of the hive connector dependency ve… URL: https://github.com/apache/flink/pull/9591#issuecomment-527020451 ## CI report: * 9f75a4f6da30d5e22fa8594a628ce1e937b5eb10 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125389706) * 6e3b8004605f8a9deb6adf5b7c5444bc73f3102e : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125858224) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#discussion_r320693449 ## File path: flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/zookeeper/ZKClientHAServices.java ## @@ -0,0 +1,57 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.runtime.highavailability.zookeeper; + +import org.apache.flink.configuration.Configuration; +import org.apache.flink.runtime.highavailability.ClientHighAvailabilityServices; +import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService; +import org.apache.flink.runtime.util.ZooKeeperUtils; + +import org.apache.curator.framework.CuratorFramework; + +import javax.annotation.Nonnull; + +/** + * ZooKeeper based implementation for {@link ClientHighAvailabilityServices}. + */ +public class ZKClientHAServices implements ClientHighAvailabilityServices { Review comment: It's OK to me. I have a short offline discussion with Andrey about use widely-used abbr. in our community such as "HA" "ZK" and "Chk". Maybe it worth a community-wide discussion but let's keep consistency here. 
[jira] [Commented] (FLINK-12482) Make checkpoint trigger/notifyComplete run via the mailbox queue
[ https://issues.apache.org/jira/browse/FLINK-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922371#comment-16922371 ] Alex commented on FLINK-12482: -- PR: [https://github.com/apache/flink/pull/9564] > Make checkpoint trigger/notifyComplete run via the mailbox queue > > > Key: FLINK-12482 > URL: https://issues.apache.org/jira/browse/FLINK-12482 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Task >Reporter: Stefan Richter >Assignee: Alex >Priority: Major > > For the stream source, we also need to enqueue checkpoint related signals > (trigger, notifyComplete) to the mailbox now so that they run in the stream > task's main-thread. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[GitHub] [flink] flinkbot edited a comment on issue #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
flinkbot edited a comment on issue #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#issuecomment-527816485 ## CI report: * a658808b883045ccb7d6c5124be2130de8a79fbd : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125860328) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] jinglining commented on issue #9601: [FLINK-13894][web]Web Ui add log url for subtask of vertex
jinglining commented on issue #9601: [FLINK-13894][web]Web Ui add log url for subtask of vertex URL: https://github.com/apache/flink/pull/9601#issuecomment-527860492 > This is rather unintuitive. The column key is "ID" but the values links to some log file. > We need a better column key, and imo shouldn't directly link to the log file but instead the page of the host TE. This is part of the logging change: it lets users get to a job's logs easily, which is why the log link was added. I have also discussed this with Till. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
flinkbot edited a comment on issue #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#issuecomment-527816485 ## CI report: * a658808b883045ccb7d6c5124be2130de8a79fbd : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125860328) * dad2525702bf24e0f0ecaf6f5f69a55f8da0b8f6 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125876658) * 1cb00f6d0ec01d7e2b5fd291eb4a503920888545 : UNKNOWN This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file replication config for yarn configuration
flinkbot edited a comment on issue #8303: [FLINK-12343] [flink-yarn] add file replication config for yarn configuration URL: https://github.com/apache/flink/pull/8303#issuecomment-511684151 ## CI report: * 6a7ca58b4a04f6dce250045e021702e67e82b893 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119421914) * 4d38a8df0d59734c4b2386689a2f17b9f2b44b12 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119441376) * 9c14836f8639e98d58cf7bb32e38b938b3843994 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/119577044) * 76186776c5620598a19234245bbd05dfdfb1c62c : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/120113740) * 628ca7b316ad3968c90192a47a84dd01f26e2578 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/122381349) * d204a725ff3c8a046cbd1b84e34d9e3ae8aafeac : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/123620485) * 143efadbdb6c4681569d5b412a175edfb1633b85 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/123637809) * b78b64a82ed2a9a92886095ec42f06d5082ad830 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/123671219) * 5145a0b9d6b320456bb971d96b9cc47707c8fd28 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125476639) * 0d4d944c28c59ca1caa6c453c347ec786b40d245 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125762588) * 91552c3804f5e96cc573e6ed48756f2b54c037d4 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/125844084) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Closed] (FLINK-13779) PrometheusPushGatewayReporter support push metrics with groupingKey
[ https://issues.apache.org/jira/browse/FLINK-13779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chesnay Schepler closed FLINK-13779. Fix Version/s: 1.10.0 Resolution: Fixed master: cf4258533b7e3a0d62613d3e00e306d21fb2b649 > PrometheusPushGatewayReporter support push metrics with groupingKey > --- > > Key: FLINK-13779 > URL: https://issues.apache.org/jira/browse/FLINK-13779 > Project: Flink > Issue Type: New Feature > Components: Runtime / Metrics >Reporter: Kaibo Zhou >Assignee: Kaibo Zhou >Priority: Minor > Labels: pull-request-available > Fix For: 1.10.0 > > Time Spent: 20m > Remaining Estimate: 0h > > The Prometheus push gateway java SDK support send metrics with _groupingKey_, > see > [doc|https://prometheus.github.io/client_java/io/prometheus/client/exporter/PushGateway.html#push-io.prometheus.client.CollectorRegistry-java.lang.String-java.util.Map-]. > > This feature will make it convenient for users to identify, group or filter > their metrics by defining _groupingKey (optional)_. The user does not need to > configure this by default, and the default behavior remains the same. > -- This message was sent by Atlassian Jira (v8.3.2#803003)
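For illustration, an optional grouping key supplied as a single reporter option can be parsed into the `Map` that the Prometheus client's `PushGateway.push` expects. The `k1=v1;k2=v2` syntax below is an assumption made for this sketch, not necessarily the exact format the reporter adopted:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class GroupingKeyParser {
    // Parses a string such as "job=etl;env=prod" into an ordered map.
    // An empty or absent option yields an empty map, preserving the
    // default push behavior described in the issue.
    static Map<String, String> parse(String raw) {
        Map<String, String> result = new LinkedHashMap<>();
        for (String entry : raw.split(";")) {
            String kv = entry.trim();
            if (kv.isEmpty()) continue;
            int eq = kv.indexOf('=');
            if (eq <= 0) continue; // skip malformed entries rather than fail the reporter
            result.put(kv.substring(0, eq), kv.substring(eq + 1));
        }
        return result;
    }
}
```

Skipping malformed entries (instead of throwing) keeps a bad metrics option from taking down the reporter; logging a warning at that point would be a reasonable refinement.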
[jira] [Created] (FLINK-13953) Facilitate enabling new Scheduler in MiniCluster Tests
Gary Yao created FLINK-13953: Summary: Facilitate enabling new Scheduler in MiniCluster Tests Key: FLINK-13953 URL: https://issues.apache.org/jira/browse/FLINK-13953 Project: Flink Issue Type: Sub-task Components: Runtime / Coordination, Tests Reporter: Gary Yao Assignee: Gary Yao Currently, tests using the {{MiniCluster}} use the legacy scheduler by default. Once the new scheduler is implemented, we should run tests with the new scheduler enabled. However, it is not expected that all tests will pass immediately. Therefore, it should be possible to enable the new scheduler for a subset of tests. *Acceptance Criteria* * Tests using {{MiniCluster}} are run on a per-commit basis (on Travis) against new scheduler and also legacy scheduler -- This message was sent by Atlassian Jira (v8.3.2#803003)
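One lightweight way to meet the acceptance criterion is a per-run toggle that tests consult when building their MiniCluster configuration, so the same suite can run on Travis once per scheduler. The system property and setting values in this sketch are hypothetical names chosen for illustration, not options that exist in Flink:

```java
class SchedulerToggle {
    // Hypothetical property; a CI profile would set -Dflink.tests.scheduler=ng.
    static final String PROPERTY = "flink.tests.scheduler";

    static boolean useNewScheduler() {
        return "ng".equals(System.getProperty(PROPERTY));
    }

    // Value a test would place into its MiniCluster configuration; the
    // legacy scheduler stays the default so unmigrated tests keep passing.
    static String schedulerSetting() {
        return useNewScheduler() ? "ng" : "legacy";
    }
}
```

Because the default is the legacy scheduler, tests that are not yet expected to pass with the new scheduler are unaffected until the property is set for them.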
[jira] [Commented] (FLINK-4399) Add support for oversized messages
[ https://issues.apache.org/jira/browse/FLINK-4399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922281#comment-16922281 ] Till Rohrmann commented on FLINK-4399: -- Sounds good to me [~SleePy]. I guess you wanted to say "it's not a critical issue", right? > Add support for oversized messages > -- > > Key: FLINK-4399 > URL: https://issues.apache.org/jira/browse/FLINK-4399 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination > Environment: FLIP-6 feature branch >Reporter: Stephan Ewen >Assignee: Biao Liu >Priority: Major > Labels: flip-6 > > Currently, messages larger than the maximum Akka Framesize cause an error > when being transported. We should add a way to pass messages that are larger > than the Framesize, as may happen for: > - {{collect()}} calls that collect large data sets (via accumulators) > - Job submissions and operator deployments where the functions closures are > large (for example because it contains large pre-loaded data) > - Function restore in cases where restored state is larger than > checkpointed state (union state) > I suggest to use the {{BlobManager}} to transfer large payload. > - On the sender side, oversized messages are stored under a transient blob > (which is deleted after first retrieval, or after a certain number of minutes) > - The sender sends a "pointer to blob message" instead. > - The receiver grabs the message from the blob upon receiving the pointer > message > The RPC Service should be optionally initializable with a "large message > handler" which is internally the {{BlobManager}}. -- This message was sent by Atlassian Jira (v8.3.2#803003)
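The suggested scheme can be modeled in a few lines: payloads above the frame-size limit are parked in a transient store on the sender side, and only a small pointer message crosses the RPC channel; the receiver resolves the pointer and the blob is deleted on first retrieval. Everything below (the in-memory store, the threshold, the message shapes) is an illustrative stand-in for the proposed BlobManager-based design, not Flink code:

```java
import java.util.HashMap;
import java.util.Map;

class TransientBlobStore {
    private final Map<String, byte[]> blobs = new HashMap<>();
    private int counter = 0;

    String put(byte[] payload) {
        String key = "blob-" + (++counter);
        blobs.put(key, payload);
        return key;
    }

    // Deleted on first retrieval, matching the transient-blob semantics
    // described in the issue (a TTL would cover never-retrieved blobs).
    byte[] take(String key) {
        return blobs.remove(key);
    }
}

class OversizedMessageChannel {
    private final TransientBlobStore store;
    private final int maxFrameSize;

    OversizedMessageChannel(TransientBlobStore store, int maxFrameSize) {
        this.store = store;
        this.maxFrameSize = maxFrameSize;
    }

    // Sender side: the payload itself if it fits the frame, otherwise a
    // String key acting as the "pointer to blob" message.
    Object send(byte[] payload) {
        return payload.length <= maxFrameSize ? payload : store.put(payload);
    }

    // Receiver side: resolve pointer messages back to the payload.
    byte[] receive(Object msg) {
        return msg instanceof String ? store.take((String) msg) : (byte[]) msg;
    }
}
```

Small messages take the fast path untouched; only oversized ones pay the extra round trip to the store, which matches the intent of keeping the RPC frame size bounded.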
[GitHub] [flink] yangjf2019 commented on a change in pull request #9591: [FLINK-13937][docs] Fix the error of the hive connector dependency ve…
yangjf2019 commented on a change in pull request #9591: [FLINK-13937][docs] Fix the error of the hive connector dependency ve… URL: https://github.com/apache/flink/pull/9591#discussion_r320638910 ## File path: docs/dev/table/hive/index.md ## @@ -79,7 +79,7 @@ To integrate with Hive, users need the following dependencies in their project. org.apache.flink flink-shaded-hadoop-2-uber - 2.7.5-{{site.version}} + 2.7.5-7.0 Review comment: Thanks, I got it, and I have added this sentence after that. `# Plain flink-shaded version is needed for e.g. the hive connector. ` `# Please update the shaded_version once new flink-shaded is released. ` ` shaded_version: "7.0" ` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot commented on issue #9608: [FLINK-13776][table] Introduce new interfaces to BuiltInFunctionDefinition
flinkbot commented on issue #9608: [FLINK-13776][table] Introduce new interfaces to BuiltInFunctionDefinition URL: https://github.com/apache/flink/pull/9608#issuecomment-527810896 ## CI report: * 44ce459cb0c516fd5fb0d0b7aa775231d0f572a9 : UNKNOWN This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9608: [FLINK-13776][table] Introduce new interfaces to BuiltInFunctionDefinition
flinkbot edited a comment on issue #9608: [FLINK-13776][table] Introduce new interfaces to BuiltInFunctionDefinition URL: https://github.com/apache/flink/pull/9608#issuecomment-527810896 ## CI report: * 44ce459cb0c516fd5fb0d0b7aa775231d0f572a9 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/125858277) * a79f328a66d7903ce7fe9c12af57380ca4e1cbdd : UNKNOWN This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot commented on issue #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
flinkbot commented on issue #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#issuecomment-527816485 ## CI report: * a658808b883045ccb7d6c5124be2130de8a79fbd : UNKNOWN This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Created] (FLINK-13954) Clean up ExecutionEnvironment / JobSubmission code paths
Kostas Kloudas created FLINK-13954: -- Summary: Clean up ExecutionEnvironment / JobSubmission code paths Key: FLINK-13954 URL: https://issues.apache.org/jira/browse/FLINK-13954 Project: Flink Issue Type: Improvement Components: Client / Job Submission Affects Versions: 1.9.0 Reporter: Kostas Kloudas This is an umbrella issue to serve as a hub for all issues related to job submission / (stream) execution environment refactoring. This issue does not change any existing functionality, but it targets to clean up / rearrange the code in the relevant components so that further changes are easier to apply. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Updated] (FLINK-13946) Remove deactivated JobSession-related code.
[ https://issues.apache.org/jira/browse/FLINK-13946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kostas Kloudas updated FLINK-13946: --- Parent: FLINK-13954 Issue Type: Sub-task (was: Improvement) > Remove deactivated JobSession-related code. > --- > > Key: FLINK-13946 > URL: https://issues.apache.org/jira/browse/FLINK-13946 > Project: Flink > Issue Type: Sub-task > Components: Client / Job Submission >Affects Versions: 1.9.0 >Reporter: Kostas Kloudas >Assignee: Kostas Kloudas >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > This issue refers to removing the code related to job session as described in > [FLINK-2097|https://issues.apache.org/jira/browse/FLINK-2097]. The feature > is deactivated, as pointed by the comment > [here|https://github.com/apache/flink/blob/master/flink-java/src/main/java/org/apache/flink/api/java/ExecutionEnvironment.java#L285] > and it complicates the code paths related to job submission, namely the > lifecycle of the Remote and LocalExecutors. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[GitHub] [flink] flinkbot edited a comment on issue #9595: [FLINK-13586] Make ClosureCleaner.clean() backwards compatible with 1.8.0
flinkbot edited a comment on issue #9595: [FLINK-13586] Make ClosureCleaner.clean() backwards compatible with 1.8.0 URL: https://github.com/apache/flink/pull/9595#issuecomment-527065432 ## CI report: * adf5bc3cb159606d5ba0c659dfb5d0b68b814c3c : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125404540) * a08fa2a229214645b5e8b1ba9a260f427376987f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125567671) * 148827dece734ce314daf5114be1f72607db : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125588673) * 655c78f31b45ec3a96e53e269fca64b4682025ff : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125854235) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #8859: [FLINK-12905][table-planner] Enable querying CatalogViews in legacy planner
flinkbot edited a comment on issue #8859: [FLINK-12905][table-planner] Enable querying CatalogViews in legacy planner URL: https://github.com/apache/flink/pull/8859#issuecomment-518729517 ## CI report: * c2adafa7ee87b18ba6af0b5f518251150a8da386 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/122142106) * 18f834bbe113eb88826fc04cdc017a856e10d3d0 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/125869236) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
flinkbot edited a comment on issue #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#issuecomment-527816485 ## CI report: * a658808b883045ccb7d6c5124be2130de8a79fbd : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125860328) * dad2525702bf24e0f0ecaf6f5f69a55f8da0b8f6 : UNKNOWN This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] flinkbot edited a comment on issue #8859: [FLINK-12905][table-planner] Enable querying CatalogViews in legacy planner
flinkbot edited a comment on issue #8859: [FLINK-12905][table-planner] Enable querying CatalogViews in legacy planner URL: https://github.com/apache/flink/pull/8859#issuecomment-518729517 ## CI report: * c2adafa7ee87b18ba6af0b5f518251150a8da386 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/122142106) * 18f834bbe113eb88826fc04cdc017a856e10d3d0 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125869236) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (FLINK-13936) NOTICE-binary is outdated
[ https://issues.apache.org/jira/browse/FLINK-13936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922287#comment-16922287 ] Till Rohrmann commented on FLINK-13936: --- Do you have time to tackle this issue, [~Zentol], since it is a blocker? > NOTICE-binary is outdated > - > > Key: FLINK-13936 > URL: https://issues.apache.org/jira/browse/FLINK-13936 > Project: Flink > Issue Type: Bug > Components: Build System >Affects Versions: 1.10.0 >Reporter: Chesnay Schepler >Priority: Blocker > Fix For: 1.10.0 > > > The NOTICE-binary wasn't updated for the click-event example, the state > processing API and changes to the table API packaging. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Commented] (FLINK-13945) Vendor-repos Maven profile doesn't exist in flink-shaded
[ https://issues.apache.org/jira/browse/FLINK-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922285#comment-16922285 ] Till Rohrmann commented on FLINK-13945: --- Thanks for reporting this issue [~Elise Ramé]. [~Zentol] I guess it could be helpful to update the documentation how to build flink shaded against a vendor specific Hadoop version and update the Flink documentation accordingly. > Vendor-repos Maven profile doesn't exist in flink-shaded > > > Key: FLINK-13945 > URL: https://issues.apache.org/jira/browse/FLINK-13945 > Project: Flink > Issue Type: Bug > Components: BuildSystem / Shaded >Affects Versions: shaded-7.0, shaded-8.0, shaded-9.0 >Reporter: Elise Ramé >Priority: Major > > According to > [documentation|https://ci.apache.org/projects/flink/flink-docs-release-1.9/flinkDev/building.html#custom--vendor-specific-versions], > to build Flink against a vendor specific Hadoop version it is necessary to > build flink-shaded against this version first : > {code:bash} > mvn clean install -DskipTests -Pvendor-repos -Dhadoop.version= > {code} > vendor-repos profile has to be activated to include Hadoop vendors > repositories. > But Maven cannot find expected Hadoop dependencies and returns an error > because vendor-repos profile isn't defined in flink-shaded. 
> Example using flink-shaded 8.0 and HDP 2.6.5 Hadoop version : > {code:bash} > mvn clean install -DskipTests -Pvendor-repos > -Dhadoop.version=2.7.3.2.6.5.0-292 > {code} > {code:bash} > [INFO] ---< org.apache.flink:flink-shaded-hadoop-2 > >--- > [INFO] Building flink-shaded-hadoop-2 2.7.3.2.6.5.0-292-8.0 > [10/11] > [INFO] [ jar > ]- > [WARNING] The POM for org.apache.hadoop:hadoop-common:jar:2.7.3.2.6.5.0-292 > is missing, no dependency information available > [WARNING] The POM for org.apache.hadoop:hadoop-hdfs:jar:2.7.3.2.6.5.0-292 is > missing, no dependency information available > [WARNING] The POM for > org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.7.3.2.6.5.0-292 is > missing, no dependency information available > [WARNING] The POM for > org.apache.hadoop:hadoop-yarn-client:jar:2.7.3.2.6.5.0-292 is missing, no > dependency information available > [WARNING] The POM for > org.apache.hadoop:hadoop-yarn-common:jar:2.7.3.2.6.5.0-292 is missing, no > dependency information available > [INFO] > > [INFO] Reactor Summary: > [INFO] > [INFO] flink-shaded 8.0 ... SUCCESS [ 2.122 > s] > [INFO] flink-shaded-force-shading 8.0 . SUCCESS [ 0.607 > s] > [INFO] flink-shaded-asm-7 7.1-8.0 . SUCCESS [ 0.667 > s] > [INFO] flink-shaded-guava-18 18.0-8.0 . SUCCESS [ 1.452 > s] > [INFO] flink-shaded-netty-4 4.1.39.Final-8.0 .. SUCCESS [ 4.597 > s] > [INFO] flink-shaded-netty-tcnative-dynamic 2.0.25.Final-8.0 SUCCESS [ 0.620 > s] > [INFO] flink-shaded-jackson-parent 2.9.8-8.0 .. SUCCESS [ 0.018 > s] > [INFO] flink-shaded-jackson-2 2.9.8-8.0 ... SUCCESS [ 0.914 > s] > [INFO] flink-shaded-jackson-module-jsonSchema-2 2.9.8-8.0 . SUCCESS [ 0.627 > s] > [INFO] flink-shaded-hadoop-2 2.7.3.2.6.5.0-292-8.0 FAILURE [ 0.047 > s] > [INFO] flink-shaded-hadoop-2-uber 2.7.3.2.6.5.0-292-8.0 ... 
SKIPPED > [INFO] > > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 11.947 s > [INFO] Finished at: 2019-09-03T16:52:59+02:00 > [INFO] > > [WARNING] The requested profile "vendor-repos" could not be activated because > it does not exist. > [ERROR] Failed to execute goal on project flink-shaded-hadoop-2: Could not > resolve dependencies for project > org.apache.flink:flink-shaded-hadoop-2:jar:2.7.3.2.6.5.0-292-8.0: The > following artifacts could not be resolved: > org.apache.hadoop:hadoop-common:jar:2.7.3.2.6.5.0-292, > org.apache.hadoop:hadoop-hdfs:jar:2.7.3.2.6.5.0-292, > org.apache.hadoop:hadoop-mapreduce-client-core:jar:2.7.3.2.6.5.0-292, > org.apache.hadoop:hadoop-yarn-client:jar:2.7.3.2.6.5.0-292, > org.apache.hadoop:hadoop-yarn-common:jar:2.7.3.2.6.5.0-292: Failure to find > org.apache.hadoop:hadoop-common:jar:2.7.3.2.6.5.0-292 in > https://repo.maven.apache.org/maven2 was cached in the local repository, > resolution will not be reattempted until the update interval of central has > elapsed or updates are forced -> [Help 1] > [ERROR] > [ERROR] To see the full stack
[jira] [Updated] (FLINK-13944) Table.toAppendStream: InvalidProgramException: Table program cannot be compiled.
[ https://issues.apache.org/jira/browse/FLINK-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefano updated FLINK-13944: Environment: {code:bash} $ java -version openjdk version "1.8.0_222" OpenJDK Runtime Environment (build 1.8.0_222-8u222-b10-1ubuntu1~18.04.1-b10 OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode {code} {{--}} {code:bash} $ scala -version Scala code runner version 2.11.12 -- Copyright 2002-2017, LAMP/EPFL {code} {{--}} {{build.}}{{sbt}} [...] ThisBuild / scalaVersion := "2.11.12" val flinkVersion = "1.9.0" val flinkDependencies = Seq( "org.apache.flink" %% "flink-scala" % flinkVersion % "provided", "org.apache.flink" %% "flink-streaming-scala" % flinkVersion % "provided", "org.apache.flink" %% "flink-table-planner" % flinkVersion % "provided") [...] was: {{$ java -version}} {{ openjdk version "1.8.0_222"}} {{ OpenJDK Runtime Environment (build 1.8.0_222-8u222-b10-1ubuntu1~18.04.1-b10)}} {{ OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)}} {{--}} {{$ scala -version}} {{Scala code runner version 2.11.12 -- Copyright 2002-2017, LAMP/EPFL}} {{--}} {{build.}}{{sbt}} [...] ThisBuild / scalaVersion := "2.11.12" val flinkVersion = "1.9.0" val flinkDependencies = Seq( "org.apache.flink" %% "flink-scala" % flinkVersion % "provided", "org.apache.flink" %% "flink-streaming-scala" % flinkVersion % "provided", "org.apache.flink" %% "flink-table-planner" % flinkVersion % "provided") [...] > Table.toAppendStream: InvalidProgramException: Table program cannot be > compiled. 
> > > Key: FLINK-13944 > URL: https://issues.apache.org/jira/browse/FLINK-13944 > Project: Flink > Issue Type: Bug > Components: API / Scala, Table SQL / API >Affects Versions: 1.8.1, 1.9.0 > Environment: {code:bash} > $ java -version > openjdk version "1.8.0_222" > OpenJDK Runtime Environment (build 1.8.0_222-8u222-b10-1ubuntu1~18.04.1-b10 > OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode > {code} > {{--}} > {code:bash} > $ scala -version > Scala code runner version 2.11.12 -- Copyright 2002-2017, LAMP/EPFL > {code} > {{--}} > {{build.}}{{sbt}} > [...] > ThisBuild / scalaVersion := "2.11.12" > val flinkVersion = "1.9.0" > val flinkDependencies = Seq( > "org.apache.flink" %% "flink-scala" % flinkVersion % "provided", > "org.apache.flink" %% "flink-streaming-scala" % flinkVersion % "provided", > "org.apache.flink" %% "flink-table-planner" % flinkVersion % "provided") > [...] > >Reporter: Stefano >Priority: Major > Attachments: app.zip > > > (The project in which I face the error is attached.) 
> {{Using: Scala streaming API and the StreamTableEnvironment.}} > {{Given the classes:}} > {code:scala} > object EntityType extends Enumeration { > type EntityType = Value > val ACTIVITY = Value > } > sealed trait Entity extends Serializable > case class Activity(card_id: Long, date_time: Timestamp, second: Long, > station_id: Long, station_name: String, activity_code: Long, amount: Long) > extends Entity > {code} > What I try to do to convert a table after selection to an appendStream: > {code:scala} > /** activity table **/ > val activityDataStream = partialComputation1 > .filter(_._1 == EntityType.ACTIVITY) > .map(x => x._3.asInstanceOf[Activity]) > tableEnv.registerDataStream("activity", activityDataStream, 'card_id, > 'date_time, 'second, 'station_id, 'station_name, 'activity_code, 'amount) > val selectedTable = tableEnv.scan("activity").select("card_id, second") > selectedTable.printSchema() > // root > // |-- card_id: BIGINT > // |-- second: BIGINT > // ATTEMPT 1 > //val output = tableEnv.toAppendStream[(Long, Long)](selectedTable) > //output.print > // ATTEMPT 2 > //val output = tableEnv.toAppendStream[(java.lang.Long, > java.lang.Long)](selectedTable) > //output.print > // ATTEMPT 3 > //val output = tableEnv.toAppendStream[Row](selectedTable) > //output.print > // ATTEMPT 4 > case class Test(card_id: Long, second: Long) extends Entity > val output = tableEnv.toAppendStream[Test](selectedTable) > output.print > {code} > In any of the attempts the error I get is always the same: > {code:bash} > $ flink run target/scala-2.11/app-assembly-0.1.jar > Starting execution of program > root > |-- card_id: BIGINT > |-- second: BIGINT > > The program finished with the following exception: > org.apache.flink.client.program.ProgramInvocationException: Job failed. 
> (JobID: 9954823e0b55a8140f78be6868c85399) > at > org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:262) > at > org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338) > at
[GitHub] [flink] TisonKun commented on issue #9509: [FLINK-13750][client] Separate client/cluster-side high-availability services
TisonKun commented on issue #9509: [FLINK-13750][client] Separate client/cluster-side high-availability services URL: https://github.com/apache/flink/pull/9509#issuecomment-527812143 @tillrohrmann I have opened a new pr #9609 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] TisonKun opened a new pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
TisonKun opened a new pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609 ## What is the purpose of the change As discussed in [FLINK-13750](https://issues.apache.org/jira/browse/FLINK-13750) we decided to separate client-side/cluster-side high-availability services, in order to avoid issues like FLINK-13500, which initialized the blob store service without auth. cc @tillrohrmann @zentol ## Brief change log + Introduce a new interface `ClientHighAvailabilityServices` and implementations `StandaloneClientHAServices` and `ZKClientHAServices`. This interface includes only one method, `getWebMonitorLeaderRetriever`, since the client is only supposed to communicate with the WebMonitor. + Add a method `createClientHAServices` in `HighAvailabilityServicesFactory` for customization. By default it reuses `createHAServices` and `getWebMonitorLeaderRetriever`; the default behavior is for backward compatibility. + Add a `createClientHAServices` util in `HighAvailabilityServicesUtils`, and a `getWebMonitorAddress` util in `WebMonitorUtils`. + Remove unused constructors in `RestClusterClient` and `ClusterClient`. + Adjust existing code. ## Verifying this change This change added tests and can be verified as follows: + Functionality should be covered by existing tests, since we just separated components and added no new features.
## Does this pull request potentially affect one of the following parts: - Dependencies (does it add or upgrade a dependency): (no) - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no) - The serializers: (no) - The runtime per-record code paths (performance sensitive): (no) - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (don't know; we introduce a client HA service, but it has nothing to do with recovery) - The S3 file system connector: (no) ## Documentation - Does this pull request introduce a new feature? (no) - If yes, how is the feature documented? (not applicable)
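The split described in the PR can be sketched in plain Java. The following is a minimal, hypothetical sketch of the client-side interface and its standalone implementation; the real `ClientHighAvailabilityServices` returns a `LeaderRetrievalService`, which is simplified here to a `Supplier<String>` so the example stays self-contained.

```java
import java.util.function.Supplier;

// Client-side HA services: the client only ever talks to the WebMonitor
// (REST endpoint), so a single retrieval method is enough.
interface ClientHighAvailabilityServices extends AutoCloseable {
    Supplier<String> getWebMonitorLeaderRetriever();
}

// Standalone (non-HA) mode: the leader never changes, so retrieval
// returns a fixed, pre-configured address.
class StandaloneClientHAServices implements ClientHighAvailabilityServices {
    private final String webMonitorAddress;

    StandaloneClientHAServices(String webMonitorAddress) {
        this.webMonitorAddress = webMonitorAddress;
    }

    @Override
    public Supplier<String> getWebMonitorLeaderRetriever() {
        return () -> webMonitorAddress;
    }

    @Override
    public void close() {
        // nothing to release in standalone mode
    }
}

public class ClientHaSketch {
    public static void main(String[] args) throws Exception {
        try (ClientHighAvailabilityServices ha =
                 new StandaloneClientHAServices("http://localhost:8081")) {
            System.out.println(ha.getWebMonitorLeaderRetriever().get());
        }
    }
}
```

A ZooKeeper-backed implementation would instead resolve the address from the REST server leader path; keeping the interface this narrow is what lets the client side avoid pulling in the full cluster-side HA machinery.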
[GitHub] [flink] flinkbot edited a comment on issue #9591: [FLINK-13937][docs] Fix the error of the hive connector dependency ve…
flinkbot edited a comment on issue #9591: [FLINK-13937][docs] Fix the error of the hive connector dependency ve… URL: https://github.com/apache/flink/pull/9591#issuecomment-527020451 ## CI report: * 9f75a4f6da30d5e22fa8594a628ce1e937b5eb10 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125389706) * 6e3b8004605f8a9deb6adf5b7c5444bc73f3102e : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/125858224)
[GitHub] [flink] aljoscha commented on issue #9565: [FLINK-12501] Use SpecificRecord.getSchema in AvroFactory
aljoscha commented on issue #9565: [FLINK-12501] Use SpecificRecord.getSchema in AvroFactory URL: https://github.com/apache/flink/pull/9565#issuecomment-527830085 @debasishg We did some changes because I forgot to also adapt the `AvroSerializerSnapshot`. Could you again check if this works with your Avrohugger classes? Maybe also check whether you can successfully take a savepoint and restore, because that part was not previously covered by the changes.
[GitHub] [flink] flinkbot edited a comment on issue #9565: [FLINK-12501] Use SpecificRecord.getSchema in AvroFactory
flinkbot edited a comment on issue #9565: [FLINK-12501] Use SpecificRecord.getSchema in AvroFactory URL: https://github.com/apache/flink/pull/9565#issuecomment-526522446 ## CI report: * 247e51b1cba1bed7eaa64798623865e13c2a8c8b : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/125200467) * 0533cc9722194964f630a7de8b9ffd7a2dac5809 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125202045) * 7182265d43e18df62e81e43fe5c690453212a8ab : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/125402684) * 697274cbc1dd1a008b5072fac794509d051b8111 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125567625) * f6e06c14234fd5f1ba65550c81cc50202d25937a : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/125866968)
[GitHub] [flink] zentol commented on issue #9601: [FLINK-13894][web]Web Ui add log url for subtask of vertex
zentol commented on issue #9601: [FLINK-13894][web]Web Ui add log url for subtask of vertex URL: https://github.com/apache/flink/pull/9601#issuecomment-527839667 This is rather unintuitive. The column key is "ID" but the values link to some log file. We need a better column key, and imo we shouldn't directly link to the log file but instead to the page of the host TE.
[GitHub] [flink] TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#discussion_r320690917 ## File path: flink-clients/src/test/java/org/apache/flink/client/RemoteExecutorHostnameResolutionTest.java ## @@ -51,35 +50,34 @@ public static void check() { @Test public void testUnresolvableHostname1() throws Exception { - RemoteExecutor exec = new RemoteExecutor(nonExistingHostname, port); + try { exec.executePlan(getProgram()); fail("This should fail with an ProgramInvocationException"); - } - catch (UnknownHostException ignored) { - // that is what we want! + } catch (UnknownHostException ignored) { + // expected } } @Test public void testUnresolvableHostname2() throws Exception { - - InetSocketAddress add = new InetSocketAddress(nonExistingHostname, port); - RemoteExecutor exec = new RemoteExecutor(add, new Configuration(), - Collections.emptyList(), Collections.emptyList()); + RemoteExecutor exec = new RemoteExecutor( + new InetSocketAddress(nonExistingHostname, port), + new Configuration(), + Collections.emptyList(), + Collections.emptyList()); try { exec.executePlan(getProgram()); fail("This should fail with an ProgramInvocationException"); - } - catch (UnknownHostException ignored) { + } catch (UnknownHostException ignored) { // that is what we want! } } private static Plan getProgram() { ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment(); - env.fromElements(1, 2, 3).output(new DiscardingOutputFormat()); + env.fromElements(1, 2, 3).output(new DiscardingOutputFormat<>()); Review comment: Reverted.
[GitHub] [flink] TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#discussion_r320690369 ## File path: flink-clients/src/main/java/org/apache/flink/client/program/MiniClusterClient.java ## @@ -51,14 +51,13 @@ private final MiniCluster miniCluster; public MiniClusterClient(@Nonnull Configuration configuration, @Nonnull MiniCluster miniCluster) { - super(configuration, miniCluster.getHighAvailabilityServices(), true); - + super(configuration); this.miniCluster = miniCluster; } @Override public void shutdown() throws Exception { - super.shutdown(); + // no op Review comment: Let's do it in a follow-up issue that moves toward a `ClusterClient` interface (instead of an abstract class).
[GitHub] [flink] TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#discussion_r320690636 ## File path: flink-clients/src/main/java/org/apache/flink/client/program/rest/RestClusterClient.java ## @@ -149,29 +153,16 @@ public RestClusterClient(Configuration config, T clusterId) throws Exception { config, null, clusterId, - new ExponentialWaitStrategy(10L, 2000L), - null); - } - - public RestClusterClient( - Configuration config, - T clusterId, - LeaderRetrievalService webMonitorRetrievalService) throws Exception { - this( - config, - null, - clusterId, - new ExponentialWaitStrategy(10L, 2000L), - webMonitorRetrievalService); + new ExponentialWaitStrategy(10L, 2000L)); } @VisibleForTesting RestClusterClient( Configuration configuration, @Nullable RestClient restClient, T clusterId, - WaitStrategy waitStrategy, - @Nullable LeaderRetrievalService webMonitorRetrievalService) throws Exception { + WaitStrategy waitStrategy + ) throws Exception { Review comment: Re-formatted
[GitHub] [flink] TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#discussion_r320690501 ## File path: flink-clients/src/main/java/org/apache/flink/client/program/MiniClusterClient.java ## @@ -157,15 +154,20 @@ public String stopWithSavepoint(JobID jobId, boolean advanceToEndOfEventTime, @N return MiniClusterId.INSTANCE; } - // == - // Legacy methods - // == - @Override public String getWebInterfaceURL() { return miniCluster.getRestAddress().toString(); } + @Override + public void shutDownCluster() { + try { + miniCluster.close(); + } catch (Exception e) { + ExceptionUtils.rethrow(e); + } + } Review comment: Reverted.
[GitHub] [flink] TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#discussion_r320690390 ## File path: flink-clients/src/main/java/org/apache/flink/client/program/MiniClusterClient.java ## @@ -51,14 +51,13 @@ private final MiniCluster miniCluster; public MiniClusterClient(@Nonnull Configuration configuration, @Nonnull MiniCluster miniCluster) { - super(configuration, miniCluster.getHighAvailabilityServices(), true); - + super(configuration); this.miniCluster = miniCluster; } @Override public void shutdown() throws Exception { - super.shutdown(); + // no op Review comment: Reverted.
[GitHub] [flink] TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#discussion_r320690432 ## File path: flink-clients/src/main/java/org/apache/flink/client/program/MiniClusterClient.java ## @@ -157,15 +154,20 @@ public String stopWithSavepoint(JobID jobId, boolean advanceToEndOfEventTime, @N return MiniClusterId.INSTANCE; } - // == - // Legacy methods - // == - @Override public String getWebInterfaceURL() { return miniCluster.getRestAddress().toString(); } + @Override + public void shutDownCluster() { + try { + miniCluster.close(); + } catch (Exception e) { + ExceptionUtils.rethrow(e); + } + } Review comment: Let's do it in a follow-up issue that moves toward a `ClusterClient` interface (instead of an abstract class).
[jira] [Commented] (FLINK-13909) LinkElement does not support different anchors required for localization
[ https://issues.apache.org/jira/browse/FLINK-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922373#comment-16922373 ] Jark Wu commented on FLINK-13909: - Hi [~Zentol], I'm fine with {{chineseDescription}}. I also proposed this in FLIP-54 design doc[1], but some people worried it may make the main code hard to maintain. [1]: https://docs.google.com/document/d/1IQ7nwXqmhCy900t2vQLEL3N2HIdMg-JO8vTzo1BtyKU/edit?disco=Darbx6g > LinkElement does not support different anchors required for localization > > > Key: FLINK-13909 > URL: https://issues.apache.org/jira/browse/FLINK-13909 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.9.0, 1.10.0 >Reporter: Till Rohrmann >Priority: Major > > While addressing FLINK-13898 we realized that the {{LinkElement}} does not > support multiple anchors which are needed to support localisation. Due to the > translation into Chinese the anchors are not the same across Flink's English > and Chinese documentation. > Either we keep anchors the same in both versions or we have a way to support > multiple anchors, one for each localisation. -- This message was sent by Atlassian Jira (v8.3.2#803003)
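As an illustration of the "multiple anchors, one for each localisation" option from the issue description, here is a small hypothetical Java sketch (this is not Flink's actual `LinkElement` API; class and method names are invented): each locale may register its own anchor, and resolution falls back to the default (English) anchor.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical link element carrying one anchor per locale.
class LocalizedLinkElement {
    private final String url;
    private final String defaultAnchor;  // anchor used in the English docs
    private final Map<String, String> localizedAnchors = new HashMap<>();

    LocalizedLinkElement(String url, String defaultAnchor) {
        this.url = url;
        this.defaultAnchor = defaultAnchor;
    }

    // Register a locale-specific anchor (e.g. the Chinese translation's heading id).
    LocalizedLinkElement withAnchor(String locale, String anchor) {
        localizedAnchors.put(locale, anchor);
        return this;
    }

    // Resolve the full link for a locale, falling back to the default anchor.
    String resolve(String locale) {
        return url + "#" + localizedAnchors.getOrDefault(locale, defaultAnchor);
    }
}
```

With this shape the English docs keep their existing anchors, and a translation only needs to register an override where its generated heading id differs.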
[GitHub] [flink] TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#discussion_r320694116 ## File path: flink-runtime/src/main/java/org/apache/flink/runtime/webmonitor/WebMonitorUtils.java ## @@ -142,6 +149,33 @@ private static File resolveFileLocation(String logFilePath) { } } + /** +* Get address of web monitor from configuration. +* +* @param configuration Configuration contains those for WebMonitor. +* @param resolution Whether to try address resolution of the given hostname or not. +* This allows to fail fast in case that the hostname cannot be resolved. +* @return Address of WebMonitor. +*/ + public static String getWebMonitorAddress( + Configuration configuration, + HighAvailabilityServicesUtils.AddressResolution resolution + ) throws UnknownHostException { Review comment: Re-formatted.
[GitHub] [flink] TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#discussion_r320693666 ## File path: flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/zookeeper/ZKClientHAServices.java ## @@ -0,0 +1,57 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.runtime.highavailability.zookeeper; + +import org.apache.flink.configuration.Configuration; +import org.apache.flink.runtime.highavailability.ClientHighAvailabilityServices; +import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService; +import org.apache.flink.runtime.util.ZooKeeperUtils; + +import org.apache.curator.framework.CuratorFramework; + +import javax.annotation.Nonnull; + +/** + * ZooKeeper based implementation for {@link ClientHighAvailabilityServices}. 
+ */ +public class ZKClientHAServices implements ClientHighAvailabilityServices { + + private static final String REST_SERVER_LEADER_PATH = "/rest_server_lock"; + + private final CuratorFramework client; + private final Configuration configuration; + + public ZKClientHAServices( + @Nonnull CuratorFramework client, + @Nonnull Configuration configuration + ) { Review comment: Re-formatted
[jira] [Comment Edited] (FLINK-12481) Make processing time timer trigger run via the mailbox
[ https://issues.apache.org/jira/browse/FLINK-12481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922372#comment-16922372 ] Alex edited comment on FLINK-12481 at 9/4/19 10:48 AM: --- PR: https://github.com/apache/flink/pull/9564 was (Author: 1u0): PR; https://github.com/apache/flink/pull/9564 > Make processing time timer trigger run via the mailbox > -- > > Key: FLINK-12481 > URL: https://issues.apache.org/jira/browse/FLINK-12481 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Task >Reporter: Stefan Richter >Assignee: Alex >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > This sub-task integrates the mailbox with processing time timer triggering. > Those triggers should now be enqueued as mailbox events and picked up by the > stream task's main thread for processing.
[jira] [Commented] (FLINK-12481) Make processing time timer trigger run via the mailbox
[ https://issues.apache.org/jira/browse/FLINK-12481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922372#comment-16922372 ] Alex commented on FLINK-12481: -- PR; https://github.com/apache/flink/pull/9564 > Make processing time timer trigger run via the mailbox > -- > > Key: FLINK-12481 > URL: https://issues.apache.org/jira/browse/FLINK-12481 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Task >Reporter: Stefan Richter >Assignee: Alex >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > This sub-task integrates the mailbox with processing time timer triggering. > Those triggers should now be enqueued as mailbox events and picked up by the > stream task's main thread for processing.
[GitHub] [flink] TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#discussion_r320693818 ## File path: flink-runtime/src/main/java/org/apache/flink/runtime/webmonitor/WebMonitorUtils.java ## @@ -142,6 +149,33 @@ private static File resolveFileLocation(String logFilePath) { } } + /** +* Get address of web monitor from configuration. +* +* @param configuration Configuration contains those for WebMonitor. +* @param resolution Whether to try address resolution of the given hostname or not. +* This allows to fail fast in case that the hostname cannot be resolved. +* @return Address of WebMonitor. +*/ + public static String getWebMonitorAddress( Review comment: Make sense. Moved.
[GitHub] [flink] flinkbot edited a comment on issue #9564: [FLINK-12481][FLINK-12482][FLINK-12958] Streaming runtime: integrate mailbox for timer triggers, checkpoints and AsyncWaitOperator
flinkbot edited a comment on issue #9564: [FLINK-12481][FLINK-12482][FLINK-12958] Streaming runtime: integrate mailbox for timer triggers, checkpoints and AsyncWaitOperator URL: https://github.com/apache/flink/pull/9564#issuecomment-526487220 ## CI report: * 3c6c3d69983136fb8bfa11b28a7ba783dfe61e52 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125188905) * a9d65d8fabd938d47abd801a4686cb185f2fcf68 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125206763) * 1bbbce7f1f81598cb5a454f0e4d25adae6ed75b4 : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/125425831) * d68aa63cea98320f14ff9bbe483776c8873a77ca : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125431268) * 097e1ec9e9e64ad6e9e1de16bc2937acb84ab42e : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/125726843) * 9bc0b14a50cf03c23eb4203aa2637130572047f9 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125739773) * ee6669575eb9481fd7181c3a9be52e615f613e8f : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125755445) * 712fdb428e78fc0d47262c6c755429a37a6be3eb : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125860362)
[jira] [Commented] (FLINK-13838) Support -yta(--yarnshipArchives) arguments in flink run command line
[ https://issues.apache.org/jira/browse/FLINK-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922396#comment-16922396 ] Yang Wang commented on FLINK-13838: --- The use case is that some users already have a tar.gz and want to ship it to all taskmanagers with a specified name. So we do not implement the archive logic on the Flink client side; it should be done by users outside of Flink. > Support -yta(--yarnshipArchives) arguments in flink run command line > > > Key: FLINK-13838 > URL: https://issues.apache.org/jira/browse/FLINK-13838 > Project: Flink > Issue Type: New Feature > Components: Command Line Client >Reporter: Yang Wang >Priority: Major > > Currently we could use --yarnship to transfer jars, files and directory for > cluster and add them to classpath. However, compressed package could not be > supported. If we have a compressed package including some config files, so > files and jars, the --yarnshipArchives will be very useful. > > What’s the difference between -yt and -yta? > -yt [file:///tmp/a.tar.gz] The file will be transferred by Yarn and keep the > original compressed file(not be unpacked) in the workdir of > jobmanager/taskmanager container. > -yta [file:///tmp/a.tar.gz#dict1] The file will be transferred by Yarn and > unpacked to a new directory with name dict1 in the workdir. > > -yta,--yarnshipArchives Ship archives for cluster (t for > transfer), Use ',' to separate > multiple files. The archives could > be > in local file system or distributed > file system. Use URI schema to > specify > which file system the file belongs. > If > schema is missing, would try to get > the archives in local file system. > Use > '#' after the file path to specify a > new name in workdir. (eg: -yta > > file:///tmp/a.tar.gz#dict1,hdfs:///$na > menode_address/tmp/b.tar.gz)
[jira] [Commented] (FLINK-13949) Delete deduplicating JobVertexDetailsInfo.VertexTaskDetail
[ https://issues.apache.org/jira/browse/FLINK-13949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922266#comment-16922266 ] Chesnay Schepler commented on FLINK-13949: -- It's backwards compatible so long as no existing field is changed or removed. > Delete deduplicating JobVertexDetailsInfo.VertexTaskDetail > -- > > Key: FLINK-13949 > URL: https://issues.apache.org/jira/browse/FLINK-13949 > Project: Flink > Issue Type: Improvement > Components: Runtime / REST >Reporter: lining >Assignee: lining >Priority: Major > > As there is SubtaskExecutionAttemptDetailsInfo for subtasks, we can use it > to replace JobVertexDetailsInfo.VertexTaskDetail.
[jira] [Commented] (FLINK-13909) LinkElement does not support different anchors required for localization
[ https://issues.apache.org/jira/browse/FLINK-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922299#comment-16922299 ] Zhu Zhu commented on FLINK-13909: - Here are my thoughts on docs/configs localisation. We should have a key for each statement in docs and configs. And then we can have different translations for those statements, maintained in different localisation files. To simplify the key definition, 1. Doc statement key can be defined in base docs with some special marker, e.g. {{_{ @statement XXX } base statement \{ @endstatement }_}} 2. Config statement key can be defined using its config key with a special prefix, e.g. {{_CONFIG#jobmanager.execution.failover-strategy_}} A heading's anchor, in this way, can be the heading statement key, and thus the same across localisation docs. > LinkElement does not support different anchors required for localization > > > Key: FLINK-13909 > URL: https://issues.apache.org/jira/browse/FLINK-13909 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.9.0, 1.10.0 >Reporter: Till Rohrmann >Priority: Major > > While addressing FLINK-13898 we realized that the {{LinkElement}} does not > support multiple anchors which are needed to support localisation. Due to the > translation into Chinese the anchors are not the same across Flink's English > and Chinese documentation. > Either we keep anchors the same in both versions or we have a way to support > multiple anchors, one for each localisation.
[jira] [Comment Edited] (FLINK-13937) Fix the error of the hive connector dependency version
[ https://issues.apache.org/jira/browse/FLINK-13937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922316#comment-16922316 ] Jeff Yang edited comment on FLINK-13937 at 9/4/19 9:21 AM: --- Thanks [~jark], I have changed the doc, please take a look again. was (Author: highfei2...@126.com): Thanks ,I had changed the doc ,please take a look again. > Fix the error of the hive connector dependency version > > > Key: FLINK-13937 > URL: https://issues.apache.org/jira/browse/FLINK-13937 > Project: Flink > Issue Type: Task > Components: Documentation >Affects Versions: 1.10.0 >Reporter: Jeff Yang >Assignee: Jeff Yang >Priority: Major > Labels: pull-request-available > Fix For: 1.10.0, 1.9.1 > > Time Spent: 10m > Remaining Estimate: 0h > > There is a wrong maven dependency in the hive connector's > [documentation|https://ci.apache.org/projects/flink/flink-docs-master/dev/table/hive/].
[jira] [Commented] (FLINK-13937) Fix the error of the hive connector dependency version
[ https://issues.apache.org/jira/browse/FLINK-13937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922316#comment-16922316 ] Jeff Yang commented on FLINK-13937: --- Thanks, I have changed the doc, please take a look again. > Fix the error of the hive connector dependency version > > > Key: FLINK-13937 > URL: https://issues.apache.org/jira/browse/FLINK-13937 > Project: Flink > Issue Type: Task > Components: Documentation >Affects Versions: 1.10.0 >Reporter: Jeff Yang >Assignee: Jeff Yang >Priority: Major > Labels: pull-request-available > Fix For: 1.10.0, 1.9.1 > > Time Spent: 10m > Remaining Estimate: 0h > > There is a wrong maven dependency in the hive connector's > [documentation|https://ci.apache.org/projects/flink/flink-docs-master/dev/table/hive/].
[GitHub] [flink] flinkbot commented on issue #9611: [FLINK-13936][licensing] Update NOTICE-binary
flinkbot commented on issue #9611: [FLINK-13936][licensing] Update NOTICE-binary URL: https://github.com/apache/flink/pull/9611#issuecomment-527836274 Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community to review your pull request. We will use this comment to track the progress of the review. ## Automated Checks Last check on commit 553d10ef60bd6fc9019d4fa54fdf202c519ef8b5 (Wed Sep 04 10:09:46 UTC 2019) **Warnings:** * No documentation files were touched! Remember to keep the Flink docs up to date! Mention the bot in a comment to re-run the automated checks. ## Review Progress * ❓ 1. The [description] looks good. * ❓ 2. There is [consensus] that the contribution should go into Flink. * ❓ 3. Needs [attention] from. * ❓ 4. The change fits into the overall [architecture]. * ❓ 5. Overall code [quality] is good. Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process. The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required Bot commands The @flinkbot bot supports the following commands: - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`) - `@flinkbot approve all` to approve all aspects - `@flinkbot approve-until architecture` to approve everything until `architecture` - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention - `@flinkbot disapprove architecture` to remove an approval you gave earlier This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (FLINK-13953) Facilitate enabling new Scheduler in MiniCluster Tests
[ https://issues.apache.org/jira/browse/FLINK-13953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Yao updated FLINK-13953: - Description: Currently, tests using the {{MiniCluster}} use the legacy scheduler by default. Once the new scheduler is implemented, we should run tests with the new scheduler enabled. However, it is not expected that all tests will pass immediately. Therefore, it should be possible to enable the new scheduler for a subset of tests. *Acceptance Criteria* * Subset of tests using {{MiniCluster}} can be run on a per-commit basis (on Travis) against new scheduler and also legacy scheduler was: Currently, tests using the {{MiniCluster}} use the legacy scheduler by default. Once the new scheduler is implemented, we should run tests with the new scheduler enabled. However, it is not expected that all tests will pass immediately. Therefore, it should be possible to enable the new scheduler for a subset of tests. *Acceptance Criteria* * Tests using {{MiniCluster}} are run on a per-commit basis (on Travis) against new scheduler and also legacy scheduler > Facilitate enabling new Scheduler in MiniCluster Tests > -- > > Key: FLINK-13953 > URL: https://issues.apache.org/jira/browse/FLINK-13953 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Coordination, Tests >Reporter: Gary Yao >Assignee: Gary Yao >Priority: Major > > Currently, tests using the {{MiniCluster}} use the legacy scheduler by > default. Once the new scheduler is implemented, we should run tests with the > new scheduler enabled. However, it is not expected that all tests will pass > immediately. Therefore, it should be possible to enable the new scheduler for > a subset of tests. > *Acceptance Criteria* > * Subset of tests using {{MiniCluster}} can be run on a per-commit basis (on > Travis) against new scheduler and also legacy scheduler -- This message was sent by Atlassian Jira (v8.3.2#803003)
[GitHub] [flink] jinglining commented on issue #9555: [FLINK-13868][REST] Job vertex add taskmanager id in rest api
jinglining commented on issue #9555: [FLINK-13868][REST] Job vertex add taskmanager id in rest api URL: https://github.com/apache/flink/pull/9555#issuecomment-527858233 > Overall this looks fine, but we're mixing 2 separate changes in a single commit (addition of new field, and deduplicating classes). I have fixed it.
[GitHub] [flink] tillrohrmann commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
tillrohrmann commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#discussion_r320708752 ## File path: flink-runtime/src/main/java/org/apache/flink/runtime/highavailability/ClientHighAvailabilityServices.java ## @@ -0,0 +1,37 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.runtime.highavailability; + +import org.apache.flink.runtime.leaderretrieval.LeaderRetrievalService; + +/** + * {@code ClientHighAvailabilityServices} provides services those are required + * in client-side. At the moment only web monitor leader retriever is required + * because all requests from client are received and propagated by web monitor. + */ +public interface ClientHighAvailabilityServices extends AutoCloseable { + + /** +* Get the leader retriever for the web monitor. +* +* @return the leader retriever for the web monitor. +*/ + LeaderRetrievalService getWebMonitorLeaderRetriever(); Review comment: Technically speaking it should be the `RestEndpoint` because `WebMonitor` denotes for me more the web UI. 
It just happens to be the case that our `WebMonitor` runs on the same netty server as the cluster `RestEndpoint`. Hence I would be in favour of calling it `getRestEndpointLeaderRetriever` or `getClusterRestEndpointLeaderRetriever`.
[GitHub] [flink] TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
TisonKun commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#discussion_r320710596 ## File path: flink-runtime/src/test/java/org/apache/flink/runtime/highavailability/HighAvailabilityServicesUtilsTest.java ## @@ -59,6 +59,22 @@ public void testCreateCustomHAServices() throws Exception { assertSame(haServices, actualHaServices); } + @Test + public void testCreateCustomClientHAServices() throws Exception { + Configuration config = new Configuration(); + + ClientHighAvailabilityServices clientHAServices = Mockito.mock(ClientHighAvailabilityServices.class); Review comment: Would it be reasonable to remove all Mockito usage in this test case in this PR?
[GitHub] [flink] tillrohrmann commented on issue #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
tillrohrmann commented on issue #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#issuecomment-527862874 Thanks @TisonKun for the quick update. I had one more comment concerning the usage of `Mockito` and the naming of the factory in the `ClientHighAvailabilityServices` which I would name `getClusterRestEndpointLeaderRetriever` or `getRestEndpointLeaderRetriever`.
[GitHub] [flink] flinkbot edited a comment on issue #9601: [FLINK-13894][web]Web Ui add log url for subtask of vertex
flinkbot edited a comment on issue #9601: [FLINK-13894][web]Web Ui add log url for subtask of vertex URL: https://github.com/apache/flink/pull/9601#issuecomment-527404637 ## CI report: * 13d895349390698404a375fb4362a41b736ab0c6 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125577621) * 5a1866be2d9765b0e941a531d5e9dda8616fecf4 : UNKNOWN
[jira] [Commented] (FLINK-13838) Support -yta(--yarnshipArchives) arguments in flink run command line
[ https://issues.apache.org/jira/browse/FLINK-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1697#comment-1697 ] Chesnay Schepler commented on FLINK-13838: -- In other words this is _not_ a yarn feature, since the "archive this thing" logic is something we implement ourselves? > Support -yta(--yarnshipArchives) arguments in flink run command line > > > Key: FLINK-13838 > URL: https://issues.apache.org/jira/browse/FLINK-13838 > Project: Flink > Issue Type: New Feature > Components: Command Line Client >Reporter: Yang Wang >Priority: Major > > Currently we could use --yarnship to transfer jars, files and directories to the > cluster and add them to the classpath. However, compressed packages are not > supported. If we have a compressed package including some config files, .so > files and jars, the --yarnshipArchives will be very useful. > > What’s the difference between -yt and -yta? > -yt [file:///tmp/a.tar.gz] The file will be transferred by Yarn and the > original compressed file is kept (not unpacked) in the workdir of the > jobmanager/taskmanager container. > -yta [file:///tmp/a.tar.gz#dict1] The file will be transferred by Yarn and > unpacked into a new directory named dict1 in the workdir. > > -yta,--yarnshipArchives Ship archives for cluster (t for transfer). Use ',' to > separate multiple files. The archives can be in the local file system or a > distributed file system. Use a URI scheme to specify which file system the file > belongs to. If the scheme is missing, the archives are looked up in the local > file system. Use '#' after the file path to specify a new name in the workdir. > (eg: -yta file:///tmp/a.tar.gz#dict1,hdfs:///$namenode_address/tmp/b.tar.gz)
[jira] [Commented] (FLINK-13909) LinkElement does not support different anchors required for localization
[ https://issues.apache.org/jira/browse/FLINK-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922273#comment-16922273 ] Chesnay Schepler commented on FLINK-13909: -- For descriptions, having yaml files for that seems like a rather complex solution to me; I was thinking more of introducing a `chineseDescription` method on the ConfigOption. > LinkElement does not support different anchors required for localization > > > Key: FLINK-13909 > URL: https://issues.apache.org/jira/browse/FLINK-13909 > Project: Flink > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.9.0, 1.10.0 >Reporter: Till Rohrmann >Priority: Major > > While addressing FLINK-13898 we realized that the {{LinkElement}} does not > support multiple anchors which are needed to support localisation. Due to the > translation into Chinese the anchors are not the same across Flink's English > and Chinese documentation. > Either we keep anchors the same in both versions or we have a way to support > multiple anchors, one for each localisation.
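The `chineseDescription` idea floated in this comment could look roughly like the builder sketch below. This is a hypothetical illustration only: the class name, builder methods, and fallback behaviour are invented for the sketch and do not reflect Flink's actual `ConfigOption` API.

```java
// Hypothetical sketch: a ConfigOption-style builder carrying a per-locale
// description directly, instead of separate yaml translation files.
public class LocalizedConfigOption {

    private final String key;
    private String description = "";
    private String chineseDescription = "";

    public LocalizedConfigOption(String key) {
        this.key = key;
    }

    public LocalizedConfigOption withDescription(String description) {
        this.description = description;
        return this;
    }

    // The method suggested in the comment, as a builder step.
    public LocalizedConfigOption withChineseDescription(String chineseDescription) {
        this.chineseDescription = chineseDescription;
        return this;
    }

    public String key() {
        return key;
    }

    // Resolve the description for a locale, falling back to English when the
    // Chinese text has not been provided.
    public String description(String locale) {
        if ("zh".equals(locale) && !chineseDescription.isEmpty()) {
            return chineseDescription;
        }
        return description;
    }

    public static void main(String[] args) {
        LocalizedConfigOption option =
            new LocalizedConfigOption("jobmanager.execution.failover-strategy")
                .withDescription("This option specifies the failover strategy.")
                .withChineseDescription("该配置项指定故障转移策略。");
        System.out.println(option.description("en"));
        System.out.println(option.description("zh"));
    }
}
```

Compared with the keyed-yaml-file approach, this keeps each translation next to the option it describes, at the cost of hard-coding the set of supported locales into the builder API.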
[GitHub] [flink] flinkbot edited a comment on issue #9607: [FLINK-13946] Remove job session related code from ExecutionEnvironment
flinkbot edited a comment on issue #9607: [FLINK-13946] Remove job session related code from ExecutionEnvironment URL: https://github.com/apache/flink/pull/9607#issuecomment-527790327 ## CI report: * 0c2c5cdcc1267a49ef8df775e0e2ee46a5249487 : PENDING [Build](https://travis-ci.com/flink-ci/flink/builds/125850583)
[GitHub] [flink] xuefuz commented on issue #9580: [FLINK-13930][hive] Support Hive version 3.1.x
xuefuz commented on issue #9580: [FLINK-13930][hive] Support Hive version 3.1.x URL: https://github.com/apache/flink/pull/9580#issuecomment-527805363 Rebasing to the latest master solved the Python build issue, @bowenli86
[jira] [Commented] (FLINK-4399) Add support for oversized messages
[ https://issues.apache.org/jira/browse/FLINK-4399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922307#comment-16922307 ] Biao Liu commented on FLINK-4399: - Ah, yes, I mean "it's not a critical issue". > Add support for oversized messages > -- > > Key: FLINK-4399 > URL: https://issues.apache.org/jira/browse/FLINK-4399 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination > Environment: FLIP-6 feature branch >Reporter: Stephan Ewen >Assignee: Biao Liu >Priority: Major > Labels: flip-6 > > Currently, messages larger than the maximum Akka Framesize cause an error > when being transported. We should add a way to pass messages that are larger > than the Framesize, as may happen for: > - {{collect()}} calls that collect large data sets (via accumulators) > - Job submissions and operator deployments where the functions closures are > large (for example because it contains large pre-loaded data) > - Function restore in cases where restored state is larger than > checkpointed state (union state) > I suggest to use the {{BlobManager}} to transfer large payload. > - On the sender side, oversized messages are stored under a transient blob > (which is deleted after first retrieval, or after a certain number of minutes) > - The sender sends a "pointer to blob message" instead. > - The receiver grabs the message from the blob upon receiving the pointer > message > The RPC Service should be optionally initializable with a "large message > handler" which is internally the {{BlobManager}}. -- This message was sent by Atlassian Jira (v8.3.2#803003)
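The pointer-message scheme described in the issue (store oversized payloads as a transient blob, send a small pointer instead, delete the blob on first retrieval) can be sketched in a few lines. This is an illustrative toy under stated assumptions: the class names and the in-memory "blob store" are invented for the sketch and are not Flink's actual `BlobManager` or RPC API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Toy model of the oversized-message handling proposed in the issue:
// payloads above the maximum frame size travel via a transient blob store,
// everything else is sent inline.
public class OversizedMessageSketch {

    static final int MAX_FRAME_SIZE = 1024;

    // Stand-in for the transient blob store; entries are removed on first read.
    static final Map<String, byte[]> blobStore = new HashMap<>();

    interface Message {}

    static class DirectMessage implements Message {
        final byte[] payload;
        DirectMessage(byte[] payload) { this.payload = payload; }
    }

    static class BlobPointerMessage implements Message {
        final String blobKey;
        BlobPointerMessage(String blobKey) { this.blobKey = blobKey; }
    }

    // Sender side: oversized payloads are stored as a blob and replaced by
    // a small pointer message that always fits into a frame.
    static Message send(byte[] payload) {
        if (payload.length <= MAX_FRAME_SIZE) {
            return new DirectMessage(payload);
        }
        String key = UUID.randomUUID().toString();
        blobStore.put(key, payload);
        return new BlobPointerMessage(key);
    }

    // Receiver side: resolve pointer messages against the blob store,
    // deleting the transient blob on first retrieval.
    static byte[] receive(Message message) {
        if (message instanceof DirectMessage) {
            return ((DirectMessage) message).payload;
        }
        return blobStore.remove(((BlobPointerMessage) message).blobKey);
    }

    public static void main(String[] args) {
        byte[] large = new byte[4096];
        Message m = send(large);
        System.out.println(m.getClass().getSimpleName());
        System.out.println(receive(m).length);
    }
}
```

A real implementation would additionally need the time-based cleanup mentioned in the issue (blobs that are never retrieved must eventually expire) and failure handling when the receiver cannot reach the blob store.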
[GitHub] [flink] JingsongLi commented on issue #9608: [FLINK-13776][table] Introduce new interfaces to BuiltInFunctionDefinition
JingsongLi commented on issue #9608: [FLINK-13776][table] Introduce new interfaces to BuiltInFunctionDefinition URL: https://github.com/apache/flink/pull/9608#issuecomment-527813238 @twalthr @dawidwys Can you take a look when you are free?
[GitHub] [flink] flinkbot edited a comment on issue #9607: [FLINK-13946] Remove job session related code from ExecutionEnvironment
flinkbot edited a comment on issue #9607: [FLINK-13946] Remove job session related code from ExecutionEnvironment URL: https://github.com/apache/flink/pull/9607#issuecomment-527790327 ## CI report: * 0c2c5cdcc1267a49ef8df775e0e2ee46a5249487 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125850583)
[jira] [Commented] (FLINK-13946) Remove deactivated JobSession-related code.
[ https://issues.apache.org/jira/browse/FLINK-13946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922332#comment-16922332 ] Kostas Kloudas commented on FLINK-13946: Thanks a lot [~Tison]! Feel free to have a look at the open PR :) > Remove deactivated JobSession-related code. > --- > > Key: FLINK-13946 > URL: https://issues.apache.org/jira/browse/FLINK-13946 > Project: Flink > Issue Type: Improvement > Components: Client / Job Submission >Affects Versions: 1.9.0 >Reporter: Kostas Kloudas >Assignee: Kostas Kloudas >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > This issue refers to removing the code related to job session as described in > [FLINK-2097|https://issues.apache.org/jira/browse/FLINK-2097]. The feature > is deactivated, as pointed by the comment > [here|https://github.com/apache/flink/blob/master/flink-java/src/main/java/org/apache/flink/api/java/ExecutionEnvironment.java#L285] > and it complicates the code paths related to job submission, namely the > lifecycle of the Remote and LocalExecutors. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[jira] [Assigned] (FLINK-13954) Clean up ExecutionEnvironment / JobSubmission code paths
[ https://issues.apache.org/jira/browse/FLINK-13954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aljoscha Krettek reassigned FLINK-13954: Assignee: Kostas Kloudas > Clean up ExecutionEnvironment / JobSubmission code paths > > > Key: FLINK-13954 > URL: https://issues.apache.org/jira/browse/FLINK-13954 > Project: Flink > Issue Type: Improvement > Components: Client / Job Submission >Affects Versions: 1.9.0 >Reporter: Kostas Kloudas >Assignee: Kostas Kloudas >Priority: Major > > This is an umbrella issue to serve as a hub for all issues related to job > submission / (stream) execution environment refactoring. > > This issue does not change any existing functionality, but it targets to > clean up / rearrange the code in the relevant components so that further > changes are easier to apply. -- This message was sent by Atlassian Jira (v8.3.2#803003)
[GitHub] [flink] zentol opened a new pull request #9610: [FLINK-13936][licensing] Update NOTICE-binary
zentol opened a new pull request #9610: [FLINK-13936][licensing] Update NOTICE-binary URL: https://github.com/apache/flink/pull/9610 Regenerate NOTICE-binary to reflect the latest state.
[GitHub] [flink] flinkbot edited a comment on issue #9565: [FLINK-12501] Use SpecificRecord.getSchema in AvroFactory
flinkbot edited a comment on issue #9565: [FLINK-12501] Use SpecificRecord.getSchema in AvroFactory URL: https://github.com/apache/flink/pull/9565#issuecomment-526522446 ## CI report: * 247e51b1cba1bed7eaa64798623865e13c2a8c8b : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/125200467) * 0533cc9722194964f630a7de8b9ffd7a2dac5809 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/125202045) * 7182265d43e18df62e81e43fe5c690453212a8ab : CANCELED [Build](https://travis-ci.com/flink-ci/flink/builds/125402684) * 697274cbc1dd1a008b5072fac794509d051b8111 : SUCCESS [Build](https://travis-ci.com/flink-ci/flink/builds/125567625) * f6e06c14234fd5f1ba65550c81cc50202d25937a : UNKNOWN
[jira] [Updated] (FLINK-13936) NOTICE-binary is outdated
[ https://issues.apache.org/jira/browse/FLINK-13936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated FLINK-13936: --- Labels: pull-request-available (was: ) > NOTICE-binary is outdated > - > > Key: FLINK-13936 > URL: https://issues.apache.org/jira/browse/FLINK-13936 > Project: Flink > Issue Type: Bug > Components: Build System >Affects Versions: 1.9.0, 1.10.0 >Reporter: Chesnay Schepler >Assignee: Chesnay Schepler >Priority: Blocker > Labels: pull-request-available > Fix For: 1.10.0, 1.9.1 > > > The NOTICE-binary wasn't updated for the click-event example, the state > processing API and changes to the table API packaging.
[GitHub] [flink] flinkbot commented on issue #9611: [FLINK-13936][licensing] Update NOTICE-binary
flinkbot commented on issue #9611: [FLINK-13936][licensing] Update NOTICE-binary URL: https://github.com/apache/flink/pull/9611#issuecomment-527838217 ## CI report: * 553d10ef60bd6fc9019d4fa54fdf202c519ef8b5 : UNKNOWN
[GitHub] [flink] flinkbot commented on issue #9610: [FLINK-13936][licensing] Update NOTICE-binary
flinkbot commented on issue #9610: [FLINK-13936][licensing] Update NOTICE-binary URL: https://github.com/apache/flink/pull/9610#issuecomment-527838177 ## CI report: * 755a869c93208b7d95a85d3fe4f0027ca55eff2e : UNKNOWN
[GitHub] [flink] flinkbot edited a comment on issue #8859: [FLINK-12905][table-planner] Enable querying CatalogViews in legacy planner
flinkbot edited a comment on issue #8859: [FLINK-12905][table-planner] Enable querying CatalogViews in legacy planner URL: https://github.com/apache/flink/pull/8859#issuecomment-518729517 ## CI report: * c2adafa7ee87b18ba6af0b5f518251150a8da386 : FAILURE [Build](https://travis-ci.com/flink-ci/flink/builds/122142106) * 18f834bbe113eb88826fc04cdc017a856e10d3d0 : UNKNOWN
[GitHub] [flink] tillrohrmann commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side
tillrohrmann commented on a change in pull request #9609: [FLINK-13750][client][coordination] Separate HA services between client-side and server-side URL: https://github.com/apache/flink/pull/9609#discussion_r320709782 ## File path: flink-runtime/src/test/java/org/apache/flink/runtime/highavailability/HighAvailabilityServicesUtilsTest.java ## @@ -59,6 +59,22 @@ public void testCreateCustomHAServices() throws Exception { assertSame(haServices, actualHaServices); } + @Test + public void testCreateCustomClientHAServices() throws Exception { + Configuration config = new Configuration(); + + ClientHighAvailabilityServices clientHAServices = Mockito.mock(ClientHighAvailabilityServices.class); Review comment: I know that this test class already uses `Mockito`. But I would like to avoid adding more Mockito mocks. Instead we could have a very simple `TestingClientHighAvailabilityServices` implementation. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
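The "very simple `TestingClientHighAvailabilityServices`" suggested in this review could be sketched as below. The interfaces here are trimmed stand-ins added only to make the sketch self-contained (Flink's real `LeaderRetrievalService` and HA services interfaces carry more methods), and the getter name follows one of the renames suggested in the same review thread (`getClusterRestEndpointLeaderRetriever`).

```java
// Sketch of the reviewer's suggestion: replace the Mockito mock with a
// simple hand-written testing implementation.

// Minimal stand-in interfaces (assumptions for self-containment).
interface LeaderRetrievalService {}

interface ClientHighAvailabilityServices extends AutoCloseable {
    LeaderRetrievalService getClusterRestEndpointLeaderRetriever();
}

class TestingClientHighAvailabilityServices implements ClientHighAvailabilityServices {

    private final LeaderRetrievalService restEndpointLeaderRetriever;

    TestingClientHighAvailabilityServices(LeaderRetrievalService restEndpointLeaderRetriever) {
        this.restEndpointLeaderRetriever = restEndpointLeaderRetriever;
    }

    @Override
    public LeaderRetrievalService getClusterRestEndpointLeaderRetriever() {
        return restEndpointLeaderRetriever;
    }

    @Override
    public void close() {
        // Nothing to release in the testing implementation.
    }

    public static void main(String[] args) throws Exception {
        LeaderRetrievalService retriever = new LeaderRetrievalService() {};
        try (ClientHighAvailabilityServices services =
                new TestingClientHighAvailabilityServices(retriever)) {
            // The test can assert on the injected retriever without Mockito.
            System.out.println(services.getClusterRestEndpointLeaderRetriever() == retriever);
        }
    }
}
```

A hand-rolled double like this keeps the test free of mocking-framework behaviour (stubbing defaults, verification order) and documents exactly which methods the code under test relies on.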
[jira] [Commented] (FLINK-13481) allow user launch job on yarn from SQL Client command line
[ https://issues.apache.org/jira/browse/FLINK-13481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922216#comment-16922216 ] Jeff Zhang commented on FLINK-13481: It is still under discussion in the community. > allow user launch job on yarn from SQL Client command line > -- > > Key: FLINK-13481 > URL: https://issues.apache.org/jira/browse/FLINK-13481 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Client >Affects Versions: 1.10.0 > Environment: Flink 1.10 > CDH 5.13.3 > > >Reporter: Hongtao Zhang >Priority: Critical > Fix For: 1.10.0 > > > The Flink SQL Client active command line doesn't load the FlinkYarnSessionCli > general options. > The general options contain the "addressOption", with which the user can specify > --jobmanager="yarn-cluster" or -m to run the SQL on a YARN cluster.
[jira] [Commented] (FLINK-13925) ClassLoader in BlobLibraryCacheManager is not using context class loader
[ https://issues.apache.org/jira/browse/FLINK-13925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922242#comment-16922242 ] Jan Lukavský commented on FLINK-13925: -- Hi [~jark], it's fine; there seems to be more discussion on how to solve this, as the proposed solution seems to have some issues. > ClassLoader in BlobLibraryCacheManager is not using context class loader > > > Key: FLINK-13925 > URL: https://issues.apache.org/jira/browse/FLINK-13925 > Project: Flink > Issue Type: Bug >Affects Versions: 1.8.1, 1.9.0 >Reporter: Jan Lukavský >Priority: Major > Labels: pull-request-available > Fix For: 1.9.1, 1.8.3 > > Time Spent: 10m > Remaining Estimate: 0h > > Use the thread's current context classloader as the parent class loader of Flink > user-code class loaders.