[jira] [Commented] (FLINK-15579) Support UpsertStreamTableSink on Blink batch mode

2020-01-15 Thread Shu Li Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17015715#comment-17015715
 ] 

Shu Li Zheng commented on FLINK-15579:
--

[~jark] Thank you for the suggestion. In other words, 
BatchExecSink.translateToPlanInternal() and StreamExecSink.translateToPlanInternal() 
should now have exactly the same logic, right?

> Support UpsertStreamTableSink on Blink batch mode
> -
>
> Key: FLINK-15579
> URL: https://issues.apache.org/jira/browse/FLINK-15579
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.9.1
>Reporter: Shu Li Zheng
>Assignee: Shu Li Zheng
>Priority: Major
>
> JDBCTableSourceSinkFactory.createStreamTableSink() creates a 
> JDBCUpsertTableSink, but BatchExecSink cannot work with 
> UpsertStreamTableSink:
> {code:scala}
>   override protected def translateToPlanInternal(
>       planner: BatchPlanner): Transformation[Any] = {
>     val resultTransformation = sink match {
>       case _: RetractStreamTableSink[T] | _: UpsertStreamTableSink[T] =>
>         throw new TableException("RetractStreamTableSink and UpsertStreamTableSink is not" +
>           " supported in Batch environment.")
> {code}
> DDL like:
> {code:sql}
> CREATE TABLE USER_RESULT(
>   NAME VARCHAR,
>   CITY VARCHAR,
>   SCORE BIGINT
> ) WITH (
>   'connector.type' = 'jdbc',
>   'connector.url' = '',
>   'connector.table' = '',
>   'connector.driver' = 'com.mysql.cj.jdbc.Driver',
>   'connector.username' = 'root',
>   'connector.password' = '',
>   'connector.write.flush.interval' = '1s')
> {code}
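
For context on why upsert semantics are well-defined on bounded input: with a finite batch, the final result of an upsert stream is simply the last value per key, so a batch translation can emit one final row per key. The following is a standalone Java sketch of that idea only; it is not Flink code, and the class and method names are invented for illustration:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class UpsertOnBoundedInput {

    // A bounded run of upsert messages: (key, value). Later entries win.
    static Map<String, Long> finalUpsertResult(List<Map.Entry<String, Long>> boundedInput) {
        // LinkedHashMap keeps first-seen key order while overwriting values,
        // which is exactly "last value per key" on a finite input.
        Map<String, Long> result = new LinkedHashMap<>();
        for (Map.Entry<String, Long> record : boundedInput) {
            result.put(record.getKey(), record.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Long>> input = List.of(
                Map.entry("alice", 10L),
                Map.entry("bob", 20L),
                Map.entry("alice", 30L)); // upsert for "alice"
        System.out.println(finalUpsertResult(input)); // {alice=30, bob=20}
    }
}
```

On unbounded input no such "final" result exists, which is why the streaming path must emit retractions or upserts continuously; on bounded input the distinction collapses.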



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dawidwys commented on issue #10829: [FLINK-14853][docs] Use higher granularity units in generated docs for Duration & MemorySize if possible

2020-01-15 Thread GitBox
dawidwys commented on issue #10829: [FLINK-14853][docs] Use higher granularity 
units in generated docs for Duration & MemorySize if possible
URL: https://github.com/apache/flink/pull/10829#issuecomment-574544636
 
 
   @xintongsong what is your comment to what @tillrohrmann said? Would you be 
ok with merging this PR first as it prints exact values and later adjusting 
your #10785 ?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] xintongsong commented on issue #10829: [FLINK-14853][docs] Use higher granularity units in generated docs for Duration & MemorySize if possible

2020-01-15 Thread GitBox
xintongsong commented on issue #10829: [FLINK-14853][docs] Use higher 
granularity units in generated docs for Duration & MemorySize if possible
URL: https://github.com/apache/flink/pull/10829#issuecomment-574548126
 
 
   @dawidwys Thanks for asking. What @tillrohrmann said sounds good to me.
   
   My only concern was that, as @tillrohrmann mentioned, it is more difficult 
to ensure MemorySize is always logged with some method other than `toString`. If 
we consider the approximate value a special format that one should 
explicitly opt into, then it makes sense to me and I have no other 
objections.
   
   +1 for merging this PR, since MemorySize prints exact values in `toString`. I'll 
adjust my #10785 accordingly.
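
To make the trade-off above concrete, here is a standalone sketch of the two rendering styles being discussed. This is not Flink's actual MemorySize API; the class, method names, and exact output format are assumptions for illustration:

```java
public class MemoryFormat {

    // Exact rendering: lossless, safe as a default toString().
    static String exact(long bytes) {
        return bytes + " bytes";
    }

    // Approximate human-readable rendering: lossy rounding, so callers
    // should opt in explicitly rather than get it from toString().
    static String approximate(long bytes) {
        String[] units = {"bytes", "kb", "mb", "gb", "tb"};
        double value = bytes;
        int unit = 0;
        while (value >= 1024 && unit < units.length - 1) {
            value /= 1024;
            unit++;
        }
        // Append the exact count so the approximate form is still auditable.
        return String.format("%.3f%s (%d bytes)", value, units[unit], bytes);
    }

    public static void main(String[] args) {
        long bytes = 100 * 1024 * 1024 + 1; // not a whole number of megabytes
        System.out.println(exact(bytes));        // lossless
        System.out.println(approximate(bytes));  // rounded, plus the exact value
    }
}
```

The point of the discussion is that logging the approximate form by default could silently hide the exact configured value, whereas keeping `toString` exact makes approximation an explicit choice.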


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10829: [FLINK-14853][docs] Use higher granularity units in generated docs for Duration & MemorySize if possible

2020-01-15 Thread GitBox
flinkbot edited a comment on issue #10829: [FLINK-14853][docs] Use higher 
granularity units in generated docs for Duration & MemorySize if possible
URL: https://github.com/apache/flink/pull/10829#issuecomment-573052559
 
 
   
   ## CI report:
   
   * c289f16e336e54931105f5c3ec143f8a9fd69021 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143899744) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4259)
 
   * 3ca46b8869aa81fcb4df59a6cd45b6d3b16d7480 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10832: [FLINK-14163][runtime]Enforce synchronous registration of Execution#producedPartitions

2020-01-15 Thread GitBox
flinkbot edited a comment on issue #10832: [FLINK-14163][runtime]Enforce 
synchronous registration of Execution#producedPartitions
URL: https://github.com/apache/flink/pull/10832#issuecomment-573276729
 
 
   
   ## CI report:
   
   * 5eb1599945f1bee35c342f762cc25684013d2d83 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143984465) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4264)
 
   * 73579bbc6b556fe42e55e21581c5994f84480843 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144033919) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4267)
 
   * e303d84258f1e63edd45867f6c4e89e181307ee5 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144299177) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4325)
 
   * 426112c59a86ec127040178efda1085231f1988f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144326556) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4334)
 
   * fe731e4a039fba606be83a9535841c157a048d58 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144458002) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4355)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] tillrohrmann commented on a change in pull request #10754: [FLINK-14091][coordination] Allow updates to connection state when ZKCheckpointIDCounter reconnects to ZK

2020-01-15 Thread GitBox
tillrohrmann commented on a change in pull request #10754: 
[FLINK-14091][coordination] Allow updates to connection state when 
ZKCheckpointIDCounter reconnects to ZK
URL: https://github.com/apache/flink/pull/10754#discussion_r366747111
 
 

 ##
 File path: flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCheckpointIDCounter.java
 ##
 @@ -77,14 +84,26 @@ public ZooKeeperCheckpointIDCounter(CuratorFramework client, String counterPath)
 		this.client = checkNotNull(client, "Curator client");
 		this.counterPath = checkNotNull(counterPath, "Counter path");
 		this.sharedCount = new SharedCount(client, counterPath, 1);
 +
 +		this.connectionStateListeners = new ArrayList<>();
 +		this.connectionStateListeners.add((ignore, newState) -> lastState = newState);
 +	}
 +
 +	@VisibleForTesting
 +	ZooKeeperCheckpointIDCounter(CuratorFramework client, String counterPath, Collection<ConnectionStateListener> listeners) {
 +		this(client, counterPath);
 +		this.connectionStateListeners.addAll(listeners);
 	}
 
 	@Override
 	public void start() throws Exception {
 		synchronized (startStopLock) {
 			if (!isStarted) {
 				sharedCount.start();
 -				client.getConnectionStateListenable().addListener(connStateListener);
 +
 +				for (ConnectionStateListener listener : connectionStateListeners) {
 +					client.getConnectionStateListenable().addListener(listener);
 Review comment:
   I also thought about this but I think the other approach is easier to 
understand.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] tillrohrmann closed pull request #10754: [FLINK-14091][coordination] Allow updates to connection state when ZKCheckpointIDCounter reconnects to ZK

2020-01-15 Thread GitBox
tillrohrmann closed pull request #10754: [FLINK-14091][coordination] Allow 
updates to connection state when ZKCheckpointIDCounter reconnects to ZK
URL: https://github.com/apache/flink/pull/10754
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (FLINK-14091) Job can not trigger checkpoint forever after zookeeper change leader

2020-01-15 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann resolved FLINK-14091.
---
Resolution: Fixed

Fixed via

master:
4b98956f968cb7abf83673e570262f439ca99fe9
25b169744d348afa9d7deac98fa7ab3592343b32
7a0fa1e09979f91f6787c63db2af8143faa8e973
7455a0946ef80ea45f0e79116f99c2812cb6aa5f

1.10.0:
7181254cb45be275039b47db14ac8ff1c030577e
7f27bb6ae139e8628230e5caaaf7b2550c2d4490
7fcda36fe58a28891c4104ec9926b1bf281e7c49
4b78a4e41a138820e0a07ccdc056729180aa7dd6

> Job can not trigger checkpoint forever after zookeeper change leader 
> -
>
> Key: FLINK-14091
> URL: https://issues.apache.org/jira/browse/FLINK-14091
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.9.0
>Reporter: Peng Wang
>Assignee: Zili Chen
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When ZooKeeper changes leader, the Curator connection state becomes SUSPENDED and the 
> JobManager cannot trigger checkpoints. But it still does not trigger checkpoints 
> after the ZooKeeper connection resumes.
> We found that the lastState in the class ZooKeeperCheckpointIDCounter never 
> changes back to normal once it falls into SUSPENDED or LOST.
> {code:java}
> /**
>  * Connection state listener. In case of {@link ConnectionState#SUSPENDED}
>  * or {@link ConnectionState#LOST} we are not guaranteed to read a current
>  * count from ZooKeeper.
>  */
> private static class SharedCountConnectionStateListener implements ConnectionStateListener {
>
>     private volatile ConnectionState lastState;
>
>     @Override
>     public void stateChanged(CuratorFramework client, ConnectionState newState) {
>         if (newState == ConnectionState.SUSPENDED || newState == ConnectionState.LOST) {
>             lastState = newState;
>         }
>     }
>
>     private ConnectionState getLastState() {
>         return lastState;
>     }
> }
> {code}
>  
> We changed the state back; after testing, this solved the problem.
>  
> {code:java}
> /**
>  * Connection state listener. In case of {@link ConnectionState#SUSPENDED}
>  * or {@link ConnectionState#LOST} we are not guaranteed to read a current
>  * count from ZooKeeper.
>  */
> private static class SharedCountConnectionStateListener implements ConnectionStateListener {
>
>     private volatile ConnectionState lastState;
>
>     @Override
>     public void stateChanged(CuratorFramework client, ConnectionState newState) {
>         if (newState == ConnectionState.SUSPENDED || newState == ConnectionState.LOST) {
>             lastState = newState;
>         } else {
>             /* if connectionState is neither SUSPENDED nor LOST, reset lastState. */
>             lastState = null;
>         }
>     }
>
>     private ConnectionState getLastState() {
>         return lastState;
>     }
> }
> {code}
>  
> log:
> 2019-09-16 13:38:38,020 INFO  org.apache.flink.shaded.zookeeper.org.apache.zookeeper.ClientCnxn - Unable to read additional data from server sessionid 0x26cff6487c2000e, likely server has closed socket, closing socket connection and attempting reconnect
> 2019-09-16 13:38:38,122 INFO  org.apache.flink.shaded.curator.org.apache.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
> 2019-09-16 13:38:38,123 WARN  org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService - Connection to ZooKeeper suspended. Can no longer retrieve the leader from ZooKeeper.
> 2019-09-16 13:38:38,126 WARN  org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService - Connection to ZooKeeper suspended. Can no longer retrieve the leader from ZooKeeper.
> 2019-09-16 13:38:38,126 WARN  org.apache.flink.runtime.jobmanager.ZooKeeperSubmittedJobGraphStore - ZooKeeper connection SUSPENDING. Changes to the submitted job graphs are not monitored (temporarily).
> 2019-09-16 13:38:38,128 WARN  org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService - Connection to ZooKeeper suspended. The contender akka.tcp://flink@node007224:19115/user/dispatcher no longer participates in the leader election.
> 2019-09-16 13:38:38,128 WARN  org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionService - Connection to ZooKeeper suspended. The contender akka.tcp://flink@node007224:19115/user/resourcemanager no longer participates in the leader election.
> 2019-09-16 13:38:38,128 WARN  org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalService - Connection to ZooKeeper suspended. Can no longer retrieve the leader from ZooKeeper.
> 2019-09-16 13:38:38,128 WARN  org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElec
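
The essence of the fix proposed above (clearing the remembered state once the connection is healthy again) can be sketched independently of Curator. In this minimal Java sketch, the nested enum merely stands in for Curator's ConnectionState; it is not the actual Flink or Curator code:

```java
public class ResettingStateListener {

    // Stand-in for org.apache.curator.framework.state.ConnectionState.
    enum ConnectionState { CONNECTED, RECONNECTED, SUSPENDED, LOST }

    private volatile ConnectionState lastState;

    // Mirrors the reporter's patched stateChanged(): remember unhealthy
    // states, and clear the memory as soon as any other state arrives.
    void stateChanged(ConnectionState newState) {
        if (newState == ConnectionState.SUSPENDED || newState == ConnectionState.LOST) {
            lastState = newState;
        } else {
            lastState = null; // connection is healthy again; counts are readable
        }
    }

    ConnectionState getLastState() {
        return lastState;
    }

    public static void main(String[] args) {
        ResettingStateListener listener = new ResettingStateListener();
        listener.stateChanged(ConnectionState.SUSPENDED);
        System.out.println(listener.getLastState()); // SUSPENDED
        listener.stateChanged(ConnectionState.RECONNECTED);
        System.out.println(listener.getLastState()); // null
    }
}
```

Without the else branch, lastState stays SUSPENDED or LOST forever after the first outage, which matches the reported symptom of checkpoints never resuming.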

[GitHub] [flink] TisonKun commented on issue #10754: [FLINK-14091][coordination] Allow updates to connection state when ZKCheckpointIDCounter reconnects to ZK

2020-01-15 Thread GitBox
TisonKun commented on issue #10754: [FLINK-14091][coordination] Allow updates 
to connection state when ZKCheckpointIDCounter reconnects to ZK
URL: https://github.com/apache/flink/pull/10754#issuecomment-574553764
 
 
   Thanks for reviewing and merging this patch!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15582) Enable batch scheduling tests in LegacySchedulerBatchSchedulingTest for DefaultScheduler as well

2020-01-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15582:
---
Labels: pull-request-available  (was: )

> Enable batch scheduling tests in LegacySchedulerBatchSchedulingTest for 
> DefaultScheduler as well
> 
>
> Key: FLINK-15582
> URL: https://issues.apache.org/jira/browse/FLINK-15582
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> {{testSchedulingOfJobWithFewerSlotsThanParallelism}} is a common case but it 
> is only tested with legacy scheduler in 
> {{LegacySchedulerBatchSchedulingTest}} at the moment.
> We should enable it for DefaultScheduler as well. 
> This also allows us to safely remove {{LegacySchedulerBatchSchedulingTest}} 
> when we are removing the LegacyScheduler and related components without 
> losing test coverage.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhuzhurk opened a new pull request #10858: [FLINK-15582][tests] Enable batch scheduling tests for both LegacyScheduler and DefaultScheduler

2020-01-15 Thread GitBox
zhuzhurk opened a new pull request #10858: [FLINK-15582][tests] Enable batch 
scheduling tests for both LegacyScheduler and DefaultScheduler
URL: https://github.com/apache/flink/pull/10858
 
 
   ## What is the purpose of the change
   
   testSchedulingOfJobWithFewerSlotsThanParallelism is a common case but it is 
only tested with legacy scheduler in LegacySchedulerBatchSchedulingTest at the 
moment.
   We should enable it for DefaultScheduler as well.
   This also allows us to safely remove LegacySchedulerBatchSchedulingTest when 
we are removing the LegacyScheduler and related components without losing test 
coverage.
   
   ## Brief change log
   
 - *Extract the batch scheduling tests from 
LegacySchedulerBatchSchedulingTest into BatchSchedulingTestBase*
 - *Adjusted LegacySchedulerBatchSchedulingTest*
 - *Added DefaultSchedulerBatchSchedulingTest*
   
   
   ## Verifying this change
   
   This change is on tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10858: [FLINK-15582][tests] Enable batch scheduling tests for both LegacyScheduler and DefaultScheduler

2020-01-15 Thread GitBox
flinkbot commented on issue #10858: [FLINK-15582][tests] Enable batch 
scheduling tests for both LegacyScheduler and DefaultScheduler
URL: https://github.com/apache/flink/pull/10858#issuecomment-574558301
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit cf716ecc2f23aa89683be8096f16725b1f1f8d26 (Wed Jan 15 
08:51:42 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10829: [FLINK-14853][docs] Use higher granularity units in generated docs for Duration & MemorySize if possible

2020-01-15 Thread GitBox
flinkbot edited a comment on issue #10829: [FLINK-14853][docs] Use higher 
granularity units in generated docs for Duration & MemorySize if possible
URL: https://github.com/apache/flink/pull/10829#issuecomment-573052559
 
 
   
   ## CI report:
   
   * c289f16e336e54931105f5c3ec143f8a9fd69021 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143899744) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4259)
 
   * 3ca46b8869aa81fcb4df59a6cd45b6d3b16d7480 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/144468485) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4357)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] kl0u opened a new pull request #10859: [FLINK-15278] Update the StreamingFileSink docs

2020-01-15 Thread GitBox
kl0u opened a new pull request #10859: [FLINK-15278] Update the 
StreamingFileSink docs
URL: https://github.com/apache/flink/pull/10859
 
 
   ## What is the purpose of the change
   
   Updates the `StreamingFileSink` documentation to reflect the current state.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sunhaibotb commented on issue #10151: [FLINK-14231] Handle the pending processing-time timers to make endInput semantics on the operator chain strict

2020-01-15 Thread GitBox
sunhaibotb commented on issue #10151: [FLINK-14231] Handle the pending 
processing-time timers to make endInput semantics on the operator chain strict
URL: https://github.com/apache/flink/pull/10151#issuecomment-574561071
 
 
   I have updated the PR code to skip putting null into the chain. Please review 
it, thanks. @rkhachatryan 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10847: [FLINK-15578][connectors/jdbc] implement exactly once JDBC sink

2020-01-15 Thread GitBox
flinkbot edited a comment on issue #10847: [FLINK-15578][connectors/jdbc] 
implement exactly once JDBC sink
URL: https://github.com/apache/flink/pull/10847#issuecomment-573933799
 
 
   
   ## CI report:
   
   * 1f19ab63df12c2a0cbc75644a34bcab26c08d7f6 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144244028) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4308)
 
   * 3e136166db3c3a2325a4014719ca011b4d162a4d Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144279590) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4318)
 
   * 3c3f7e4329383accb9b940c950321e0c65bdc0b9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15278) Update StreamingFileSink documentation

2020-01-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15278:
---
Labels: pull-request-available  (was: )

> Update StreamingFileSink documentation
> --
>
> Key: FLINK-15278
> URL: https://issues.apache.org/jira/browse/FLINK-15278
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem, Documentation
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
>
> Many times in the ML we have seen questions about the {{StreamingFileSink}} 
> that could have been answered with better documentation that includes:
> 1) shortcomings (especially in the case of S3 and also bulk formats)
> 2) file lifecycle



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #10858: [FLINK-15582][tests] Enable batch scheduling tests for both LegacyScheduler and DefaultScheduler

2020-01-15 Thread GitBox
flinkbot commented on issue #10858: [FLINK-15582][tests] Enable batch 
scheduling tests for both LegacyScheduler and DefaultScheduler
URL: https://github.com/apache/flink/pull/10858#issuecomment-574561465
 
 
   
   ## CI report:
   
   * cf716ecc2f23aa89683be8096f16725b1f1f8d26 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] kl0u commented on issue #10859: [FLINK-15278] Update the StreamingFileSink docs

2020-01-15 Thread GitBox
kl0u commented on issue #10859: [FLINK-15278] Update the StreamingFileSink docs
URL: https://github.com/apache/flink/pull/10859#issuecomment-574561566
 
 
   @gyfora could you have a look at this PR?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #10859: [FLINK-15278] Update the StreamingFileSink docs

2020-01-15 Thread GitBox
flinkbot commented on issue #10859: [FLINK-15278] Update the StreamingFileSink 
docs
URL: https://github.com/apache/flink/pull/10859#issuecomment-574562777
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 2deddd56f82d4fae825e3a5413779b3e4d71a151 (Wed Jan 15 
09:03:51 UTC 2020)
   
   **Warnings:**
* Documentation files were touched, but no `.zh.md` files: Update Chinese 
documentation or file Jira ticket.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] GJL commented on issue #10814: [FLINK-15434][Tests]Fix unstable tests in JobMasterTest

2020-01-15 Thread GitBox
GJL commented on issue #10814: [FLINK-15434][Tests]Fix unstable tests in 
JobMasterTest
URL: https://github.com/apache/flink/pull/10814#issuecomment-574563947
 
 
   Merging.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] aljoscha commented on a change in pull request #10859: [FLINK-15278] Update the StreamingFileSink docs

2020-01-15 Thread GitBox
aljoscha commented on a change in pull request #10859: [FLINK-15278] Update the 
StreamingFileSink docs
URL: https://github.com/apache/flink/pull/10859#discussion_r366760858
 
 

 ##
 File path: docs/dev/connectors/streamfile_sink.md
 ##
 @@ -361,7 +266,174 @@ input.addSink(sink)
 
 The SequenceFileWriterFactory supports additional constructor parameters to 
specify compression settings.
 
-### Important Considerations for S3
+## Bucket Assignment
+
+The bucketing logic defines how the data will be structured into 
subdirectories inside the base output directory.
+
+Both row and bulk formats (see [File Formats](#file-formats)) use the 
[DateTimeBucketAssigner]({{ site.javadocs_baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/bucketassigners/DateTimeBucketAssigner.html)
 as the default assigner.
+By default the `DateTimeBucketAssigner` creates hourly buckets based on the 
system default timezone
+with the following format: `yyyy-MM-dd--HH`. Both the date format (*i.e.* 
bucket size) and timezone can be
+configured manually.
+
+We can specify a custom [BucketAssigner]({{ site.javadocs_baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketAssigner.html)
 by calling `.withBucketAssigner(assigner)` on the format builders.
+
+Flink comes with two built-in BucketAssigners:
+
+ - [DateTimeBucketAssigner]({{ site.javadocs_baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/bucketassigners/DateTimeBucketAssigner.html)
 : Default time based assigner
+ - [BasePathBucketAssigner]({{ site.javadocs_baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/bucketassigners/BasePathBucketAssigner.html)
 : Assigner that stores all part files in the base path (single global bucket)
+
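
As a side note to the quoted docs (not part of the diff itself), the default hourly bucketing can be sketched in a few lines. The timezone is fixed to UTC here for determinism, whereas Flink's `DateTimeBucketAssigner` uses the system default timezone unless configured otherwise:

```python
from datetime import datetime, timezone

def default_bucket_id(ts_millis: int) -> str:
    # Illustrative sketch, not the Flink API: map a timestamp (epoch millis)
    # to an hourly bucket id mirroring the default `yyyy-MM-dd--HH` pattern.
    # UTC is assumed here; DateTimeBucketAssigner defaults to the system zone.
    return datetime.fromtimestamp(ts_millis / 1000, tz=timezone.utc).strftime("%Y-%m-%d--%H")
```

Two records an hour apart land in different buckets; `default_bucket_id(0)` yields `1970-01-01--00`.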
+## Rolling Policy
+
+The [RollingPolicy]({{ site.javadocs_baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/RollingPolicy.html)
 defines when a given in-progress part file will be closed and moved to the 
pending and later to finished state.
+Part files in the "finished" state are the ones that are ready for viewing and 
are guaranteed to contain valid data that will not be reverted in case of 
failure.
+The rolling policy, in combination with the checkpointing interval (pending files become finished on the next checkpoint), controls how quickly
+part files become available for downstream readers, as well as the size and number of these parts.
+
+Flink comes with two built-in RollingPolicies:
+
+ - [DefaultRollingPolicy]({{ site.javadocs_baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/rollingpolicies/DefaultRollingPolicy.html)
+ - [OnCheckpointRollingPolicy]({{ site.javadocs_baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/rollingpolicies/OnCheckpointRollingPolicy.html)
+
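
To make the role of a rolling policy concrete, here is an illustrative sketch (not Flink's `DefaultRollingPolicy` itself; the threshold values are hypothetical) of a size/time based policy that rolls a part file as soon as any limit is hit:

```python
def should_roll(part_size_bytes, ms_since_creation, ms_since_last_write,
                max_part_size=128 * 1024 * 1024,   # hypothetical size limit
                rollover_interval_ms=60_000,       # hypothetical max file age
                inactivity_interval_ms=60_000):    # hypothetical idle limit
    # Roll the in-progress part file when any one of the limits is exceeded.
    return (part_size_bytes >= max_part_size
            or ms_since_creation >= rollover_interval_ms
            or ms_since_last_write >= inactivity_interval_ms)
```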
+## Part file lifecycle
+
+In order to use the output of the `StreamingFileSink` in downstream systems, 
we need to understand the naming and lifecycle of the output files produced.
+
+Part files can be in one of three states:
+ 1. **In-progress** : The part file that is currently being written to is 
in-progress
+ 2. **Pending** : Closed (due to the specified rolling policy) in-progress 
files that are waiting to be committed
+ 3. **Finished** : On successful checkpoints pending files transition to 
"Finished"
+
+Only finished files are safe to read by downstream systems as those are 
guaranteed to not be modified later.
+
+
+ IMPORTANT: Part file indexes are strictly increasing for any given subtask (in the order they were created). However, these indexes are not always sequential. When the job restarts, the next part index for all subtasks will be `max part index + 1`,
+where `max` is computed across all subtasks.
+
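
The restart rule above can be sketched as follows (illustrative only; the index bookkeeping in Flink itself is internal to the sink):

```python
def next_part_index_after_restart(indexes_per_subtask):
    # `indexes_per_subtask` is a list of part-index lists, one per subtask.
    # After a restart, every subtask continues from max(all indexes) + 1,
    # which is why part indexes are increasing but not always sequential.
    all_indexes = [i for sub in indexes_per_subtask for i in sub]
    return max(all_indexes) + 1 if all_indexes else 0
```

For example, with subtask 0 at indexes [0, 1] and subtask 1 at [0, 3], both subtasks continue from index 4 after the restart.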
+
+Each writer subtask will have a single in-progress part file at any given time 
for every active bucket, but there can be several pending and finished files.
+
+**Part file example**
+
+To better understand the lifecycle of these files, let's look at a simple 
example with 2 sink subtasks:
+
+```
+└── 2019-08-25--12
+    ├── part-0-0.inprogress.bd053eb0-5ecf-4c85-8433-9eff486ac334
+    └── part-1-0.inprogress.ea65a428-a1d0-4a0b-bbc5-7a436a75e575
+```
+
+When the part file `part-1-0` is rolled (let's say it becomes too large), it 
becomes pending but it is not renamed. The sink then opens a new part file: 
`part-1-1`:
+
+```
+└── 2019-08-25--12
+    ├── part-0-0.inprogress.bd053eb0-5ecf-4c85-8433-9eff486ac334
+    ├── part-1-0.inprogress.ea65a428-a1d0-4a0b-bbc5-7a436a75e575
+    └── part-1-1.inprogress.bc279efe-b16f-47d8-b828-00ef6e2fbd11
+```
+
+As `part-1-0` is now pending completion, after the next successful checkpoint, 
it is finalized:
+
+```
+└── 2019-08-25--12
+    ├── part-0-0.inprogress.bd053eb0-5ecf-4c85-8433-9eff486ac334
+    ├── part-1-0
+    └── part-1-1.inprogress.bc279efe-b16f-47d8-b828-00ef6e2fbd11
+```
+
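
The finalization step shown in these listings boils down to dropping the in-progress suffix. A sketch (the actual suffix handling is internal to the sink and configurable):

```python
def finished_name(in_progress_name: str) -> str:
    # A pending file such as `part-1-0.inprogress.<uuid>` becomes visible to
    # downstream readers as plain `part-1-0` once the checkpoint completes.
    return in_progress_name.split(".inprogress.")[0]
```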
+New buckets are created as dict

[GitHub] [flink] aljoscha commented on a change in pull request #10859: [FLINK-15278] Update the StreamingFileSink docs

2020-01-15 Thread GitBox
aljoscha commented on a change in pull request #10859: [FLINK-15278] Update the 
StreamingFileSink docs
URL: https://github.com/apache/flink/pull/10859#discussion_r366761154
 
 

 ##
 File path: docs/dev/connectors/streamfile_sink.md
 ##
 @@ -381,4 +453,6 @@ aggressively and take a savepoint with some part-files 
being not fully uploaded,
 before the job is restarted. This will result in your job not being able to 
restore from that savepoint as the
 pending part-files are no longer there and Flink will fail with an exception 
as it tries to fetch them and fails.
 
+
 
 Review comment:
   ?




[GitHub] [flink] aljoscha commented on a change in pull request #10859: [FLINK-15278] Update the StreamingFileSink docs

2020-01-15 Thread GitBox
aljoscha commented on a change in pull request #10859: [FLINK-15278] Update the 
StreamingFileSink docs
URL: https://github.com/apache/flink/pull/10859#discussion_r366760554
 
 

 ##
 File path: docs/dev/connectors/streamfile_sink.md
 ##
 @@ -361,7 +266,174 @@ input.addSink(sink)
 
 The SequenceFileWriterFactory supports additional constructor parameters to 
specify compression settings.
 
-### Important Considerations for S3
+## Bucket Assignment
+
+The bucketing logic defines how the data will be structured into 
subdirectories inside the base output directory.
+
+Both row and bulk formats (see [File Formats](#file-formats)) use the 
[DateTimeBucketAssigner]({{ site.javadocs_baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/bucketassigners/DateTimeBucketAssigner.html)
 as the default assigner.
+By default the `DateTimeBucketAssigner` creates hourly buckets based on the 
system default timezone
+with the following format: `yyyy-MM-dd--HH`. Both the date format (*i.e.* 
bucket size) and timezone can be
+configured manually.
+
+We can specify a custom [BucketAssigner]({{ site.javadocs_baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/BucketAssigner.html)
 by calling `.withBucketAssigner(assigner)` on the format builders.
+
+Flink comes with two built-in BucketAssigners:
+
+ - [DateTimeBucketAssigner]({{ site.javadocs_baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/bucketassigners/DateTimeBucketAssigner.html)
 : Default time based assigner
+ - [BasePathBucketAssigner]({{ site.javadocs_baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/bucketassigners/BasePathBucketAssigner.html)
 : Assigner that stores all part files in the base path (single global bucket)
+
+## Rolling Policy
+
+The [RollingPolicy]({{ site.javadocs_baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/RollingPolicy.html)
 defines when a given in-progress part file will be closed and moved to the 
pending and later to finished state.
+Part files in the "finished" state are the ones that are ready for viewing and 
are guaranteed to contain valid data that will not be reverted in case of 
failure.
+The rolling policy, in combination with the checkpointing interval (pending files become finished on the next checkpoint), controls how quickly
+part files become available for downstream readers, as well as the size and number of these parts.
+
+Flink comes with two built-in RollingPolicies:
+
+ - [DefaultRollingPolicy]({{ site.javadocs_baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/rollingpolicies/DefaultRollingPolicy.html)
+ - [OnCheckpointRollingPolicy]({{ site.javadocs_baseurl 
}}/api/java/org/apache/flink/streaming/api/functions/sink/filesystem/rollingpolicies/OnCheckpointRollingPolicy.html)
+
+## Part file lifecycle
+
+In order to use the output of the `StreamingFileSink` in downstream systems, 
we need to understand the naming and lifecycle of the output files produced.
+
+Part files can be in one of three states:
+ 1. **In-progress** : The part file that is currently being written to is 
in-progress
+ 2. **Pending** : Closed (due to the specified rolling policy) in-progress 
files that are waiting to be committed
+ 3. **Finished** : On successful checkpoints pending files transition to 
"Finished"
+
+Only finished files are safe to read by downstream systems as those are 
guaranteed to not be modified later.
+
+
+ IMPORTANT: Part file indexes are strictly increasing for any given subtask (in the order they were created). However, these indexes are not always sequential. When the job restarts, the next part index for all subtasks will be `max part index + 1`,
+where `max` is computed across all subtasks.
+
+
+Each writer subtask will have a single in-progress part file at any given time 
for every active bucket, but there can be several pending and finished files.
+
+**Part file example**
+
+To better understand the lifecycle of these files, let's look at a simple 
example with 2 sink subtasks:
+
+```
+└── 2019-08-25--12
+    ├── part-0-0.inprogress.bd053eb0-5ecf-4c85-8433-9eff486ac334
+    └── part-1-0.inprogress.ea65a428-a1d0-4a0b-bbc5-7a436a75e575
+```
+
+When the part file `part-1-0` is rolled (let's say it becomes too large), it 
becomes pending but it is not renamed. The sink then opens a new part file: 
`part-1-1`:
+
+```
+└── 2019-08-25--12
+    ├── part-0-0.inprogress.bd053eb0-5ecf-4c85-8433-9eff486ac334
+    ├── part-1-0.inprogress.ea65a428-a1d0-4a0b-bbc5-7a436a75e575
+    └── part-1-1.inprogress.bc279efe-b16f-47d8-b828-00ef6e2fbd11
+```
+
+As `part-1-0` is now pending completion, after the next successful checkpoint, 
it is finalized:
+
+```
+└── 2019-08-25--12
+    ├── part-0-0.inprogress.bd053eb0-5ecf-4c85-8433-9eff486ac334
+    ├── part-1-0
+    └── part-1-1.inprogress.bc279efe-b16f-47d8-b828-00ef6e2fbd11
+```
+
+New buckets are created as dict

[GitHub] [flink] GJL closed pull request #10814: [FLINK-15434][Tests]Fix unstable tests in JobMasterTest

2020-01-15 Thread GitBox
GJL closed pull request #10814: [FLINK-15434][Tests]Fix unstable tests in 
JobMasterTest
URL: https://github.com/apache/flink/pull/10814
 
 
   




[jira] [Closed] (FLINK-15434) testResourceManagerConnectionAfterRegainingLeadership test fail when run azure

2020-01-15 Thread Gary Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao closed FLINK-15434.

Resolution: Fixed

1.10: efe266fa9143446a33072a3cdece52c64b83f185
master: e1ad65b3a511fb81425ab5a9364db5661eac1a84

> testResourceManagerConnectionAfterRegainingLeadership test fail when run  
> azure
> ---
>
> Key: FLINK-15434
> URL: https://issues.apache.org/jira/browse/FLINK-15434
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.9.1
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Error message
> Expected: 
>  but: was <57f975298d116a9d7623f5a844ce6502>
>  
> Stack trace
> java.lang.AssertionError: Expected:  but: 
> was <57f975298d116a9d7623f5a844ce6502> at 
> org.apache.flink.runtime.jobmaster.JobMasterTest.testResourceManagerConnectionAfterRegainingLeadership(JobMasterTest.java:1033)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] kl0u commented on a change in pull request #10859: [FLINK-15278] Update the StreamingFileSink docs

2020-01-15 Thread GitBox
kl0u commented on a change in pull request #10859: [FLINK-15278] Update the 
StreamingFileSink docs
URL: https://github.com/apache/flink/pull/10859#discussion_r366765926
 
 

 ##
 File path: docs/dev/connectors/streamfile_sink.md
 ##
 @@ -381,4 +453,6 @@ aggressively and take a savepoint with some part-files 
being not fully uploaded,
 before the job is restarted. This will result in your job not being able to 
restore from that savepoint as the
 pending part-files are no longer there and Flink will fail with an exception 
as it tries to fetch them and fails.
 
+
 
 Review comment:
   A forgotten "note to self" about what to include. Removed.




[GitHub] [flink] kl0u commented on issue #10859: [FLINK-15278] Update the StreamingFileSink docs

2020-01-15 Thread GitBox
kl0u commented on issue #10859: [FLINK-15278] Update the StreamingFileSink docs
URL: https://github.com/apache/flink/pull/10859#issuecomment-574568852
 
 
   I integrated your comments @aljoscha . Let me know what you think.




[jira] [Commented] (FLINK-5601) Window operator does not checkpoint watermarks

2020-01-15 Thread Jaryzhen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17015751#comment-17015751
 ] 

Jaryzhen commented on FLINK-5601:
-

hi [~wind_ljy] Any new progress on this issue? I have the same problem in our 
production environment.

> Window operator does not checkpoint watermarks
> --
>
> Key: FLINK-5601
> URL: https://issues.apache.org/jira/browse/FLINK-5601
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Affects Versions: 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.9.0
>Reporter: Ufuk Celebi
>Assignee: Jiayi Liao
>Priority: Critical
>  Labels: pull-request-available
>
> During release testing [~stefanrichte...@gmail.com] and I noticed that 
> watermarks are not checkpointed in the window operator.
> This can lead to non determinism when restoring checkpoints. I was running an 
> adjusted {{SessionWindowITCase}} via Kafka for testing migration and 
> rescaling and ran into failures, because the data generator required 
> deterministic behaviour.
> What happened was that on restore it could happen that late elements were not 
> dropped, because the watermarks needed to be re-established after restore 
> first.
> [~aljoscha] Do you know whether there is a special reason for explicitly not 
> checkpointing watermarks?





[jira] [Comment Edited] (FLINK-15145) Tune default values for FLIP-49 TM memory configurations with real production jobs.

2020-01-15 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17011632#comment-17011632
 ] 

Xintong Song edited comment on FLINK-15145 at 1/15/20 9:32 AM:
---

I have come up with a [tuning 
report|https://docs.google.com/document/d/1-LravhQYUIkXb7rh0XnBB78vSvhp3ecLSAgsiabfVkk/edit?usp=sharing].

In summary, I would suggest the following changes.
- Change default managed memory fraction from 0.4 to 0.3.
- Change default JVM metaspace size from 128MB to 64MB.
- Change default JVM overhead min size from 128MB to 192MB.


was (Author: xintongsong):
I have come up with a [tuning 
report|https://docs.google.com/document/d/1-LravhQYUIkXb7rh0XnBB78vSvhp3ecLSAgsiabfVkk/edit?usp=sharing].

In summary, I would suggest the following changes.
- Change default managed memory fraction from 0.4 to 0.3.
- Change default JVM metaspace size from 128MB to 64MB.
- Change default JVM overhead min size from 128MB to 196MB.

> Tune default values for FLIP-49 TM memory configurations with real production 
> jobs.
> ---
>
> Key: FLINK-15145
> URL: https://issues.apache.org/jira/browse/FLINK-15145
> Project: Flink
>  Issue Type: Task
>  Components: Runtime / Configuration
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Blocker
> Fix For: 1.10.0
>
>
> In release 1.10, with FLIP-49, we introduced significant changes to the 
> TaskExecutor memory model and its related configuration options / logic.
> Since the model and configuration logics are changed, it is reasonable that 
> we also change the default configuration values. Currently, the default 
> values are set with the gut feelings and experiences from e2e tests. It would 
> be good that we try and tune the configurations with some real production 
> jobs, of various scales if possible, before exposing the configurations in 
> the release.





[GitHub] [flink] flinkbot edited a comment on issue #10829: [FLINK-14853][docs] Use higher granularity units in generated docs for Duration & MemorySize if possible

2020-01-15 Thread GitBox
flinkbot edited a comment on issue #10829: [FLINK-14853][docs] Use higher 
granularity units in generated docs for Duration & MemorySize if possible
URL: https://github.com/apache/flink/pull/10829#issuecomment-573052559
 
 
   
   ## CI report:
   
   * c289f16e336e54931105f5c3ec143f8a9fd69021 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143899744) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4259)
 
   * 3ca46b8869aa81fcb4df59a6cd45b6d3b16d7480 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144468485) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4357)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10847: [FLINK-15578][connectors/jdbc] implement exactly once JDBC sink

2020-01-15 Thread GitBox
flinkbot edited a comment on issue #10847: [FLINK-15578][connectors/jdbc] 
implement exactly once JDBC sink
URL: https://github.com/apache/flink/pull/10847#issuecomment-573933799
 
 
   
   ## CI report:
   
   * 1f19ab63df12c2a0cbc75644a34bcab26c08d7f6 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144244028) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4308)
 
   * 3e136166db3c3a2325a4014719ca011b4d162a4d Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144279590) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4318)
 
   * 3c3f7e4329383accb9b940c950321e0c65bdc0b9 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/144472473) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4358)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10845: [FLINK-15355][plugins]Classloader restricts access to parent to whitelist.

2020-01-15 Thread GitBox
flinkbot edited a comment on issue #10845: [FLINK-15355][plugins]Classloader 
restricts access to parent to whitelist.
URL: https://github.com/apache/flink/pull/10845#issuecomment-573736294
 
 
   
   ## CI report:
   
   * 0942cb8b913ba50ecf8d7ca28832c4c92bf78e6c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144178023) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4303)
 
   * ce62e6539d1394a5d27d8ef51db010104852e433 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144207965) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4304)
 
   * 76bd16a406b4bbbee5d4189b66f1fdd36f98798f Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144384071) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4339)
 
   * 726f81492a05cc47657a9c30caf2e397e8c1bd02 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144404404) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4343)
 
   * 18934ef582e5ba278289541d8602b081c4f63367 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/14448) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4345)
 
   * ebd3076a0b24751eca2dce1d3faf646b56fe6426 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10858: [FLINK-15582][tests] Enable batch scheduling tests for both LegacyScheduler and DefaultScheduler

2020-01-15 Thread GitBox
flinkbot edited a comment on issue #10858: [FLINK-15582][tests] Enable batch 
scheduling tests for both LegacyScheduler and DefaultScheduler
URL: https://github.com/apache/flink/pull/10858#issuecomment-574561465
 
 
   
   ## CI report:
   
   * cf716ecc2f23aa89683be8096f16725b1f1f8d26 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/144472558) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4359)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #10859: [FLINK-15278] Update the StreamingFileSink docs

2020-01-15 Thread GitBox
flinkbot commented on issue #10859: [FLINK-15278] Update the StreamingFileSink 
docs
URL: https://github.com/apache/flink/pull/10859#issuecomment-574575929
 
 
   
   ## CI report:
   
   * 8da4414253fa8283acdd509d6be5c26903537ee4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] xintongsong opened a new pull request #10860: [FLINK-15145][config] Change TM memory configuration default values for FLIP-49.

2020-01-15 Thread GitBox
xintongsong opened a new pull request #10860: [FLINK-15145][config] Change TM 
memory configuration default values for FLIP-49.
URL: https://github.com/apache/flink/pull/10860
 
 
   ## What is the purpose of the change
   
   This PR updates FLIP-49 TM memory sizes configuration default values 
according to the outcome of tuning with real jobs.
   
   ## Brief change log
   
   - JVM overhead min: 128MB -> 192MB
   - JVM metaspace: 128MB -> 96MB
   - Total process size (in default flink-conf.yaml): 1024MB -> 1568MB
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   




[jira] [Updated] (FLINK-15145) Tune default values for FLIP-49 TM memory configurations with real production jobs.

2020-01-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15145:
---
Labels: pull-request-available  (was: )

> Tune default values for FLIP-49 TM memory configurations with real production 
> jobs.
> ---
>
> Key: FLINK-15145
> URL: https://issues.apache.org/jira/browse/FLINK-15145
> Project: Flink
>  Issue Type: Task
>  Components: Runtime / Configuration
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> In release 1.10, with FLIP-49, we introduced significant changes to the 
> TaskExecutor memory model and its related configuration options / logic.
> Since the model and configuration logics are changed, it is reasonable that 
> we also change the default configuration values. Currently, the default 
> values are set with the gut feelings and experiences from e2e tests. It would 
> be good that we try and tune the configurations with some real production 
> jobs, of various scales if possible, before exposing the configurations in 
> the release.





[GitHub] [flink] kl0u edited a comment on issue #10859: [FLINK-15278] Update the StreamingFileSink docs

2020-01-15 Thread GitBox
kl0u edited a comment on issue #10859: [FLINK-15278] Update the 
StreamingFileSink docs
URL: https://github.com/apache/flink/pull/10859#issuecomment-574568852
 
 
   I integrated your comments @aljoscha . I will merge


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-15597) Relax sanity check of JVM memory overhead to be within its min/max

2020-01-15 Thread Andrey Zagrebin (Jira)
Andrey Zagrebin created FLINK-15597:
---

 Summary: Relax sanity check of JVM memory overhead to be within 
its min/max
 Key: FLINK-15597
 URL: https://issues.apache.org/jira/browse/FLINK-15597
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Configuration, Runtime / Task
Reporter: Andrey Zagrebin
Assignee: Xintong Song
 Fix For: 1.10.0


When the explicitly configured process and Flink memory sizes are verified against 
the JVM metaspace and overhead, the JVM overhead does not have to be the exact 
fraction.
It only needs to fall within its min/max range, similar to how the 
network/shuffle memory check works after FLINK-15300.
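
The relaxed check proposed here can be sketched as a clamp. This is illustrative only: the fraction and min/max values below are hypothetical examples, not the actual configured defaults.

```python
def jvm_overhead_bytes(process_bytes,
                       fraction=0.1,                   # illustrative fraction
                       min_bytes=192 * 1024 * 1024,    # illustrative min
                       max_bytes=1024 * 1024 * 1024):  # illustrative max
    # Derive overhead as a fraction of the total process memory, then only
    # require it to land inside the [min, max] range rather than insisting
    # on the exact fraction.
    return min(max(int(process_bytes * fraction), min_bytes), max_bytes)
```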





[GitHub] [flink] kl0u closed pull request #10859: [FLINK-15278] Update the StreamingFileSink docs

2020-01-15 Thread GitBox
kl0u closed pull request #10859: [FLINK-15278] Update the StreamingFileSink docs
URL: https://github.com/apache/flink/pull/10859
 
 
   




[GitHub] [flink] kl0u commented on issue #10859: [FLINK-15278] Update the StreamingFileSink docs

2020-01-15 Thread GitBox
kl0u commented on issue #10859: [FLINK-15278] Update the StreamingFileSink docs
URL: https://github.com/apache/flink/pull/10859#issuecomment-574576922
 
 
   Merged.




[GitHub] [flink] flinkbot commented on issue #10860: [FLINK-15145][config] Change TM memory configuration default values for FLIP-49.

2020-01-15 Thread GitBox
flinkbot commented on issue #10860: [FLINK-15145][config] Change TM memory 
configuration default values for FLIP-49.
URL: https://github.com/apache/flink/pull/10860#issuecomment-574577044
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 5f9cd2224ef458e5068f3179aa5be025848f8723 (Wed Jan 15 
09:40:04 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-15278) Update StreamingFileSink documentation

2020-01-15 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-15278:
---
Affects Version/s: 1.10.0

> Update StreamingFileSink documentation
> --
>
> Key: FLINK-15278
> URL: https://issues.apache.org/jira/browse/FLINK-15278
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem, Documentation
>Affects Versions: 1.10.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Many times in the ML we have seen questions about the {{StreamingFileSink}} 
> that could have been answered with better documentation that includes:
> 1) shortcomings (especially in the case of S3 and also bulk formats)
> 2) file lifecycle



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15278) Update StreamingFileSink documentation

2020-01-15 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-15278:
---
Fix Version/s: 1.10.0

> Update StreamingFileSink documentation
> --
>
> Key: FLINK-15278
> URL: https://issues.apache.org/jira/browse/FLINK-15278
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem, Documentation
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Many times in the ML we have seen questions about the {{StreamingFileSink}} 
> that could have been answered with better documentation that includes:
> 1) shortcomings (especially in the case of S3 and also bulk formats)
> 2) file lifecycle





[jira] [Closed] (FLINK-15278) Update StreamingFileSink documentation

2020-01-15 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas closed FLINK-15278.
--
Resolution: Resolved

Merged on master with cec75ef072fb8498725ef1e5a55be6838af40a42
and on release-1.10 with 1ec6f99a21245704a03ef519f1fcf19cdb030a09

> Update StreamingFileSink documentation
> --
>
> Key: FLINK-15278
> URL: https://issues.apache.org/jira/browse/FLINK-15278
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem, Documentation
>Affects Versions: 1.10.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Many times in the ML we have seen questions about the {{StreamingFileSink}} 
> that could have been answered with better documentation that includes:
> 1) shortcomings (especially in the case of S3 and also bulk formats)
> 2) file lifecycle





[GitHub] [flink] GJL commented on issue #10842: [FLINK-15489][web]: add cache control no-cache to log api

2020-01-15 Thread GitBox
GJL commented on issue #10842: [FLINK-15489][web]: add cache control no-cache 
to log api
URL: https://github.com/apache/flink/pull/10842#issuecomment-574578755
 
 
   I can confirm that this change works.




[GitHub] [flink] GJL commented on issue #10842: [FLINK-15489][web]: add cache control no-cache to log api

2020-01-15 Thread GitBox
GJL commented on issue #10842: [FLINK-15489][web]: add cache control no-cache 
to log api
URL: https://github.com/apache/flink/pull/10842#issuecomment-574580660
 
 
   I should add that for the TM logs caching was not an issue before. 
`AbstractTaskManagerFileHandler` does not seem to set the Cache-Control header.




[GitHub] [flink] flinkbot edited a comment on issue #10151: [FLINK-14231] Handle the pending processing-time timers to make endInput semantics on the operator chain strict

2020-01-15 Thread GitBox
flinkbot edited a comment on issue #10151: [FLINK-14231] Handle the pending 
processing-time timers to make endInput semantics on the operator chain strict
URL: https://github.com/apache/flink/pull/10151#issuecomment-552442468
 
 
   
   ## CI report:
   
   * c6d7f5e864076448dca590035a6a590dc5e25c44 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/135928709) 
   * 682da0aec5dee14c09583468d15115e2a512c827 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/140139283) 
   * 9c66fbe1e4d81c3656eba38d56d39dfe0c065f4f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142108012) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3864)
 
   * 1b23c2232e9717218a7c61c930c481cbcf2e6f2e Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142214794) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3888)
 
   * 063b5e87dcef4a363f01a48e4af4fb9d3670429f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142218414) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3890)
 
   * d5dfd65a163634584e8eaeee452d5454b2d4fe45 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143576795) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4193)
 
   * d1f5b89012dc266fc0c664085c1a2aef0c8b95ec Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143669093) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4204)
 
   * f61c52bcb1420572c2ad94e3d2f1caafbf7f6081 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143708993) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4215)
 
   * 9ea2c427c8f7046d78c122eea8f2d0c10c200224 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] GJL edited a comment on issue #10842: [FLINK-15489][web]: add cache control no-cache to log api

2020-01-15 Thread GitBox
GJL edited a comment on issue #10842: [FLINK-15489][web]: add cache control 
no-cache to log api
URL: https://github.com/apache/flink/pull/10842#issuecomment-574580660
 
 
   I should add that for the TM logs caching was not an issue before. 
`AbstractTaskManagerFileHandler` does not seem to set the Cache-Control header. 
I think it doesn't hurt to set `Cache-Control: no-cache`, however.




[GitHub] [flink] GJL edited a comment on issue #10842: [FLINK-15489][web]: add cache control no-cache to log api

2020-01-15 Thread GitBox
GJL edited a comment on issue #10842: [FLINK-15489][web]: add cache control 
no-cache to log api
URL: https://github.com/apache/flink/pull/10842#issuecomment-574580660
 
 
   I should add that for the TM logs caching was not an issue before. 
`AbstractTaskManagerFileHandler` does not seem to set the Cache-Control header. 
I think it doesn't hurt to set `Cache-Control: no-cache` for both cases, 
however.




[GitHub] [flink] GJL edited a comment on issue #10842: [FLINK-15489][web]: add cache control no-cache to log api

2020-01-15 Thread GitBox
GJL edited a comment on issue #10842: [FLINK-15489][web]: add cache control 
no-cache to log api
URL: https://github.com/apache/flink/pull/10842#issuecomment-574580660
 
 
   I should add that for the TM logs caching was not an issue before. 
`AbstractTaskManagerFileHandler` does not seem to set the Cache-Control header. 
I think it doesn't hurt to set `Cache-Control: no-cache` for both cases, 
however.
   
   @vthinkxie Should we also backport this to 1.9?
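To illustrate the fix being discussed: the handler only needs to attach a `Cache-Control: no-cache` header to its responses so browsers revalidate instead of serving stale logs. A minimal, hypothetical sketch — a plain `Map` stands in for the real Netty response headers, and `buildLogResponseHeaders` is an illustrative name, not Flink API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CacheControlSketch {
    // Hypothetical helper: builds the headers a log-file handler would attach.
    static Map<String, String> buildLogResponseHeaders(String contentType) {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("Content-Type", contentType);
        // Force clients to revalidate so updated logs are never served from cache.
        headers.put("Cache-Control", "no-cache");
        return headers;
    }

    public static void main(String[] args) {
        Map<String, String> h = buildLogResponseHeaders("text/plain");
        System.out.println(h.get("Cache-Control"));
    }
}
```

In the actual handlers the same directive would be set on the HTTP response object rather than a map, but the effect is the same: `no-cache` tells the browser it may cache the entry yet must revalidate it with the server before reuse.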




[GitHub] [flink] yanghua opened a new pull request #10861: [FLINK-15558][Connector] Bump Elasticsearch version from 7.3.2 to 7.5.1 for es7 connector

2020-01-15 Thread GitBox
yanghua opened a new pull request #10861: [FLINK-15558][Connector] Bump 
Elasticsearch version from 7.3.2 to 7.5.1 for es7 connector
URL: https://github.com/apache/flink/pull/10861
 
 
   
   
   ## What is the purpose of the change
   
   *This pull request bumps Elasticsearch version from 7.3.2 to 7.5.1 for es7 
connector*
   
   ## Brief change log
   
 - *Bump Elasticsearch version from 7.3.2 to 7.5.1 for es7 connector*
 - *Update NOTICE FILE*
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (**yes** / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ **not documented**)
   




[jira] [Updated] (FLINK-15558) Bump Elasticsearch version from 7.3.2 to 7.5.1 for es7 connector

2020-01-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15558:
---
Labels: pull-request-available  (was: )

> Bump Elasticsearch version from 7.3.2 to 7.5.1 for es7 connector
> 
>
> Key: FLINK-15558
> URL: https://issues.apache.org/jira/browse/FLINK-15558
> Project: Flink
>  Issue Type: Wish
>  Components: Connectors / ElasticSearch
>Reporter: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> It would be better to track the newest ES 7.x client version just like we 
> have done for Kafka universal connector.
> Currently, the ES7 connector track version 7.3.2 and the latest ES 7.x 
> version is 7.5.1. We can upgrade it.





[GitHub] [flink] flinkbot commented on issue #10861: [FLINK-15558][Connector] Bump Elasticsearch version from 7.3.2 to 7.5.1 for es7 connector

2020-01-15 Thread GitBox
flinkbot commented on issue #10861: [FLINK-15558][Connector] Bump Elasticsearch 
version from 7.3.2 to 7.5.1 for es7 connector
URL: https://github.com/apache/flink/pull/10861#issuecomment-574583824
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit f4506ad0e9302a279940670fa90c305c240b3c12 (Wed Jan 15 
09:56:26 UTC 2020)
   
   **Warnings:**
* **1 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-15558).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Closed] (FLINK-15530) Replace process memory with flink memory for TMs in default flink-conf.yaml

2020-01-15 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song closed FLINK-15530.

Resolution: Won't Do

> Replace process memory with flink memory for TMs in default flink-conf.yaml
> ---
>
> Key: FLINK-15530
> URL: https://issues.apache.org/jira/browse/FLINK-15530
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The issue is based on the discussion outcome on [Dev 
> ML|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Some-feedback-after-trying-out-the-new-FLIP-49-memory-configurations-td36129.html].





[jira] [Commented] (FLINK-15564) YarnClusterDescriptorTest failed to validate the original intended behavior

2020-01-15 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17015804#comment-17015804
 ] 

Xintong Song commented on FLINK-15564:
--

According to the discussion in this [ML 
thread|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Some-feedback-after-trying-out-the-new-FLIP-49-memory-configurations-tc36129.html],
 we decided not to do this change.

> YarnClusterDescriptorTest failed to validate the original intended behavior
> ---
>
> Key: FLINK-15564
> URL: https://issues.apache.org/jira/browse/FLINK-15564
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Major
>  Labels: pull-request-available, testability
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following test cases of {{YarnClusterDescriptorTest}} have failed to 
> validate the original intended behavior and are temporarily skipped by 
> PR#10834.
>  - {{testFailIfTaskSlotsHigherThanMaxVcores}}
>  - {{testConfigOverwrite}}
> The original purpose of these two test cases was to verify the validation 
> logic against yarn max allocation vcores (in 
> {{5836f7eddb4849b95d4860cf20045bc61d061918}}).
> These two cases should have failed when we changed the validation logic to 
> get yarn max allocation vcores from yarnClient instead of configuration (in 
> {{e959e6d0cd42f0c5b21c0f03ce547f2025ac58d5}}), because no yarn cluster (not 
> even a {{MiniYARNCluster}}) is started in these cases, so 
> {{yarnClient#getNodeReports}} will never return.
> The cases have not failed because another {{IllegalConfigurationException}} 
> was thrown in {{validateClusterSpecification}} due to a memory validation 
> failure. That failure was by design; to verify the original purpose, these 
> two test cases should have been updated with reasonable memory sizes, which 
> was unfortunately overlooked.
> The problem could be fixed with the following changes:
> - Update the memory setups for the test cases, to pass the memory validation 
> and thus validate the original intended behavior.
> - Extract the logic of getting yarn max allocation vcores into a separate 
> method, and override it in the test cases to provide a constant max vcores.
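The second proposed change is a classic "extract and override" testing seam: move the environment-dependent lookup into its own method so a test subclass can replace it with a constant. A hypothetical sketch — class and method names here are illustrative, not the actual Flink code:

```java
public class VcoresSeamSketch {
    static class ClusterDescriptor {
        // Seam: in production this would query the YARN client for the
        // cluster's max allocation vcores; unit tests have no cluster.
        int getYarnMaxAllocationVcores() {
            throw new IllegalStateException("no YARN cluster available in unit tests");
        }

        // Validation logic under test: reject slot counts above the max vcores.
        void validateSlots(int slots) {
            if (slots > getYarnMaxAllocationVcores()) {
                throw new IllegalArgumentException("slots exceed max vcores");
            }
        }
    }

    // Test subclass overrides the seam with a constant, so the validation
    // logic itself can be exercised without a running cluster.
    static class TestDescriptor extends ClusterDescriptor {
        @Override
        int getYarnMaxAllocationVcores() {
            return 4;
        }
    }

    public static void main(String[] args) {
        ClusterDescriptor d = new TestDescriptor();
        boolean rejected = false;
        try {
            d.validateSlots(8); // above the max of 4: should be rejected
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        d.validateSlots(2); // within the limit: passes
        System.out.println(rejected);
    }
}
```

With such a seam in place, the two skipped test cases could set reasonable memory sizes (to pass the memory validation) and still exercise the vcores check deterministically.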





[jira] [Commented] (FLINK-15530) Replace process memory with flink memory for TMs in default flink-conf.yaml

2020-01-15 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17015805#comment-17015805
 ] 

Xintong Song commented on FLINK-15530:
--

According to the discussion in this [ML 
thread|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Some-feedback-after-trying-out-the-new-FLIP-49-memory-configurations-tc36129.html],
 we decided not to do this change.

> Replace process memory with flink memory for TMs in default flink-conf.yaml
> ---
>
> Key: FLINK-15530
> URL: https://issues.apache.org/jira/browse/FLINK-15530
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The issue is based on the discussion outcome on [Dev 
> ML|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Some-feedback-after-trying-out-the-new-FLIP-49-memory-configurations-td36129.html].





[GitHub] [flink] xintongsong closed pull request #10834: [FLINK-15530][dist] Replace process memory with flink memory for TMs in default flink-conf.yaml

2020-01-15 Thread GitBox
xintongsong closed pull request #10834: [FLINK-15530][dist] Replace process 
memory with flink memory for TMs in default flink-conf.yaml
URL: https://github.com/apache/flink/pull/10834
 
 
   




[GitHub] [flink] xintongsong commented on issue #10834: [FLINK-15530][dist] Replace process memory with flink memory for TMs in default flink-conf.yaml

2020-01-15 Thread GitBox
xintongsong commented on issue #10834: [FLINK-15530][dist] Replace process 
memory with flink memory for TMs in default flink-conf.yaml
URL: https://github.com/apache/flink/pull/10834#issuecomment-574587558
 
 
   According to the discussion in the [ML 
thread](http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Some-feedback-after-trying-out-the-new-FLIP-49-memory-configurations-tc36129.html),
 we decided not to do this change.




[jira] [Commented] (FLINK-15145) Tune default values for FLIP-49 TM memory configurations with real production jobs.

2020-01-15 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17015801#comment-17015801
 ] 

Xintong Song commented on FLINK-15145:
--

According to the conclusion of the [ML 
discussion|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Discuss-Tuning-FLIP-49-configuration-default-values-td36528.html],
 we decided to make the following changes.
* Change default value of "taskmanager.memory.jvm-overhead.min" to 192MB.
* Change default value of "taskmanager.memory.jvm-metaspace.size" to 96MB.
* Change the value of "taskmanager.memory.process.size" in the default 
"flink-conf.yaml" to 1568MB.
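As a sketch, the agreed values above would look roughly like the following (option names per FLIP-49; according to the list, only the process size is written into the shipped `flink-conf.yaml`, while the other two become code-level defaults and normally need not appear in the file at all):

```yaml
# Value written into the default flink-conf.yaml:
taskmanager.memory.process.size: 1568m

# New code-level defaults, shown here only for illustration:
# taskmanager.memory.jvm-overhead.min: 192m
# taskmanager.memory.jvm-metaspace.size: 96m
```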

> Tune default values for FLIP-49 TM memory configurations with real production 
> jobs.
> ---
>
> Key: FLINK-15145
> URL: https://issues.apache.org/jira/browse/FLINK-15145
> Project: Flink
>  Issue Type: Task
>  Components: Runtime / Configuration
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In release 1.10, with FLIP-49, we introduced significant changes to the 
> TaskExecutor memory model and it's related configuration options / logics.
> Since the model and configuration logics are changed, it is reasonable that 
> we also change the default configuration values. Currently, the default 
> values are set with the gut feelings and experiences from e2e tests. It would 
> be good that we try and tune the configurations with some real production 
> jobs, of various scales if possible, before exposing the configurations in 
> the release.





[jira] [Issue Comment Deleted] (FLINK-15564) YarnClusterDescriptorTest failed to validate the original intended behavior

2020-01-15 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-15564:
-
Comment: was deleted

(was: According to the discussion in this [ML 
thread|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Some-feedback-after-trying-out-the-new-FLIP-49-memory-configurations-tc36129.html],
 we decided not to do this change.)

> YarnClusterDescriptorTest failed to validate the original intended behavior
> ---
>
> Key: FLINK-15564
> URL: https://issues.apache.org/jira/browse/FLINK-15564
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Major
>  Labels: pull-request-available, testability
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following test cases of {{YarnClusterDescriptorTest}} have failed to 
> validate the original intended behavior and are temporarily skipped by 
> PR#10834.
>  - {{testFailIfTaskSlotsHigherThanMaxVcores}}
>  - {{testConfigOverwrite}}
> The original purpose of these two test cases was to verify the validation 
> logic against yarn max allocation vcores (in 
> {{5836f7eddb4849b95d4860cf20045bc61d061918}}).
> These two cases should have failed when we changed the validation logic to 
> get yarn max allocation vcores from yarnClient instead of configuration (in 
> {{e959e6d0cd42f0c5b21c0f03ce547f2025ac58d5}}), because no yarn cluster (not 
> even a {{MiniYARNCluster}}) is started in these cases, so 
> {{yarnClient#getNodeReports}} will never return.
> The cases have not failed because another {{IllegalConfigurationException}} 
> was thrown in {{validateClusterSpecification}} due to a memory validation 
> failure. That failure was by design; to verify the original purpose, these 
> two test cases should have been updated with reasonable memory sizes, which 
> was unfortunately overlooked.
> The problem could be fixed with the following changes:
> - Update the memory setups for the test cases, to pass the memory validation 
> and thus validate the original intended behavior.
> - Extract the logic of getting yarn max allocation vcores into a separate 
> method, and override it in the test cases to provide a constant max vcores.





[GitHub] [flink] flinkbot edited a comment on issue #10847: [FLINK-15578][connectors/jdbc] implement exactly once JDBC sink

2020-01-15 Thread GitBox
flinkbot edited a comment on issue #10847: [FLINK-15578][connectors/jdbc] 
implement exactly once JDBC sink
URL: https://github.com/apache/flink/pull/10847#issuecomment-573933799
 
 
   
   ## CI report:
   
   * 1f19ab63df12c2a0cbc75644a34bcab26c08d7f6 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144244028) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4308)
 
   * 3e136166db3c3a2325a4014719ca011b4d162a4d Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144279590) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4318)
 
   * 3c3f7e4329383accb9b940c950321e0c65bdc0b9 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144472473) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4358)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10845: [FLINK-15355][plugins]Classloader restricts access to parent to whitelist.

2020-01-15 Thread GitBox
flinkbot edited a comment on issue #10845: [FLINK-15355][plugins]Classloader 
restricts access to parent to whitelist.
URL: https://github.com/apache/flink/pull/10845#issuecomment-573736294
 
 
   
   ## CI report:
   
   * 0942cb8b913ba50ecf8d7ca28832c4c92bf78e6c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144178023) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4303)
 
   * ce62e6539d1394a5d27d8ef51db010104852e433 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144207965) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4304)
 
   * 76bd16a406b4bbbee5d4189b66f1fdd36f98798f Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144384071) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4339)
 
   * 726f81492a05cc47657a9c30caf2e397e8c1bd02 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144404404) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4343)
 
   * 18934ef582e5ba278289541d8602b081c4f63367 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/14448) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4345)
 
   * ebd3076a0b24751eca2dce1d3faf646b56fe6426 UNKNOWN
   * ac9c94daa036cec2cb2bb2d53890e67594046479 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10858: [FLINK-15582][tests] Enable batch scheduling tests for both LegacyScheduler and DefaultScheduler

2020-01-15 Thread GitBox
flinkbot edited a comment on issue #10858: [FLINK-15582][tests] Enable batch 
scheduling tests for both LegacyScheduler and DefaultScheduler
URL: https://github.com/apache/flink/pull/10858#issuecomment-574561465
 
 
   
   ## CI report:
   
   * cf716ecc2f23aa89683be8096f16725b1f1f8d26 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144472558) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4359)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #10861: [FLINK-15558][Connector] Bump Elasticsearch version from 7.3.2 to 7.5.1 for es7 connector

2020-01-15 Thread GitBox
flinkbot commented on issue #10861: [FLINK-15558][Connector] Bump Elasticsearch 
version from 7.3.2 to 7.5.1 for es7 connector
URL: https://github.com/apache/flink/pull/10861#issuecomment-574591161
 
 
   
   ## CI report:
   
   * f4506ad0e9302a279940670fa90c305c240b3c12 UNKNOWN
   
   




[GitHub] [flink] flinkbot commented on issue #10860: [FLINK-15145][config] Change TM memory configuration default values for FLIP-49.

2020-01-15 Thread GitBox
flinkbot commented on issue #10860: [FLINK-15145][config] Change TM memory 
configuration default values for FLIP-49.
URL: https://github.com/apache/flink/pull/10860#issuecomment-574591065
 
 
   
   ## CI report:
   
   * 5f9cd2224ef458e5068f3179aa5be025848f8723 UNKNOWN
   
   




[GitHub] [flink] xintongsong opened a new pull request #10862: [FLINK-15597][runtime] Relax sanity check of JVM memory overhead to be within its min/max

2020-01-15 Thread GitBox
xintongsong opened a new pull request #10862: [FLINK-15597][runtime] Relax 
sanity check of JVM memory overhead to be within its min/max
URL: https://github.com/apache/flink/pull/10862
 
 
   ## What is the purpose of the change
   
   This PR relaxes the JVM overhead sanity check, allowing the JVM overhead to 
deviate from the configured fraction.
   
   ## Verifying this change
   
   - 
TaskExecutorResourceUtilsTest#testConfigJvmOverheadDeriveFromProcessAndFlinkMemorySize
   - 
TaskExecutorResourceUtilsTest#testConfigJvmOverheadDeriveFromProcessAndFlinkMemorySizeFailure
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
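
   The relaxed check can be sketched as follows. This is a self-contained
illustration with made-up sizes and helper names, not Flink's actual
`TaskExecutorResourceUtils` code: instead of requiring the derived JVM overhead
to equal the configured fraction of the process size exactly, it only has to
fall within the configured min/max range.

```java
// Hypothetical sketch of the relaxed sanity check; names and sizes are illustrative.
public class JvmOverheadSanityCheck {

    // Old behavior: the derived overhead must match the fraction exactly.
    static boolean strictCheck(long overheadBytes, long processBytes, double fraction) {
        return overheadBytes == (long) (processBytes * fraction);
    }

    // Relaxed behavior: any overhead within [min, max] passes.
    static boolean relaxedCheck(long overheadBytes, long minBytes, long maxBytes) {
        return overheadBytes >= minBytes && overheadBytes <= maxBytes;
    }

    public static void main(String[] args) {
        long process  = 1600L << 20; // 1600 MiB total process memory
        long overhead = 200L << 20;  // overhead derived from the other components
        double fraction = 0.1;       // configured fraction would demand 160 MiB exactly

        System.out.println(strictCheck(overhead, process, fraction));        // false
        System.out.println(relaxedCheck(overhead, 192L << 20, 1024L << 20)); // true
    }
}
```

   An overhead of 200 MiB fails the exact-fraction test but passes the range
test, which is the behavior change this PR describes.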
   




[jira] [Updated] (FLINK-15597) Relax sanity check of JVM memory overhead to be within its min/max

2020-01-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15597:
---
Labels: pull-request-available  (was: )

> Relax sanity check of JVM memory overhead to be within its min/max
> --
>
> Key: FLINK-15597
> URL: https://issues.apache.org/jira/browse/FLINK-15597
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration, Runtime / Task
>Reporter: Andrey Zagrebin
>Assignee: Xintong Song
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> When the explicitly configured process and Flink memory sizes are verified 
> with the JVM meta space and overhead, JVM overhead does not have to be the 
> exact fraction.
> It can just be within its min/max range, similar to how it is now done for the 
> network/shuffle memory check after FLINK-15300.





[GitHub] [flink] flinkbot commented on issue #10862: [FLINK-15597][runtime] Relax sanity check of JVM memory overhead to be within its min/max

2020-01-15 Thread GitBox
flinkbot commented on issue #10862: [FLINK-15597][runtime] Relax sanity check 
of JVM memory overhead to be within its min/max
URL: https://github.com/apache/flink/pull/10862#issuecomment-574594080
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 9e5e952e448571968cca13475d221ce4a62689db (Wed Jan 15 
10:21:21 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Created] (FLINK-15599) SQL client requires both legacy and blink planner to be on the classpath

2020-01-15 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-15599:


 Summary: SQL client requires both legacy and blink planner to be 
on the classpath
 Key: FLINK-15599
 URL: https://issues.apache.org/jira/browse/FLINK-15599
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.10.0
Reporter: Dawid Wysakowicz
 Fix For: 1.10.0


The SQL client directly uses some internal classes of the legacy planner, 
thus it does not work with only the Blink planner on the classpath.

The internal class that's being used is 
{{org.apache.flink.table.functions.FunctionService}}

This dependency was introduced in FLINK-13195





[jira] [Created] (FLINK-15598) Memory accuracy loss in YarnClusterDescriptor may lead to deployment failure.

2020-01-15 Thread Xintong Song (Jira)
Xintong Song created FLINK-15598:


 Summary: Memory accuracy loss in YarnClusterDescriptor may lead to 
deployment failure.
 Key: FLINK-15598
 URL: https://issues.apache.org/jira/browse/FLINK-15598
 Project: Flink
  Issue Type: Bug
  Components: Deployment / YARN
Reporter: Xintong Song
 Fix For: 1.10.0


Currently, YarnClusterDescriptor parses/derives the TM process memory size from 
the configuration, stores it in ClusterSpecification, validates the 
ClusterSpecification, and then writes the memory size back to the configuration.

This logic is unnecessary. The memory validation is already covered by creating 
a TaskExecutorResourceSpec from the configuration in TaskExecutorResourceUtils.

Moreover, the memory size is stored in MB in ClusterSpecification. The accuracy 
loss may lead to memory validation failure, which prevents the cluster from 
being deployed.
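
The accuracy loss is easy to reproduce in isolation. A hedged, self-contained
sketch with made-up sizes (assuming the specification keeps whole megabytes, as
described above): a byte-exact size that is not a whole number of MiB shrinks
when round-tripped through megabytes, so a later equality-style validation
against the original value fails.

```java
// Self-contained illustration of MB rounding loss; values are made up.
public class MegabyteRounding {

    static long bytesToMb(long bytes) { return bytes >> 20; } // floor division drops the remainder
    static long mbToBytes(long mb)    { return mb << 20; }

    public static void main(String[] args) {
        long original = (1536L << 20) + 512 * 1024; // 1536.5 MiB, byte-exact
        long roundTripped = mbToBytes(bytesToMb(original));

        System.out.println(original - roundTripped);  // 524288 bytes silently lost
        System.out.println(roundTripped == original); // false -> a strict validation would fail
    }
}
```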





[GitHub] [flink] zentol commented on a change in pull request #10853: [Flink-15583] Fixing scala walkthrough archetype does not compile on Java 11

2020-01-15 Thread GitBox
zentol commented on a change in pull request #10853: [Flink-15583] Fixing scala 
walkthrough archetype does not compile on Java 11
URL: https://github.com/apache/flink/pull/10853#discussion_r366797802
 
 

 ##
 File path: flink-end-to-end-tests/test-scripts/test_datastream_walkthroughs.sh
 ##
 @@ -35,6 +35,7 @@ mvn archetype:generate   
   \
 -DarchetypeGroupId=org.apache.flink \
 -DarchetypeArtifactId=flink-walkthrough-datastream-${TEST_TYPE}  \
 -DarchetypeVersion=${FLINK_VERSION} \
+-DarchetypeCatalog=local\
 
 Review comment:
   did you double-check whether this is still necessary?




[jira] [Commented] (FLINK-15599) SQL client requires both legacy and blink planner to be on the classpath

2020-01-15 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17015816#comment-17015816
 ] 

Dawid Wysakowicz commented on FLINK-15599:
--

cc [~ykt836] [~danny0405]

> SQL client requires both legacy and blink planner to be on the classpath
> 
>
> Key: FLINK-15599
> URL: https://issues.apache.org/jira/browse/FLINK-15599
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: Dawid Wysakowicz
>Priority: Critical
> Fix For: 1.10.0
>
>
> The SQL client directly uses some internal classes of the legacy planner, 
> thus it does not work with only the Blink planner on the classpath.
> The internal class that's being used is 
> {{org.apache.flink.table.functions.FunctionService}}
> This dependency was introduced in FLINK-13195





[GitHub] [flink] kl0u commented on issue #10833: [FLINK-15535][documentation] Add usage of ProcessFunctionTestHarnesses for testing documentation

2020-01-15 Thread GitBox
kl0u commented on issue #10833: [FLINK-15535][documentation] Add usage of 
ProcessFunctionTestHarnesses for testing documentation
URL: https://github.com/apache/flink/pull/10833#issuecomment-574596526
 
 
   Thanks for the work @yanghua ! Merging this.




[GitHub] [flink] flinkbot edited a comment on issue #10151: [FLINK-14231] Handle the pending processing-time timers to make endInput semantics on the operator chain strict

2020-01-15 Thread GitBox
flinkbot edited a comment on issue #10151: [FLINK-14231] Handle the pending 
processing-time timers to make endInput semantics on the operator chain strict
URL: https://github.com/apache/flink/pull/10151#issuecomment-552442468
 
 
   
   ## CI report:
   
   * c6d7f5e864076448dca590035a6a590dc5e25c44 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/135928709) 
   * 682da0aec5dee14c09583468d15115e2a512c827 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/140139283) 
   * 9c66fbe1e4d81c3656eba38d56d39dfe0c065f4f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142108012) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3864)
 
   * 1b23c2232e9717218a7c61c930c481cbcf2e6f2e Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142214794) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3888)
 
   * 063b5e87dcef4a363f01a48e4af4fb9d3670429f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142218414) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3890)
 
   * d5dfd65a163634584e8eaeee452d5454b2d4fe45 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143576795) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4193)
 
   * d1f5b89012dc266fc0c664085c1a2aef0c8b95ec Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143669093) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4204)
 
   * f61c52bcb1420572c2ad94e3d2f1caafbf7f6081 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/143708993) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4215)
 
   * 9ea2c427c8f7046d78c122eea8f2d0c10c200224 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/144479909) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4362)
 
   
   




[GitHub] [flink] kl0u closed pull request #10833: [FLINK-15535][documentation] Add usage of ProcessFunctionTestHarnesses for testing documentation

2020-01-15 Thread GitBox
kl0u closed pull request #10833: [FLINK-15535][documentation] Add usage of 
ProcessFunctionTestHarnesses for testing documentation
URL: https://github.com/apache/flink/pull/10833
 
 
   




[GitHub] [flink] kl0u commented on issue #10833: [FLINK-15535][documentation] Add usage of ProcessFunctionTestHarnesses for testing documentation

2020-01-15 Thread GitBox
kl0u commented on issue #10833: [FLINK-15535][documentation] Add usage of 
ProcessFunctionTestHarnesses for testing documentation
URL: https://github.com/apache/flink/pull/10833#issuecomment-574598184
 
 
   Merged




[GitHub] [flink] azagrebin commented on a change in pull request #10860: [FLINK-15145][config] Change TM memory configuration default values for FLIP-49.

2020-01-15 Thread GitBox
azagrebin commented on a change in pull request #10860: [FLINK-15145][config] 
Change TM memory configuration default values for FLIP-49.
URL: https://github.com/apache/flink/pull/10860#discussion_r366787995
 
 

 ##
 File path: flink-dist/src/main/resources/flink-conf.yaml
 ##
 @@ -42,9 +42,12 @@ jobmanager.rpc.port: 6123
 jobmanager.heap.size: 1024m
 
 
-# The heap size for the TaskManager JVM
+# The total process memory size for the TaskManager.
+#
+# Note this accounts for all memory usages of a TaskManager process, including 
JVM metaspace and other overheads.
 
 Review comment:
   ```suggestion
   # Note this accounts for all memory usage within the TaskExecutor process, 
including JVM metaspace and other overhead.
   # For containerised environment (Yarn/Mesos), it is similar to the 
deprecated option
   # 'taskmanager.heap.size' which included the deprecated 
'containerized.heap-cutoff*'.
   
   taskmanager.memory.process.size: 1568m
   ```




[GitHub] [flink] azagrebin commented on a change in pull request #10860: [FLINK-15145][config] Change TM memory configuration default values for FLIP-49.

2020-01-15 Thread GitBox
azagrebin commented on a change in pull request #10860: [FLINK-15145][config] 
Change TM memory configuration default values for FLIP-49.
URL: https://github.com/apache/flink/pull/10860#discussion_r366787228
 
 

 ##
 File path: flink-dist/src/main/resources/flink-conf.yaml
 ##
 @@ -42,9 +42,12 @@ jobmanager.rpc.port: 6123
 jobmanager.heap.size: 1024m
 
 
-# The heap size for the TaskManager JVM
+# The total process memory size for the TaskManager.
+#
+# Note this accounts for all memory usages of a TaskManager process, including 
JVM metaspace and other overheads.
+# To exclude JVM metaspace and other overheads, please use total flink memory 
size (taskmanager.memory.flink.size) instead.
 
 Review comment:
   ```suggestion
   ```




[GitHub] [flink] azagrebin commented on a change in pull request #10860: [FLINK-15145][config] Change TM memory configuration default values for FLIP-49.

2020-01-15 Thread GitBox
azagrebin commented on a change in pull request #10860: [FLINK-15145][config] 
Change TM memory configuration default values for FLIP-49.
URL: https://github.com/apache/flink/pull/10860#discussion_r366787685
 
 

 ##
 File path: flink-dist/src/main/resources/flink-conf.yaml
 ##
 @@ -42,9 +42,12 @@ jobmanager.rpc.port: 6123
 jobmanager.heap.size: 1024m
 
 
-# The heap size for the TaskManager JVM
+# The total process memory size for the TaskManager.
 
 Review comment:
   ```suggestion
   # The total process memory size for the TaskExecutor.
   ```
   I think this is the most recent term for the task manager after FLIP-6.




[GitHub] [flink] azagrebin commented on a change in pull request #10860: [FLINK-15145][config] Change TM memory configuration default values for FLIP-49.

2020-01-15 Thread GitBox
azagrebin commented on a change in pull request #10860: [FLINK-15145][config] 
Change TM memory configuration default values for FLIP-49.
URL: https://github.com/apache/flink/pull/10860#discussion_r366795773
 
 

 ##
 File path: flink-dist/src/main/resources/flink-conf.yaml
 ##
 @@ -42,9 +42,12 @@ jobmanager.rpc.port: 6123
 jobmanager.heap.size: 1024m
 
 
-# The heap size for the TaskManager JVM
+# The total process memory size for the TaskManager.
+#
+# Note this accounts for all memory usages of a TaskManager process, including 
JVM metaspace and other overheads.
+# To exclude JVM metaspace and other overheads, please use total flink memory 
size (taskmanager.memory.flink.size) instead.
 
-taskmanager.memory.process.size: 1024m
+taskmanager.memory.process.size: 1568m
 
 Review comment:
   ```suggestion
   
   # To exclude JVM metaspace and overhead, please, use total Flink memory size 
instead of 'taskmanager.memory.process.size'.
   # It is not recommended to set both 'taskmanager.memory.process.size' and 
Flink memory.
   # This option is similar to the deprecated option 'taskmanager.heap.size' 
for standalone environment:
   #
   # taskmanager.memory.flink.size: 1280
   ```




[jira] [Closed] (FLINK-15535) Add usage of ProcessFunctionTestHarnesses for testing documentation

2020-01-15 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas closed FLINK-15535.
--
Fix Version/s: 1.10.0
 Assignee: vinoyang
   Resolution: Fixed

Merged on master with d7edaaa8f28b2a1ae1090077477d994afb7b702f
and on release-1.10 with 8927e9723cb68dbda849c5a38dc674187f980ba3

> Add usage of ProcessFunctionTestHarnesses for testing documentation
> ---
>
> Key: FLINK-15535
> URL: https://issues.apache.org/jira/browse/FLINK-15535
> Project: Flink
>  Issue Type: Wish
>  Components: Documentation
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Recently, we added {{ProcessFunctionTestHarness}} for testing 
> {{ProcessFunction}}. However, except {{ProcessFunctionTestHarnessesTest}} I 
> can not find anything about this test harness in the master codebase.
> Considering that {{ProcessFunction}} is a very important and frequently used UDF,
> I suggest that we could add a test example in the [testing 
> documentation|https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/testing.html#integration-testing].





[jira] [Updated] (FLINK-15573) Let Flink SQL PlannerExpressionParserImpl#FieldRefrence use Unicode as its default charset

2020-01-15 Thread Lsw_aka_laplace (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lsw_aka_laplace updated FLINK-15573:

Description: 
Now I am talking about the `PlannerExpressionParserImpl`.

    For now the fieldReference's charset is JavaIdentifier; why not change it 
to UnicodeIdentifier?

    Currently in my team, we do actually have this problem. For instance, data 
from ES always contains an `@timestamp` field, which JavaIdentifier cannot 
accept. So what we did was just let the fieldReference charset use Unicode:

{code:scala}
lazy val extensionIdent: Parser[String] = (
  "" ~> // handle whitespace
  rep1(
    acceptIf(Character.isUnicodeIdentifierStart)("identifier expected but '" + _ + "' found"),
    elem("identifier part", Character.isUnicodeIdentifierPart(_: Char))
  ) ^^ (_.mkString)
)
lazy val fieldReference: PackratParser[UnresolvedReferenceExpression] =
  (STAR | ident | extensionIdent) ^^ { sym => unresolvedRef(sym) }
{code}

It is simple but really makes sense~

Looking forward to any opinions.


  was:
Now I am talking about the `PlannerExpressionParserImpl`

    For now  the fieldRefrence‘s  charset is JavaIdentifier,why not change it 
to UnicodeIdentifier?

    Currently in my team, we do actually have this problem. For instance, data 
from Es always contains `@timestamp` field , which can not meet JavaIdentifier. 
So what we did is just let the fieldRefrence Charset use Unicode

 
{code:scala}
lazy val extensionIdent: Parser[String] = (
  "" ~> // handle whitespace
  rep1(
    acceptIf(Character.isUnicodeIdentifierStart)("identifier expected but '" + _ + "' found"),
    elem("identifier part", Character.isUnicodeIdentifierPart(_: Char))
  ) ^^ (_.mkString)
)
lazy val fieldReference: PackratParser[UnresolvedReferenceExpression] =
  (STAR | ident | extensionIdent) ^^ { sym => unresolvedRef(sym) }
{code}
 

It is simple but really make sense~

Looking forward for any opinion

 


> Let Flink SQL PlannerExpressionParserImpl#FieldRefrence use Unicode  as its 
> default charset  
> -
>
> Key: FLINK-15573
> URL: https://issues.apache.org/jira/browse/FLINK-15573
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Lsw_aka_laplace
>Priority: Minor
>
> Now I am talking about the `PlannerExpressionParserImpl`
>     For now  the fieldRefrence‘s  charset is JavaIdentifier,why not change it 
> to UnicodeIdentifier?
>     Currently in my team, we do actually have this problem. For instance, 
> data from Es always contains `@timestamp` field , which JavaIdentifier can 
> not meet. So what we did is just let the fieldRefrence Charset use Unicode
>  
> {code:scala}
> lazy val extensionIdent: Parser[String] = (
>   "" ~> // handle whitespace
>   rep1(
>     acceptIf(Character.isUnicodeIdentifierStart)("identifier expected but '" + _ + "' found"),
>     elem("identifier part", Character.isUnicodeIdentifierPart(_: Char))
>   ) ^^ (_.mkString)
> )
> lazy val fieldReference: PackratParser[UnresolvedReferenceExpression] =
>   (STAR | ident | extensionIdent) ^^ { sym => unresolvedRef(sym) }
> {code}
>  
> It is simple but really make sense~
> Looking forward for any opinion
>  





[jira] [Assigned] (FLINK-15598) Memory accuracy loss in YarnClusterDescriptor may lead to deployment failure.

2020-01-15 Thread Andrey Zagrebin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Zagrebin reassigned FLINK-15598:
---

Assignee: Andrey Zagrebin

> Memory accuracy loss in YarnClusterDescriptor may lead to deployment failure.
> -
>
> Key: FLINK-15598
> URL: https://issues.apache.org/jira/browse/FLINK-15598
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Reporter: Xintong Song
>Assignee: Andrey Zagrebin
>Priority: Blocker
> Fix For: 1.10.0
>
>
> Currently, YarnClusterDescriptor parses/derives the TM process memory size from 
> the configuration, stores it in ClusterSpecification, validates the 
> ClusterSpecification, and then writes the memory size back to the configuration.
> This logic is unnecessary. The memory validation is already covered by 
> creating a TaskExecutorResourceSpec from the configuration in 
> TaskExecutorResourceUtils.
> Moreover, the memory size is stored in MB in ClusterSpecification. The 
> accuracy loss may lead to memory validation failure, which prevents the 
> cluster from being deployed.





[jira] [Updated] (FLINK-15535) Add usage of ProcessFunctionTestHarnesses for testing documentation

2020-01-15 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-15535:
---
Issue Type: Bug  (was: Wish)

> Add usage of ProcessFunctionTestHarnesses for testing documentation
> ---
>
> Key: FLINK-15535
> URL: https://issues.apache.org/jira/browse/FLINK-15535
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.10.0
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Recently, we added {{ProcessFunctionTestHarness}} for testing 
> {{ProcessFunction}}. However, except {{ProcessFunctionTestHarnessesTest}} I 
> can not find anything about this test harness in the master codebase.
> Considering that {{ProcessFunction}} is a very important and frequently used UDF,
> I suggest that we could add a test example in the [testing 
> documentation|https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/testing.html#integration-testing].





[jira] [Created] (FLINK-15600) Further relax the UDF constraints for Java classes

2020-01-15 Thread Timo Walther (Jira)
Timo Walther created FLINK-15600:


 Summary: Further relax the UDF constraints for Java classes
 Key: FLINK-15600
 URL: https://issues.apache.org/jira/browse/FLINK-15600
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Reporter: Timo Walther
Assignee: Timo Walther


FLINK-12283 already relaxed the UDF constraints for classes which is a big 
usability improvement for Scala users. However, Java users are still facing 
issues when using anonymous inner classes.

We should allow the following:
{code}
tEnv.registerFunction("testi", new ScalarFunction() {
    public String eval(Integer i) {
        return String.valueOf(i);
    }
});
{code}
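
One reason anonymous classes have historically been rejected is that the
planner must validate and reflectively extract the eval(...) method from a
class with no name of its own. As a rough, self-contained illustration of the
reflective side (not Flink's actual validation logic; `FakeScalarFunction` is a
hypothetical stand-in for `org.apache.flink.table.functions.ScalarFunction`):

```java
import java.lang.reflect.Method;

// Hypothetical sketch: reflectively looking up eval(...) on an anonymous subclass.
public class AnonymousEvalLookup {

    // Stand-in base class; the real one lives in flink-table.
    static abstract class FakeScalarFunction {}

    // Returns a short description of the public eval(Integer) method, or null if absent.
    static String evalSignature(Object udf) {
        try {
            Method m = udf.getClass().getMethod("eval", Integer.class);
            return m.getName() + "(" + m.getParameterTypes()[0].getSimpleName() + ")";
        } catch (NoSuchMethodException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        FakeScalarFunction udf = new FakeScalarFunction() {
            public String eval(Integer i) {
                return String.valueOf(i);
            }
        };
        System.out.println(evalSignature(udf)); // prints "eval(Integer)"
    }
}
```

The public eval method is discoverable via `Class#getMethod` even though the
enclosing anonymous class is unnamed, which is what makes relaxing the
constraint feasible.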





[jira] [Updated] (FLINK-15535) Add usage of ProcessFunctionTestHarnesses for testing documentation

2020-01-15 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-15535:
---
Affects Version/s: 1.10.0

> Add usage of ProcessFunctionTestHarnesses for testing documentation
> ---
>
> Key: FLINK-15535
> URL: https://issues.apache.org/jira/browse/FLINK-15535
> Project: Flink
>  Issue Type: Wish
>  Components: Documentation
>Affects Versions: 1.10.0
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Recently, we added {{ProcessFunctionTestHarness}} for testing 
> {{ProcessFunction}}. However, except {{ProcessFunctionTestHarnessesTest}} I 
> can not find anything about this test harness in the master codebase.
> Considering that {{ProcessFunction}} is a very important and frequently used UDF,
> I suggest that we could add a test example in the [testing 
> documentation|https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/testing.html#integration-testing].





[jira] [Assigned] (FLINK-15598) Memory accuracy loss in YarnClusterDescriptor may lead to deployment failure.

2020-01-15 Thread Andrey Zagrebin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Zagrebin reassigned FLINK-15598:
---

Assignee: Xintong Song  (was: Andrey Zagrebin)

> Memory accuracy loss in YarnClusterDescriptor may lead to deployment failure.
> -
>
> Key: FLINK-15598
> URL: https://issues.apache.org/jira/browse/FLINK-15598
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Reporter: Xintong Song
>Assignee: Xintong Song
>Priority: Blocker
> Fix For: 1.10.0
>
>
> Currently, YarnClusterDescriptor parses/derives the TM process memory size from 
> the configuration, stores it in ClusterSpecification, validates the 
> ClusterSpecification, and then writes the memory size back to the configuration.
> This logic is unnecessary. The memory validation is already covered by 
> creating a TaskExecutorResourceSpec from the configuration in 
> TaskExecutorResourceUtils.
> Moreover, the memory size is stored in MB in ClusterSpecification. The 
> accuracy loss may lead to memory validation failure, which prevents the 
> cluster from being deployed.





[jira] [Updated] (FLINK-15573) Let Flink SQL PlannerExpressionParserImpl#FieldRefrence use Unicode as its default charset

2020-01-15 Thread Lsw_aka_laplace (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lsw_aka_laplace updated FLINK-15573:

Description: 
Now I am talking about the `PlannerExpressionParserImpl`.

    For now, the fieldReference's charset is JavaIdentifier; why not change it 
to UnicodeIdentifier?

    Currently in my team, we do actually have this problem. For instance, data 
from ES always contains an `@timestamp` field, which JavaIdentifier cannot accept. 
So what we did is just let the fieldReference charset use Unicode.

 
{code:scala}
lazy val extensionIdent: Parser[String] = (
  "" ~> // handle whitespace
  rep1(
    acceptIf(Character.isUnicodeIdentifierStart)("identifier expected but '" + _ + "' found"),
    elem("identifier part", Character.isUnicodeIdentifierPart(_: Char))
  ) ^^ (_.mkString)
)

lazy val fieldReference: PackratParser[UnresolvedReferenceExpression] =
  (STAR | ident | extensionIdent) ^^ { sym => unresolvedRef(sym) }
{code}
 

It is simple but really makes sense~

Looking forward for any opinion

 

  was:
Now I am talking about the `PlannerExpressionParserImpl`

    For now  the fieldRefrence‘s  charset is JavaIdentifier,why not change it 
to UnicodeIdentifier?

    Currently in my team, we do actually have this problem. For instance, data 
from Es always contains `@timestamp` field , which JavaIdentifier can not meet. 
So what we did is just let the fieldRefrence Charset use Unicode

 
{code:scala}
lazy val extensionIdent: Parser[String] = (
  "" ~> // handle whitespace
  rep1(
    acceptIf(Character.isUnicodeIdentifierStart)("identifier expected but '" + _ + "' found"),
    elem("identifier part", Character.isUnicodeIdentifierPart(_: Char))
  ) ^^ (_.mkString)
)

lazy val fieldReference: PackratParser[UnresolvedReferenceExpression] =
  (STAR | ident | extensionIdent) ^^ { sym => unresolvedRef(sym) }
{code}
 

It is simple but really makes sense~

Looking forward for any opinion

 


> Let Flink SQL PlannerExpressionParserImpl#FieldRefrence use Unicode  as its 
> default charset  
> -
>
> Key: FLINK-15573
> URL: https://issues.apache.org/jira/browse/FLINK-15573
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Lsw_aka_laplace
>Priority: Minor
>
> Now I am talking about the `PlannerExpressionParserImpl`.
>     For now, the fieldReference's charset is JavaIdentifier; why not change it 
> to UnicodeIdentifier?
>     Currently in my team, we do actually have this problem. For instance, 
> data from ES always contains an `@timestamp` field, which JavaIdentifier 
> cannot accept. So what we did is just let the fieldReference charset use Unicode.
>  
> {code:scala}
> lazy val extensionIdent: Parser[String] = (
>   "" ~> // handle whitespace
>   rep1(
>     acceptIf(Character.isUnicodeIdentifierStart)("identifier expected but '" + _ + "' found"),
>     elem("identifier part", Character.isUnicodeIdentifierPart(_: Char))
>   ) ^^ (_.mkString)
> )
>
> lazy val fieldReference: PackratParser[UnresolvedReferenceExpression] =
>   (STAR | ident | extensionIdent) ^^ { sym => unresolvedRef(sym) }{code}
>  
> It is simple but really makes sense~
> Looking forward for any opinion
>  





[jira] [Commented] (FLINK-15599) SQL client requires both legacy and blink planner to be on the classpath

2020-01-15 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17015826#comment-17015826
 ] 

Dawid Wysakowicz commented on FLINK-15599:
--

I am not 100% sure this is a must, but I think it would be good to be able to run 
the sql-client with only a single planner on the classpath. Otherwise it 
becomes harder to remove the legacy planner in the future.

> SQL client requires both legacy and blink planner to be on the classpath
> 
>
> Key: FLINK-15599
> URL: https://issues.apache.org/jira/browse/FLINK-15599
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: Dawid Wysakowicz
>Priority: Critical
> Fix For: 1.10.0
>
>
> Sql client uses directly some of the internal classes of the legacy planner, 
> thus it does not work with only the blink planner on the classpath.
> The internal class that's being used is 
> {{org.apache.flink.table.functions.FunctionService}}
> This dependency was introduced in FLINK-13195





[GitHub] [flink] azagrebin commented on a change in pull request #10860: [FLINK-15145][config] Change TM memory configuration default values for FLIP-49.

2020-01-15 Thread GitBox
azagrebin commented on a change in pull request #10860: [FLINK-15145][config] 
Change TM memory configuration default values for FLIP-49.
URL: https://github.com/apache/flink/pull/10860#discussion_r366787995
 
 

 ##
 File path: flink-dist/src/main/resources/flink-conf.yaml
 ##
 @@ -42,9 +42,12 @@ jobmanager.rpc.port: 6123
 jobmanager.heap.size: 1024m
 
 
-# The heap size for the TaskManager JVM
+# The total process memory size for the TaskManager.
+#
+# Note this accounts for all memory usages of a TaskManager process, including JVM metaspace and other overheads.

 Review comment:
   ```suggestion
   # Note this accounts for all memory usage within the TaskExecutor process, including JVM metaspace and other overhead.

   taskmanager.memory.process.size: 1568m
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-12122) Spread out tasks evenly across all available registered TaskManagers

2020-01-15 Thread huweihua (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-12122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17015832#comment-17015832
 ] 

huweihua commented on FLINK-12122:
--

[~trohrmann] I have the same issue as [~liuyufei].

We run Flink in per-job mode. We have thousands of jobs that need to be 
upgraded from Flink 1.5 to Flink 1.9, and the change of scheduling strategy 
causes a load-balance issue. This has blocked our upgrade plan.
In addition to the load-balance issue, we also encountered other issues caused 
by the Flink 1.9 scheduling strategy:
1. Network bandwidth. Tasks of the same type are scheduled to one TaskManager, 
causing too much network traffic on that machine.
2. Some jobs need to sink to the local agent. After centralized scheduling, the 
insufficient processing capacity of a single machine causes a backlog of 
consumption.

I think a decentralized scheduling strategy is reasonable.

> Spread out tasks evenly across all available registered TaskManagers
> 
>
> Key: FLINK-12122
> URL: https://issues.apache.org/jira/browse/FLINK-12122
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.6.4, 1.7.2, 1.8.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.2, 1.10.0
>
> Attachments: image-2019-05-21-12-28-29-538.png, 
> image-2019-05-21-13-02-50-251.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> With Flip-6, we changed the default behaviour how slots are assigned to 
> {{TaskManages}}. Instead of evenly spreading it out over all registered 
> {{TaskManagers}}, we randomly pick slots from {{TaskManagers}} with a 
> tendency to first fill up a TM before using another one. This is a regression 
> wrt the pre Flip-6 code.
> I suggest to change the behaviour so that we try to evenly distribute slots 
> across all available {{TaskManagers}} by considering how many of their slots 
> are already allocated.
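The proposed behaviour can be sketched as always choosing the TaskManager with the lowest slot utilization first. The following is a minimal illustration only; the names and data structures are made up and do not reflect Flink's actual SlotManager API:

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;

public class EvenSlotSpreadDemo {

    // Pick the TaskManager whose allocated/total slot ratio is lowest.
    static String pickTaskManager(Map<String, Integer> allocatedSlots,
                                  Map<String, Integer> totalSlots) {
        return allocatedSlots.entrySet().stream()
            .min(Comparator.comparingDouble(
                e -> (double) e.getValue() / totalSlots.get(e.getKey())))
            .map(Map.Entry::getKey)
            .orElseThrow(NoSuchElementException::new);
    }

    public static void main(String[] args) {
        Map<String, Integer> allocated = new HashMap<>();
        allocated.put("tm-1", 3);
        allocated.put("tm-2", 1);
        Map<String, Integer> total = new HashMap<>();
        total.put("tm-1", 4);
        total.put("tm-2", 4);

        // tm-2 is less utilized (1/4 vs 3/4), so it is chosen next.
        System.out.println(pickTaskManager(allocated, total)); // tm-2
    }
}
```

Repeatedly applying such a selection spreads new slot requests evenly instead of filling up one TaskManager first.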





[GitHub] [flink] azagrebin commented on a change in pull request #10860: [FLINK-15145][config] Change TM memory configuration default values for FLIP-49.

2020-01-15 Thread GitBox
azagrebin commented on a change in pull request #10860: [FLINK-15145][config] 
Change TM memory configuration default values for FLIP-49.
URL: https://github.com/apache/flink/pull/10860#discussion_r366795773
 
 

 ##
 File path: flink-dist/src/main/resources/flink-conf.yaml
 ##
 @@ -42,9 +42,12 @@ jobmanager.rpc.port: 6123
 jobmanager.heap.size: 1024m
 
 
-# The heap size for the TaskManager JVM
+# The total process memory size for the TaskManager.
+#
+# Note this accounts for all memory usages of a TaskManager process, including JVM metaspace and other overheads.
+# To exclude JVM metaspace and other overheads, please use total flink memory size (taskmanager.memory.flink.size) instead.

-taskmanager.memory.process.size: 1024m
+taskmanager.memory.process.size: 1568m

 Review comment:
   ```suggestion

   # To exclude JVM metaspace and overhead, please, use total Flink memory size instead of 'taskmanager.memory.process.size'.
   # It is not recommended to set both 'taskmanager.memory.process.size' and Flink memory.
   #
   # taskmanager.memory.flink.size: 1280
   ```




[GitHub] [flink] flinkbot edited a comment on issue #10845: [FLINK-15355][plugins]Classloader restricts access to parent to whitelist.

2020-01-15 Thread GitBox
flinkbot edited a comment on issue #10845: [FLINK-15355][plugins]Classloader 
restricts access to parent to whitelist.
URL: https://github.com/apache/flink/pull/10845#issuecomment-573736294
 
 
   
   ## CI report:
   
   * 0942cb8b913ba50ecf8d7ca28832c4c92bf78e6c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144178023) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4303)
 
   * ce62e6539d1394a5d27d8ef51db010104852e433 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/144207965) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4304)
 
   * 76bd16a406b4bbbee5d4189b66f1fdd36f98798f Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144384071) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4339)
 
   * 726f81492a05cc47657a9c30caf2e397e8c1bd02 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144404404) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4343)
 
   * 18934ef582e5ba278289541d8602b081c4f63367 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/14448) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4345)
 
   * ebd3076a0b24751eca2dce1d3faf646b56fe6426 UNKNOWN
   * ac9c94daa036cec2cb2bb2d53890e67594046479 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144479827) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4360)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10861: [FLINK-15558][Connector] Bump Elasticsearch version from 7.3.2 to 7.5.1 for es7 connector

2020-01-15 Thread GitBox
flinkbot edited a comment on issue #10861: [FLINK-15558][Connector] Bump 
Elasticsearch version from 7.3.2 to 7.5.1 for es7 connector
URL: https://github.com/apache/flink/pull/10861#issuecomment-574591161
 
 
   
   ## CI report:
   
   * f4506ad0e9302a279940670fa90c305c240b3c12 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/144485388) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4364)
 
   
   


[GitHub] [flink] flinkbot commented on issue #10862: [FLINK-15597][runtime] Relax sanity check of JVM memory overhead to be within its min/max

2020-01-15 Thread GitBox
flinkbot commented on issue #10862: [FLINK-15597][runtime] Relax sanity check 
of JVM memory overhead to be within its min/max
URL: https://github.com/apache/flink/pull/10862#issuecomment-574606585
 
 
   
   ## CI report:
   
   * 9e5e952e448571968cca13475d221ce4a62689db UNKNOWN
   
   


[GitHub] [flink] flinkbot edited a comment on issue #10858: [FLINK-15582][tests] Enable batch scheduling tests for both LegacyScheduler and DefaultScheduler

2020-01-15 Thread GitBox
flinkbot edited a comment on issue #10858: [FLINK-15582][tests] Enable batch 
scheduling tests for both LegacyScheduler and DefaultScheduler
URL: https://github.com/apache/flink/pull/10858#issuecomment-574561465
 
 
   
   ## CI report:
   
   * cf716ecc2f23aa89683be8096f16725b1f1f8d26 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/144472558) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4359)
 
   
   


[GitHub] [flink] flinkbot edited a comment on issue #10860: [FLINK-15145][config] Change TM memory configuration default values for FLIP-49.

2020-01-15 Thread GitBox
flinkbot edited a comment on issue #10860: [FLINK-15145][config] Change TM 
memory configuration default values for FLIP-49.
URL: https://github.com/apache/flink/pull/10860#issuecomment-574591065
 
 
   
   ## CI report:
   
   * 5f9cd2224ef458e5068f3179aa5be025848f8723 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/144485357) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4363)
 
   
   


[jira] [Commented] (FLINK-15549) integer overflow in SpillingResettableMutableObjectIterator

2020-01-15 Thread Aljoscha Krettek (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17015838#comment-17015838
 ] 

Aljoscha Krettek commented on FLINK-15549:
--

Yes, this is a problem. [~pnowojski] Are you already tracking this? I noticed 
you assigned [~caojian0613]

> integer overflow in SpillingResettableMutableObjectIterator
> ---
>
> Key: FLINK-15549
> URL: https://issues.apache.org/jira/browse/FLINK-15549
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataSet
>Affects Versions: 1.6.4, 1.7.2, 1.8.3, 1.9.1, 1.10.0
>Reporter: caojian0613
>Assignee: caojian0613
>Priority: Major
>  Labels: overflow
>
> The SpillingResettableMutableObjectIterator has a data overflow problem if 
> the number of elements in a single input exceeds Integer.MAX_VALUE.
> The reason is that, inside the SpillingResettableMutableObjectIterator, it 
> tracks the total number of elements and the number of elements currently read 
> with two int-typed fields (elementCount and currentElementNum); if the number 
> of elements exceeds Integer.MAX_VALUE, they overflow.
> If there is an overflow, then in the next iteration, after the input is reset, 
> the data will not be read at all, or only part of it will be read.
> Therefore, we should change the type of these two fields of 
> SpillingResettableIterator* from int to long, and we also need a pre-check 
> mechanism before such numerical overflow can occur.
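The wrap-around behaviour of an int counter is easy to demonstrate in isolation (a plain Java sketch, not the actual Flink fields):

```java
public class CounterOverflowDemo {
    public static void main(String[] args) {
        // An int element counter wraps around once it passes Integer.MAX_VALUE ...
        int intCount = Integer.MAX_VALUE;
        intCount++;
        System.out.println(intCount); // -2147483648

        // ... while a long counter keeps counting past that point.
        long longCount = Integer.MAX_VALUE;
        longCount++;
        System.out.println(longCount); // 2147483648
    }
}
```

Once the counter has wrapped to a negative value, comparisons such as `currentElementNum < elementCount` no longer behave as intended, which matches the partial or empty reads described in the issue.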





[GitHub] [flink] zentol commented on a change in pull request #10801: [FLINK-15521][e2e] Follow symbolic link when copying distribution

2020-01-15 Thread GitBox
zentol commented on a change in pull request #10801: [FLINK-15521][e2e] Follow 
symbolic link when copying distribution
URL: https://github.com/apache/flink/pull/10801#discussion_r366814124
 
 

 ##
 File path: 
flink-end-to-end-tests/flink-end-to-end-tests-common/src/main/java/org/apache/flink/tests/util/TestUtils.java
 ##
 @@ -87,15 +89,15 @@ public static Path getResourceJar(final String jarNameRegex) {
 	 * @throws IOException if any IO error happen.
 	 */
 	public static Path copyDirectory(final Path source, final Path destination) throws IOException {
-		Files.walkFileTree(source, new SimpleFileVisitor<Path>() {
+		Files.walkFileTree(source, EnumSet.of(FileVisitOption.FOLLOW_LINKS), Integer.MAX_VALUE, new SimpleFileVisitor<Path>() {
 			@Override
 			public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes ignored)
 					throws IOException {
-				final Path targetRir = destination.resolve(source.relativize(dir));
+				final Path targetDir = destination.resolve(source.relativize(dir));
 				try {
-					Files.copy(dir, targetRir, StandardCopyOption.COPY_ATTRIBUTES);
+					Files.copy(dir, targetDir, StandardCopyOption.COPY_ATTRIBUTES);
 				} catch (FileAlreadyExistsException e) {
-					if (!Files.isDirectory(targetRir)) {
+					if (!Files.isDirectory(targetDir)) {
 
 Review comment:
   I have to guess since it's been a while, but I suppose the idea is that if a 
file already exists it is likely that the contents are different (hence 
affecting the correctness of the operation), while this is not the case for 
directories generally.



