[jira] [Updated] (FLINK-16847) Support timestamp types in vectorized Python UDF

2020-03-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16847:
---
Labels: pull-request-available  (was: )

> Support timestamp types in vectorized Python UDF
> 
>
> Key: FLINK-16847
> URL: https://issues.apache.org/jira/browse/FLINK-16847
> Project: Flink
>  Issue Type: Task
>  Components: API / Python
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu opened a new pull request #11556: [FLINK-16847][python] Support timestamp types in vectorized Python UDF

2020-03-28 Thread GitBox
dianfu opened a new pull request #11556: [FLINK-16847][python] Support 
timestamp types in vectorized Python UDF
URL: https://github.com/apache/flink/pull/11556
 
 
   
   ## What is the purpose of the change
   
   *This pull request adds support for LocalZonedTimestampType and TimestampType 
in vectorized Python UDF.*
   
   ## Brief change log
   
 - *Add support for LocalZonedTimestampType in vectorized Python UDF*
 - *Add support for TimestampType in vectorized Python UDF*
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
 - *Java tests ArrowUtilsTest, BaseRowArrowReaderWriterTest and 
RowArrowReaderWriterTest.*
 - *Python tests test_pandas_udf.py*
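
   As context, Arrow-based serialization conventionally transfers timestamp columns as epoch-based integers with a declared unit. The sketch below (plain-stdlib Python; the helper names are illustrative assumptions, not code from this PR) shows the kind of millisecond-precision round trip a `TIMESTAMP(3)` value undergoes:

   ```python
   from datetime import datetime, timezone

   # Hypothetical helpers (not from the PR) illustrating a millisecond-precision
   # round trip, matching the precision of a TIMESTAMP(3) column.
   def timestamp_to_millis(ts: datetime) -> int:
       # Interpret the naive datetime as UTC and convert to epoch milliseconds.
       return int(ts.replace(tzinfo=timezone.utc).timestamp() * 1000)

   def millis_to_timestamp(ms: int) -> datetime:
       # Inverse conversion back to a naive UTC datetime.
       return datetime.fromtimestamp(ms / 1000, tz=timezone.utc).replace(tzinfo=None)

   ts = datetime(2020, 3, 28, 12, 0, 0, 500000)  # 500,000 us == 500 ms
   assert millis_to_timestamp(timestamp_to_millis(ts)) == ts
   ```

   Sub-millisecond precision is deliberately dropped in this sketch, which is why a `TIMESTAMP(3)` value survives the round trip exactly.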
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11482: [FLINK-16581][table] Minibatch deduplication lack state TTL bug fix

2020-03-28 Thread GitBox
flinkbot edited a comment on issue #11482: [FLINK-16581][table] Minibatch 
deduplication lack state TTL bug fix
URL: https://github.com/apache/flink/pull/11482#issuecomment-60726
 
 
   
   ## CI report:
   
   * 54a34b1dcbcaff049bed0cd11033711e44c38ccf Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/156127302) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6783)
 
   * f6bab1e9ea3eff8eaa5eab70e8fcbf2ec34d5276 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Created] (FLINK-16847) Support timestamp types in vectorized Python UDF

2020-03-28 Thread Dian Fu (Jira)
Dian Fu created FLINK-16847:
---

 Summary: Support timestamp types in vectorized Python UDF
 Key: FLINK-16847
 URL: https://issues.apache.org/jira/browse/FLINK-16847
 Project: Flink
  Issue Type: Task
  Components: API / Python
Reporter: Dian Fu
Assignee: Dian Fu
 Fix For: 1.11.0








[GitHub] [flink-statefun] tzulitai closed pull request #79: [FLINK-16730][docs] Add walkthrough distribution as build step

2020-03-28 Thread GitBox
tzulitai closed pull request #79: [FLINK-16730][docs] Add walkthrough 
distribution as build step
URL: https://github.com/apache/flink-statefun/pull/79
 
 
   




[GitHub] [flink-statefun] tzulitai edited a comment on issue #79: [FLINK-16730][docs] Add walkthrough distribution as build step

2020-03-28 Thread GitBox
tzulitai edited a comment on issue #79: [FLINK-16730][docs] Add walkthrough 
distribution as build step
URL: https://github.com/apache/flink-statefun/pull/79#issuecomment-605554070
 
 
   After some time thinking about this, I think we probably shouldn’t rush 
adding the zip dist building as a Maven build step.
   
   The problem is: every time you do a `mvn clean package`, it generates a git 
diff with the regenerated zip file, even if the contents are actually 
identical. It’ll be a bit confusing.
   
   We could add that build step behind a Maven profile that developers have to 
enable to build the doc walkthrough dists, but that’s almost the same work for 
devs as just manually executing the script.
   
   For now, let’s proceed like so:
   - Only generate the walkthrough zip in the docs buildbot (using the new 
`docs/download/create-walkthrough.sh`) and have that published with the docs. 
That’s the only time it really needs to be checked for an update, anyway. When 
building the docs locally, that download link will be broken, but I think 
that’s fine for now.
   - That means I’ll be removing the Maven build step and the zip from the 
repo, and keeping it excluded in the .gitignore.
   - Let’s revisit this once we realize we need more and more Python zip dists 
in the docs.
   
   Sorry for the back and forth on the solution, but after some thinking I 
think this is the safest approach to take first.




[GitHub] [flink-statefun] tzulitai edited a comment on issue #79: [FLINK-16730][docs] Add walkthrough distribution as build step

2020-03-28 Thread GitBox
tzulitai edited a comment on issue #79: [FLINK-16730][docs] Add walkthrough 
distribution as build step
URL: https://github.com/apache/flink-statefun/pull/79#issuecomment-605554070
 
 
   After some time thinking about this, I think we probably shouldn’t rush 
adding the zip dist building as a Maven build step.
   
   The problem is: every time you do a `mvn clean package`, it generates a git 
diff with the regenerated zip file, even if the contents are actually 
identical. It’ll be a bit confusing.
   
   We could add that build step behind a Maven profile that developers have to 
enable to build the doc walkthrough dists, but that’s almost the same work for 
devs as just manually executing the script.
   
   For now, let’s proceed like so:
   - Only generate the walkthrough zip in the docs buildbot (using the old 
`docs/download/build-walkthrough.sh`) and have that published with the docs. 
That’s the only time it really needs to be checked for an update, anyway. When 
building the docs locally, that download link will be broken, but I think 
that’s fine for now.
   - That means I’ll be removing the Maven build step and the zip from the 
repo, and keeping it excluded in the .gitignore.
   - Let’s revisit this once we realize we need more and more Python zip dists 
in the docs.
   
   Sorry for the back and forth on the solution, but after some thinking I 
think this is the safest approach to take first.




[GitHub] [flink-statefun] tzulitai edited a comment on issue #79: [FLINK-16730][docs] Add walkthrough distribution as build step

2020-03-28 Thread GitBox
tzulitai edited a comment on issue #79: [FLINK-16730][docs] Add walkthrough 
distribution as build step
URL: https://github.com/apache/flink-statefun/pull/79#issuecomment-605554070
 
 
   After some time thinking about this, I think we probably shouldn’t rush 
adding the zip dist building as a Maven build step.
   
   The problem is: every time you do a `mvn clean package`, it generates a git 
diff with the regenerated zip file, even if the contents are actually 
identical. It’ll be a bit confusing.
   
   We could add that build step behind a Maven profile that developers have to 
enable to build the doc walkthrough dists, but that’s almost the same work for 
devs as just manually executing the script.
   
   For now, let’s proceed like so:
   - Only generate the walkthrough zip in the docs buildbot (using the new 
`tools/docs/create_python_walkthrough.sh`) and have that published with the 
docs. That’s the only time it really needs to be checked for an update, anyway. 
When building the docs locally, that download link will be broken, but I think 
that’s fine for now.
   - That means I’ll be removing the Maven build step and the zip from the 
repo, and keeping it excluded in the .gitignore.
   - Let’s revisit this once we realize we need more and more Python zip dists 
in the docs.
   
   Sorry for the back and forth on the solution, but after some thinking I 
think this is the safest approach to take first.




[GitHub] [flink-statefun] tzulitai commented on issue #79: [FLINK-16730][docs] Add walkthrough distribution as build step

2020-03-28 Thread GitBox
tzulitai commented on issue #79: [FLINK-16730][docs] Add walkthrough 
distribution as build step
URL: https://github.com/apache/flink-statefun/pull/79#issuecomment-605554070
 
 
   After some time thinking about this, I think we probably shouldn’t rush 
adding the zip dist building as a Maven build step.
   
   The problem is: every time you do a `mvn clean package`, it generates a git 
diff with the regenerated zip file, even if the contents are actually 
identical. It’ll be a bit confusing.
   
   We could add that build step behind a Maven profile that developers have to 
enable to build the doc walkthrough dists, but that’s almost the same work for 
devs as just manually executing the script.
   
   For now, let’s proceed like so:
   - Only generate the walkthrough zip in the docs buildbot and have that 
published with the docs. That’s the only time it really needs to be checked 
for an update, anyway. When building the docs locally, that download link will 
be broken, but I think that’s fine for now.
   - That means I’ll be removing the Maven build step and the zip from the 
repo, and keeping it excluded in the .gitignore.
   - Let’s revisit this once we realize we need more and more Python zip dists 
in the docs.
   
   Sorry for the back and forth on the solution, but after some thinking I 
think this is the safest approach to take first.




[jira] [Closed] (FLINK-16489) Use flink on yarn,RM restart AM,but the flink job is not restart from the saved checkpoint.

2020-03-28 Thread liangji (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liangji closed FLINK-16489.
---
Resolution: Not A Problem

> Use flink on yarn,RM restart AM,but the flink job is not restart from the 
> saved checkpoint.
> ---
>
> Key: FLINK-16489
> URL: https://issues.apache.org/jira/browse/FLINK-16489
> Project: Flink
>  Issue Type: Bug
>Reporter: liangji
>Priority: Major
> Attachments: image-2020-03-09-18-06-59-710.png, 
> image-2020-03-25-23-09-06-230.png
>
>
> 1. Environment
> a. flink-1.9.0
> b. yarn version
> Hadoop 2.6.0-cdh5.5.0
>  Subversion [http://github.com/cloudera/hadoop] -r 
> fd21232cef7b8c1f536965897ce20f50b83ee7b2
>  Compiled by jenkins on 2015-11-09T20:39Z
>  Compiled with protoc 2.5.0
>  From source with checksum 98e07176d1787150a6a9c087627562c
>  This command was run using 
> /opt/cloudera/parcels/CDH-5.5.0-1.cdh5.5.0.p0.8/jars/hadoop-common-2.6.0-cdh5.5.0.jar
> c. We enable Flink checkpointing and use the default checkpoint 
> configuration.
> 2. Problem reproduction
> a. Make the AM run on node1;
> b. Decommission the NM on node1
> 3. Problem
> !image-2020-03-09-18-06-59-710.png!
> We can see from the pic above that the last AM saved chk-1522 at 2020-03-04 
> 14:12:48. Then the second AM restarted with chk-1. But in the end, we found 
> the data was not correct. So we restarted the application from chk-1522 
> manually with flink cli -s, and then we confirmed the data was right.
> Doing as above, we find that the AM restarted, but the Flink job did not 
> restart from the saved checkpoint. So is it normal, or are there some 
> configurations that I have not configured?





[jira] [Issue Comment Deleted] (FLINK-16489) Use flink on yarn,RM restart AM,but the flink job is not restart from the saved checkpoint.

2020-03-28 Thread liangji (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liangji updated FLINK-16489:

Comment: was deleted

(was: reopened because the number of AM attempts exceeds 
{{yarn.resourcemanager.am.max-attempts=2}})

> Use flink on yarn,RM restart AM,but the flink job is not restart from the 
> saved checkpoint.
> ---
>
> Key: FLINK-16489
> URL: https://issues.apache.org/jira/browse/FLINK-16489
> Project: Flink
>  Issue Type: Bug
>Reporter: liangji
>Priority: Major
> Attachments: image-2020-03-09-18-06-59-710.png, 
> image-2020-03-25-23-09-06-230.png
>
>
> 1. Environment
> a. flink-1.9.0
> b. yarn version
> Hadoop 2.6.0-cdh5.5.0
>  Subversion [http://github.com/cloudera/hadoop] -r 
> fd21232cef7b8c1f536965897ce20f50b83ee7b2
>  Compiled by jenkins on 2015-11-09T20:39Z
>  Compiled with protoc 2.5.0
>  From source with checksum 98e07176d1787150a6a9c087627562c
>  This command was run using 
> /opt/cloudera/parcels/CDH-5.5.0-1.cdh5.5.0.p0.8/jars/hadoop-common-2.6.0-cdh5.5.0.jar
> c. We enable Flink checkpointing and use the default checkpoint 
> configuration.
> 2. Problem reproduction
> a. Make the AM run on node1;
> b. Decommission the NM on node1
> 3. Problem
> !image-2020-03-09-18-06-59-710.png!
> We can see from the pic above that the last AM saved chk-1522 at 2020-03-04 
> 14:12:48. Then the second AM restarted with chk-1. But in the end, we found 
> the data was not correct. So we restarted the application from chk-1522 
> manually with flink cli -s, and then we confirmed the data was right.
> Doing as above, we find that the AM restarted, but the Flink job did not 
> restart from the saved checkpoint. So is it normal, or are there some 
> configurations that I have not configured?





[jira] [Issue Comment Deleted] (FLINK-16489) Use flink on yarn,RM restart AM,but the flink job is not restart from the saved checkpoint.

2020-03-28 Thread liangji (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liangji updated FLINK-16489:

Comment: was deleted

(was: Additionally, I have another question. When I killed the JM process 5 
times, the AM restarted the same number of times. This exceeds the configured 
{{yarn.resourcemanager.am.max-attempts=2}}, and we use the default config for 
{{yarn.application-attempts}}. Is it because of AM preemption?

!image-2020-03-25-23-09-06-230.png!)

> Use flink on yarn,RM restart AM,but the flink job is not restart from the 
> saved checkpoint.
> ---
>
> Key: FLINK-16489
> URL: https://issues.apache.org/jira/browse/FLINK-16489
> Project: Flink
>  Issue Type: Bug
>Reporter: liangji
>Priority: Major
> Attachments: image-2020-03-09-18-06-59-710.png, 
> image-2020-03-25-23-09-06-230.png
>
>
> 1. Environment
> a. flink-1.9.0
> b. yarn version
> Hadoop 2.6.0-cdh5.5.0
>  Subversion [http://github.com/cloudera/hadoop] -r 
> fd21232cef7b8c1f536965897ce20f50b83ee7b2
>  Compiled by jenkins on 2015-11-09T20:39Z
>  Compiled with protoc 2.5.0
>  From source with checksum 98e07176d1787150a6a9c087627562c
>  This command was run using 
> /opt/cloudera/parcels/CDH-5.5.0-1.cdh5.5.0.p0.8/jars/hadoop-common-2.6.0-cdh5.5.0.jar
> c. We enable Flink checkpointing and use the default checkpoint 
> configuration.
> 2. Problem reproduction
> a. Make the AM run on node1;
> b. Decommission the NM on node1
> 3. Problem
> !image-2020-03-09-18-06-59-710.png!
> We can see from the pic above that the last AM saved chk-1522 at 2020-03-04 
> 14:12:48. Then the second AM restarted with chk-1. But in the end, we found 
> the data was not correct. So we restarted the application from chk-1522 
> manually with flink cli -s, and then we confirmed the data was right.
> Doing as above, we find that the AM restarted, but the Flink job did not 
> restart from the saved checkpoint. So is it normal, or are there some 
> configurations that I have not configured?





[GitHub] [flink] godfreyhe commented on issue #11544: [FLINK-16822] [sql-client] `table.xx` property set from CLI should also be set into SessionState's TableConfig

2020-03-28 Thread GitBox
godfreyhe commented on issue #11544: [FLINK-16822] [sql-client] `table.xx` 
property set from CLI should also be set into SessionState's TableConfig
URL: https://github.com/apache/flink/pull/11544#issuecomment-605552309
 
 
   > Can this also fix runtime environment configurations? Currently, SET 
command can't set a runtime configuration because we ignore them in 
`Environment#enrich`, e.g. `SET pipeline.object-reuse=true` doesn't work.
   
   Nope, I think that's a new feature. Currently only a few runtime 
configurations are supported; I think we can create a Jira to support it.




[GitHub] [flink] godfreyhe commented on a change in pull request #11544: [FLINK-16822] [sql-client] `table.xx` property set from CLI should also be set into SessionState's TableConfig

2020-03-28 Thread GitBox
godfreyhe commented on a change in pull request #11544: [FLINK-16822] 
[sql-client] `table.xx` property set from CLI should also be set into 
SessionState's TableConfig
URL: https://github.com/apache/flink/pull/11544#discussion_r399738426
 
 

 ##
 File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/LocalExecutor.java
 ##
 @@ -281,13 +283,18 @@ public void setSessionProperty(String sessionId, String key, String value) throw
     ExecutionContext context = getExecutionContext(sessionId);
     Environment env = context.getEnvironment();
     Environment newEnv = Environment.enrich(env, ImmutableMap.of(key, value), ImmutableMap.of());
+    ExecutionContext.SessionState sessionState = context.getSessionState();
+    // update table config
+    newEnv.getConfiguration().asMap().forEach((k, v) ->
+        sessionState.config.getConfiguration().setString(k, v));
 
 Review comment:
   Good catch, we should store the original `SessionState` so we can reset all 
properties to their initial state.
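
   The suggested fix follows a general snapshot-and-restore pattern. Here is a minimal, language-agnostic sketch in Python (the class and property names are hypothetical, not the actual SQL Client classes): keep an immutable copy of the initial session properties so a later reset can restore them after any number of SET commands.

   ```python
   class SessionState:
       """Holds the live session properties plus an immutable snapshot of the
       defaults, so a reset can undo any number of set() calls."""

       def __init__(self, defaults):
           self._defaults = dict(defaults)  # snapshot of the initial state
           self._props = dict(defaults)     # live, mutable configuration

       def set(self, key, value):
           self._props[key] = value

       def reset(self):
           # Drop all overrides and restore the original snapshot.
           self._props = dict(self._defaults)

       def get(self, key):
           return self._props.get(key)
   ```

   For example, a state created with `{"table.planner": "blink"}` returns to `"blink"` after any sequence of `set` calls followed by `reset`.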




[GitHub] [flink-statefun] tzulitai commented on a change in pull request #79: [FLINK-16730][docs] Add walkthrough distribution as build step

2020-03-28 Thread GitBox
tzulitai commented on a change in pull request #79: [FLINK-16730][docs] Add 
walkthrough distribution as build step
URL: https://github.com/apache/flink-statefun/pull/79#discussion_r399734879
 
 

 ##
 File path: statefun-examples/statefun-python-greeter/pom.xml
 ##
 @@ -0,0 +1,55 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xmlns="http://maven.apache.org/POM/4.0.0"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+
+<modelVersion>4.0.0</modelVersion>
+
+<parent>
+<artifactId>statefun-parent</artifactId>
 
 Review comment:
   this should be `statefun-examples`




[GitHub] [flink-statefun] tzulitai commented on a change in pull request #79: [FLINK-16730][docs] Add walkthrough distribution as build step

2020-03-28 Thread GitBox
tzulitai commented on a change in pull request #79: [FLINK-16730][docs] Add 
walkthrough distribution as build step
URL: https://github.com/apache/flink-statefun/pull/79#discussion_r399735561
 
 

 ##
 File path: tools/releasing/update_branch_version.sh
 ##
 @@ -77,6 +77,9 @@ perl -pi -e "s#version_title: \"$OLD_VERSION\"#version_title: \"$NEW_VERSION\"#"
 find . -name 'Dockerfile*' -type f -exec perl -pi -e "s#FROM flink-statefun:$OLD_VERSION#FROM flink-statefun:$NEW_VERSION#" {} \;
 perl -pi -e "s#VERSION_TAG=$OLD_VERSION#VERSION_TAG=$NEW_VERSION#" tools/docker/build-stateful-functions.sh
 
+# Rebuild python walkthrough dist with updated versions
+./../docs/create_python_walkthrough.sh
 
 Review comment:
   This should be the relative path from project root:
   ```suggestion
   tools/docs/create_python_walkthrough.sh
   ```




[GitHub] [flink] flinkbot edited a comment on issue #11356: [FLINK-15000] [web] WebUI metrics is very slow in large parallelism

2020-03-28 Thread GitBox
flinkbot edited a comment on issue #11356: [FLINK-15000] [web] WebUI metrics is 
very slow in large parallelism
URL: https://github.com/apache/flink/pull/11356#issuecomment-596856858
 
 
   
   ## CI report:
   
   * ce87022bb654cbee5e5ac8f31c6e72dc03b89c9f Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/156278518) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6789)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11356: [FLINK-15000] [web] WebUI metrics is very slow in large parallelism

2020-03-28 Thread GitBox
flinkbot edited a comment on issue #11356: [FLINK-15000] [web] WebUI metrics is 
very slow in large parallelism
URL: https://github.com/apache/flink/pull/11356#issuecomment-596856858
 
 
   
   ## CI report:
   
   * ce87022bb654cbee5e5ac8f31c6e72dc03b89c9f Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/156278518) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6789)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11356: [FLINK-15000] [web] WebUI metrics is very slow in large parallelism

2020-03-28 Thread GitBox
flinkbot edited a comment on issue #11356: [FLINK-15000] [web] WebUI metrics is 
very slow in large parallelism
URL: https://github.com/apache/flink/pull/11356#issuecomment-596856858
 
 
   
   ## CI report:
   
   * 545fd6194256f5daefac1e0a6fc2d7fb349c7aa6 Travis: 
[CANCELED](https://travis-ci.com/github/flink-ci/flink/builds/156250357) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6787)
 
   * ce87022bb654cbee5e5ac8f31c6e72dc03b89c9f Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/156278518) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6789)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11356: [FLINK-15000] [web] WebUI metrics is very slow in large parallelism

2020-03-28 Thread GitBox
flinkbot edited a comment on issue #11356: [FLINK-15000] [web] WebUI metrics is 
very slow in large parallelism
URL: https://github.com/apache/flink/pull/11356#issuecomment-596856858
 
 
   
   ## CI report:
   
   * 545fd6194256f5daefac1e0a6fc2d7fb349c7aa6 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/156250357) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6787)
 
   * ce87022bb654cbee5e5ac8f31c6e72dc03b89c9f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] faaronzheng commented on a change in pull request #11356: [FLINK-15000] [web] WebUI metrics is very slow in large parallelism

2020-03-28 Thread GitBox
faaronzheng commented on a change in pull request #11356: [FLINK-15000] [web] 
WebUI metrics is very slow in large parallelism
URL: https://github.com/apache/flink/pull/11356#discussion_r399729007
 
 

 ##
 File path: 
flink-runtime-web/web-dashboard/src/app/pages/job/overview/chart/job-overview-drawer-chart.component.ts
 ##
 @@ -64,15 +71,35 @@ export class JobOverviewDrawerChartComponent implements OnInit, OnDestroy {
   closeMetric(metric: string) {
     this.listOfSelectedMetric = this.listOfSelectedMetric.filter(item => item !== metric);
     this.jobService.metricsCacheMap.set(this.cacheMetricKey, this.listOfSelectedMetric);
+    this.showList = [metric, ...this.showList];
     this.updateUnselectedMetricList();
   }
 
   updateUnselectedMetricList() {
     this.listOfUnselectedMetric = this.listOfMetricName.filter(item => this.listOfSelectedMetric.indexOf(item) === -1);
     this.showList = this.showList.filter(item => this.listOfSelectedMetric.indexOf(item) === -1);
     this.optionList = this.listOfUnselectedMetric;
   }
 
   constructor(private metricsService: MetricsService, private jobService: JobService, private cdr: ChangeDetectorRef) {}
 
+  nzOnSearch(val: string) {
+    this.showListPageNum = 0;
 
 Review comment:
   thanks, the code is already formatted by pre-commit hooks. 




[GitHub] [flink] flinkbot edited a comment on issue #11356: [FLINK-15000] [web] WebUI metrics is very slow in large parallelism

2020-03-28 Thread GitBox
flinkbot edited a comment on issue #11356: [FLINK-15000] [web] WebUI metrics is 
very slow in large parallelism
URL: https://github.com/apache/flink/pull/11356#issuecomment-596856858
 
 
   
   ## CI report:
   
   * 8917c0690a4d84ac1ca15a4ca4f68af921f8bdc1 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/152561426) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6097)
 
   * 545fd6194256f5daefac1e0a6fc2d7fb349c7aa6 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/156250357) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6787)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11356: [FLINK-15000] [web] WebUI metrics is very slow in large parallelism

2020-03-28 Thread GitBox
flinkbot edited a comment on issue #11356: [FLINK-15000] [web] WebUI metrics is 
very slow in large parallelism
URL: https://github.com/apache/flink/pull/11356#issuecomment-596856858
 
 
   
   ## CI report:
   
   * 8917c0690a4d84ac1ca15a4ca4f68af921f8bdc1 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/152561426) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6097)
 
   * 545fd6194256f5daefac1e0a6fc2d7fb349c7aa6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Comment Edited] (FLINK-16846) Add python docker images

2020-03-28 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-16846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070016#comment-17070016
 ] 

Ismaël Mejía edited comment on FLINK-16846 at 3/28/20, 9:16 PM:


[~plucas] WDYT is the best approach: to inherit from the current images and add 
a specific Python version (3.5, 3.6, ...), or just to add Python by default? 
I will write a message to the ML to see what others think.


was (Author: iemejia):
[~plucas] WDYT? I will write a message to the ML to see what others think.

> Add python docker images
> 
>
> Key: FLINK-16846
> URL: https://issues.apache.org/jira/browse/FLINK-16846
> Project: Flink
>  Issue Type: Improvement
>  Components: Release System / Docker
>Reporter: Ismaël Mejía
>Priority: Major
>
> Currently we do not include Python in the docker images. This issue is to 
> include it or create derived Python-specific images.





[jira] [Updated] (FLINK-16846) Add python docker images

2020-03-28 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-16846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismaël Mejía updated FLINK-16846:
-
Summary: Add python docker images  (was: Add docker images with python)

> Add python docker images
> 
>
> Key: FLINK-16846
> URL: https://issues.apache.org/jira/browse/FLINK-16846
> Project: Flink
>  Issue Type: Improvement
>  Components: Release System / Docker
>Reporter: Ismaël Mejía
>Priority: Major
>
> Currently we do not include Python in the docker images. This issue is to 
> include it or create derived Python-specific images.





[jira] [Commented] (FLINK-16773) Flink 1.10 test execution is broken due to premature test cluster shutdown

2020-03-28 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-16773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070020#comment-17070020
 ] 

Ismaël Mejía commented on FLINK-16773:
--

Was this bug intended to be created on the Beam issue tracker [~mxm] ?

> Flink 1.10 test execution is broken due to premature test cluster shutdown
> --
>
> Key: FLINK-16773
> URL: https://issues.apache.org/jira/browse/FLINK-16773
> Project: Flink
>  Issue Type: Bug
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>Priority: Critical
>
> Due to a race condition with the test cluster shutdown code, tests may fail 
> because the Flink job result cannot be retrieved when the cluster already has 
> been shut down. This is a Flink 1.10.0 bug which is addressed upstream via 
> FLINK-16705. 
> If this doesn't get addressed upstream, we may also be able to work around 
> this. 
> {noformat}
> java.lang.RuntimeException: Pipeline execution failed
>   at org.apache.beam.runners.flink.FlinkRunner.run(FlinkRunner.java:115)
>   at 
> org.apache.beam.runners.flink.TestFlinkRunner.run(TestFlinkRunner.java:61)
>   at org.apache.beam.sdk.Pipeline.run(Pipeline.java:317)
>   at org.apache.beam.sdk.testing.TestPipeline.run(TestPipeline.java:350)
>   at org.apache.beam.sdk.testing.TestPipeline.run(TestPipeline.java:331)
>   at 
> org.apache.beam.sdk.transforms.ViewTest.testMultimapSideInput(ViewTest.java:543)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.apache.beam.sdk.testing.TestPipeline$1.evaluate(TestPipeline.java:319)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:266)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>  

[jira] [Commented] (FLINK-16846) Add docker images with python

2020-03-28 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-16846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070016#comment-17070016
 ] 

Ismaël Mejía commented on FLINK-16846:
--

[~plucas] WDYT? I will write a message to the ML to see what others think.

> Add docker images with python
> -
>
> Key: FLINK-16846
> URL: https://issues.apache.org/jira/browse/FLINK-16846
> Project: Flink
>  Issue Type: Improvement
>  Components: Release System / Docker
>Reporter: Ismaël Mejía
>Priority: Major
>
> Currently we do not include Python in the docker images. This issue is to 
> include it or create derived Python-specific images.





[jira] [Created] (FLINK-16846) Add docker images with python

2020-03-28 Thread Jira
Ismaël Mejía created FLINK-16846:


 Summary: Add docker images with python
 Key: FLINK-16846
 URL: https://issues.apache.org/jira/browse/FLINK-16846
 Project: Flink
  Issue Type: Improvement
  Components: Release System / Docker
Reporter: Ismaël Mejía


Currently we do not include Python in the docker images. This issue is to 
include it or create derived Python-specific images.





[GitHub] [flink] flinkbot edited a comment on issue #11555: [FLINK-16576][state backends] Correct the logic of KeyGroupStateHandle#getIntersection

2020-03-28 Thread GitBox
flinkbot edited a comment on issue #11555: [FLINK-16576][state backends] 
Correct the logic of KeyGroupStateHandle#getIntersection
URL: https://github.com/apache/flink/pull/11555#issuecomment-605488014
 
 
   
   ## CI report:
   
   * c5b4e3f0e6267ce5f6bbcd1c920516a2f0f420f9 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/156142155) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6786)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11555: [FLINK-16576][state backends] Correct the logic of KeyGroupStateHandle#getIntersection

2020-03-28 Thread GitBox
flinkbot edited a comment on issue #11555: [FLINK-16576][state backends] 
Correct the logic of KeyGroupStateHandle#getIntersection
URL: https://github.com/apache/flink/pull/11555#issuecomment-605488014
 
 
   
   ## CI report:
   
   * c5b4e3f0e6267ce5f6bbcd1c920516a2f0f420f9 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/156142155) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6786)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Updated] (FLINK-16260) Add docker images based on Java 11

2020-03-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16260:
---
Labels: pull-request-available  (was: )

> Add docker images based on Java 11
> --
>
> Key: FLINK-16260
> URL: https://issues.apache.org/jira/browse/FLINK-16260
> Project: Flink
>  Issue Type: New Feature
>  Components: Release System / Docker
>Reporter: Ismaël Mejía
>Assignee: Ismaël Mejía
>Priority: Major
>  Labels: pull-request-available
>
> Since 1.10.0 supports Java 11, we can add a version of the docker image based 
> on Java 11
> Feature [requested in our old issue 
> tracker|https://github.com/docker-flink/docker-flink/issues/97] and moved here





[GitHub] [flink-docker] iemejia opened a new pull request #9: [FLINK-16260][docker] Add docker images based on Java 11

2020-03-28 Thread GitBox
iemejia opened a new pull request #9: [FLINK-16260][docker] Add docker images 
based on Java 11
URL: https://github.com/apache/flink-docker/pull/9
 
 
   I have some doubts now that we are back to the Apache Flink repo.
   - Do we need an additional Flink committer in place to merge/publish this, 
Patrick? If so, maybe it would be a good idea for you and/or me to become one.
   - Would an intermediary release like this require a vote on the mailing list 
too?
   
   R: @patricklucas 
   CC: @rmetzger  (in case we need a committer blessing)
   




[GitHub] [flink-statefun] sjwiesman opened a new pull request #79: [FLINK-16730][docs] Add walkthrough distribution as build step

2020-03-28 Thread GitBox
sjwiesman opened a new pull request #79: [FLINK-16730][docs] Add walkthrough 
distribution as build step
URL: https://github.com/apache/flink-statefun/pull/79
 
 
   The Python walkthrough is based on the Python greeter example and includes 
skeleton code to get started quickly. Python does not have an equivalent of a 
Maven archetype, so I wanted to provide a zip download.
   
   I just realized that the link on master and rc3 is broken because the 
.gitignore does not allow zip files. I also don't think we can include the 
zip due to requirements not to ship binary files (but please correct me if I'm 
wrong here).
   
   The best solution I could come up with was a bash script, 
`./downloads/copy-walkthrough-distribution.sh`, that copies 
`statefun-examples/statefun-greeter-python` and modifies certain files, plus a 
Jekyll plugin that creates the zip file as a build step. 
   
   I am happy for suggestions if someone has a better idea of how to do this. 
   
   cc @tzulitai 




[jira] [Closed] (FLINK-16838) Stateful Functions Quickstart archetype Dockerfile should reference a specific version tag

2020-03-28 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-16838.
---
Fix Version/s: statefun-2.0
   Resolution: Fixed

Fixed.

master via 4750c144bbe6bd76a075c0d69c402785286eedb5
release-2.0 via 1a98294f6ba1713b11c43ac3eb534974439ae56d

> Stateful Functions Quickstart archetype Dockerfile should reference a 
> specific version tag
> --
>
> Key: FLINK-16838
> URL: https://issues.apache.org/jira/browse/FLINK-16838
> Project: Flink
>  Issue Type: Bug
>  Components: Stateful Functions
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Critical
>  Labels: pull-request-available
> Fix For: statefun-2.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, the quickstart archetype provides a skeleton Dockerfile that 
> always builds on top of the latest image:
> {code}
> FROM statefun
> {code}
> While it happens to work for the first-ever release, since the {{latest}} tag 
> will (coincidentally) point to the correct version,
> this will no longer be correct once we have multiple releases.





[jira] [Closed] (FLINK-16842) Ridesharing example simulator built artifact is missing NOTICE / LICENSE for bundled dependencies

2020-03-28 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-16842.
---
Resolution: Fixed

Fixed.

master - 6f6476cb99f9e70925c2b483c69eaca18fb52cc0
release-2.0 - 2b77e200df3595101deba6a6f7c2d95f02f3735f

> Ridesharing example simulator built artifact is missing NOTICE / LICENSE for 
> bundled dependencies
> -
>
> Key: FLINK-16842
> URL: https://issues.apache.org/jira/browse/FLINK-16842
> Project: Flink
>  Issue Type: Bug
>  Components: Stateful Functions
>Affects Versions: statefun-2.0
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Blocker
> Fix For: statefun-2.0
>
>
> The {{statefun-ridesharing-example-simulator}} artifact bundles 
> {{spring-boot}} as a dependency, which in turn pulls in some other 
> dependencies that are non-ASLv2.
> We should add NOTICE / LICENSE files to the built artifact for those.





[jira] [Closed] (FLINK-16841) Stateful Function artifacts jars should not bundle proto sources

2020-03-28 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-16841.
---
Resolution: Fixed

Fixed -

master: e72eb732ba6403ab915ab73570a9d96431254fa9
release-2.0: 621a6eefbd3a06321521cea9b456d6be39a26cbd

> Stateful Function artifacts jars should not bundle proto sources
> 
>
> Key: FLINK-16841
> URL: https://issues.apache.org/jira/browse/FLINK-16841
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Stateful Functions
>Affects Versions: statefun-2.0
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Blocker
> Fix For: statefun-2.0
>
>
> These protobuf definition files are being bundled in built artifacts:
> {code}
> google/protobuf/any.proto
> google/protobuf/api.proto
> google/protobuf/descriptor.proto
> google/protobuf/duration.proto
> google/protobuf/empty.proto
> google/protobuf/field_mask.proto
> google/protobuf/source_context.proto
> google/protobuf/struct.proto
> google/protobuf/timestamp.proto
> google/protobuf/type.proto
> google/protobuf/wrappers.proto
> {code}
> This is caused by the {{addProtoSources}} configuration of the 
> {{protoc-jar-maven-plugin}}.
> We should remove those, because:
> - Bundling those will require licensing acknowledgement to Protobuf in our 
> artifacts.
> - Those definition files are not used directly by Stateful Functions at all.





[jira] [Closed] (FLINK-16843) Python SDK distribution is missing LICENSE and NOTICE files

2020-03-28 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-16843.
---
Resolution: Fixed

Fixed.

master - a06db812d3f4c153886df97e31d96d9534b1ca67
release-2.0: c0934bbe53723d4e955d32bfc7da73f92681cffd

> Python SDK distribution is missing LICENSE and NOTICE files
> ---
>
> Key: FLINK-16843
> URL: https://issues.apache.org/jira/browse/FLINK-16843
> Project: Flink
>  Issue Type: Bug
>  Components: Stateful Functions
>Affects Versions: statefun-2.0
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Blocker
> Fix For: statefun-2.0
>
>
> The Python SDK distributions for Stateful Functions do not bundle any LICENSE 
> or NOTICE files.
> This should be fixed, as these are required to be included in all 
> ASF-released distributions.





[jira] [Commented] (FLINK-16730) Python SDK Getting Started

2020-03-28 Thread Tzu-Li (Gordon) Tai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17069961#comment-17069961
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-16730:
-

Resolved in release-2.0 with 6de1987ad8cf2e3511a762f4889554059d9067a0

> Python SDK Getting Started
> --
>
> Key: FLINK-16730
> URL: https://issues.apache.org/jira/browse/FLINK-16730
> Project: Flink
>  Issue Type: Improvement
>  Components: Stateful Functions
>Reporter: Seth Wiesman
>Assignee: Seth Wiesman
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> We should add a python specific version of walkthrough for users to quickly 
> get started. 





[GitHub] [flink] flinkbot edited a comment on issue #11555: [FLINK-16576][state backends] Correct the logic of KeyGroupStateHandle#getIntersection

2020-03-28 Thread GitBox
flinkbot edited a comment on issue #11555: [FLINK-16576][state backends] 
Correct the logic of KeyGroupStateHandle#getIntersection
URL: https://github.com/apache/flink/pull/11555#issuecomment-605488014
 
 
   
   ## CI report:
   
   * c5b4e3f0e6267ce5f6bbcd1c920516a2f0f420f9 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/156142155) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6786)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] zentol commented on issue #11551: [FLINK-16834] Add flink-clients dependency to all example modules

2020-03-28 Thread GitBox
zentol commented on issue #11551: [FLINK-16834] Add flink-clients dependency to 
all example modules 
URL: https://github.com/apache/flink/pull/11551#issuecomment-605488892
 
 
We must add this dependency directly to the example; the Scala dependency 
must not be in a parent pom module.




[GitHub] [flink] flinkbot commented on issue #11555: [FLINK-16576][state backends] Correct the logic of KeyGroupStateHandle#getIntersection

2020-03-28 Thread GitBox
flinkbot commented on issue #11555: [FLINK-16576][state backends] Correct the 
logic of KeyGroupStateHandle#getIntersection
URL: https://github.com/apache/flink/pull/11555#issuecomment-605488014
 
 
   
   ## CI report:
   
   * c5b4e3f0e6267ce5f6bbcd1c920516a2f0f420f9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #11555: [FLINK-16576][state backends] Correct the logic of KeyGroupStateHandle#getIntersection

2020-03-28 Thread GitBox
flinkbot commented on issue #11555: [FLINK-16576][state backends] Correct the 
logic of KeyGroupStateHandle#getIntersection
URL: https://github.com/apache/flink/pull/11555#issuecomment-605485414
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit c5b4e3f0e6267ce5f6bbcd1c920516a2f0f420f9 (Sat Mar 28 
16:33:52 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-16576) State inconsistency on restore with memory state backends

2020-03-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16576:
---
Labels: pull-request-available  (was: )

> State inconsistency on restore with memory state backends
> -
>
> Key: FLINK-16576
> URL: https://issues.apache.org/jira/browse/FLINK-16576
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.9.2, 1.10.0
>Reporter: Nico Kruber
>Assignee: Congxian Qiu(klion26)
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.3, 1.10.1, 1.11.0
>
>
> I occasionally see a few state inconsistencies with the {{TopSpeedWindowing}} 
> example in Flink. Restore would fail with any of these causes, but only 
> for the memory state backends and only with some combinations of the 
> parallelism I took the savepoint with and the parallelism I restore the job 
> with:
> {code:java}
> java.lang.IllegalArgumentException: KeyGroupRange{startKeyGroup=64, 
> endKeyGroup=95} does not contain key group 97 {code}
> or
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.state.heap.HeapRestoreOperation.readKeyGroupStateData(HeapRestoreOperation.java:280)
>  {code}
> or
> {code:java}
> java.io.IOException: Corrupt stream, found tag: 8
>   at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:217)
>  {code}
>  
> I managed to make it reproducible in a test that I quickly hacked together in 
> [https://github.com/NicoK/flink/blob/state.corruption.debug/flink-examples/flink-examples-streaming/src/test/java/org/apache/flink/streaming/test/examples/windowing/TopSpeedWindowingSavepointRestoreITCase.java]
>  (please checkout the whole repository since I had to change some 
> dependencies).
> In a bit more detail, this is what I discovered before, also with a manual 
> savepoint on S3:
> A savepoint that was taken with parallelism 2 (p=2) shows the restore 
> failure in three different ways (all running on Flink 1.10.0, but I also see 
> it in Flink 1.9):
>  * first of all, if I try to restore with p=2, everything is fine
>  * if I restore with p=4 I get an exception like the one mentioned above:
> {code:java}
> 2020-03-11 15:53:35,149 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph- 
> Window(GlobalWindows(), DeltaTrigger, TimeEvictor, ComparableAggregator, 
> PassThroughWindowFunction) -> Sink: Print to Std. Out (3/4) 
> (2ecdb03905cc8a376d43b086925452a6) switched from RUNNING to FAILED.
> java.lang.Exception: Exception while creating StreamOperatorStateContext.
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:191)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:255)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeStateAndOpen(StreamTask.java:1006)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:454)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:94)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:449)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.util.FlinkException: Could not restore keyed 
> state backend for 
> EvictingWindowOperator_90bea66de1c231edf33913ecd54406c1_(3/4) from any of the 
> 1 provided restore options.
>   at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:135)
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.keyedStatedBackend(StreamTaskStateInitializerImpl.java:304)
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:131)
>   ... 9 more
> Caused by: org.apache.flink.runtime.state.BackendBuildingException: Failed 
> when trying to restore heap backend
>   at 
> org.apache.flink.runtime.state.heap.HeapKeyedStateBackendBuilder.build(HeapKeyedStateBackendBuilder.java:116)
>   at 
> org.apache.flink.runtime.state.filesystem.FsStateBackend.createKeyedStateBackend(FsStateBackend.java:529)
>   at 
> 
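The `KeyGroupRange{startKeyGroup=64, endKeyGroup=95} does not contain key group 97` error above comes from Flink's key-group-to-subtask assignment. A minimal sketch of that arithmetic (in Python for illustration; the real logic is Java in Flink's `KeyGroupRangeAssignment`, and the max parallelism of 128 used below is an assumed default, not stated in the issue):

```python
def key_group_range(max_parallelism, parallelism, operator_index):
    """Inclusive key-group range owned by one operator subtask, mirroring
    the arithmetic of Flink's KeyGroupRangeAssignment (a sketch, not the
    actual API)."""
    start = (operator_index * max_parallelism + parallelism - 1) // parallelism
    end = ((operator_index + 1) * max_parallelism - 1) // parallelism
    return (start, end)

# Assuming a max parallelism of 128 and parallelism 4, subtask index 2
# (the "(3/4)" task in the log) owns key groups 64..95, so key group 97
# belongs to the next subtask -- handing it data indicates corrupted or
# misrouted restored state.
owned = key_group_range(128, 4, 2)      # (64, 95)
neighbor = key_group_range(128, 4, 3)   # (96, 127)
```

This only illustrates why key group 97 cannot legally appear in the range from the error message; the actual corruption cause is what the linked test case investigates.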

[GitHub] [flink] klion26 opened a new pull request #11555: [FLINK-16576][state backends] Correct the logic of KeyGroupStateHandle#getIntersection

2020-03-28 Thread GitBox
klion26 opened a new pull request #11555: [FLINK-16576][state backends] Correct 
the logic of KeyGroupStateHandle#getIntersection
URL: https://github.com/apache/flink/pull/11555
 
 
   
   
   ## What is the purpose of the change
   
   This PR returns null if the handle's key-group range does not have any 
intersection with the given key-group range.
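A minimal sketch of the corrected behavior (in Python for illustration; the real change is Java code in Flink's `KeyGroupsStateHandle`/`KeyGroupRange` classes, and the function name and tuple-based range representation here are assumptions):

```python
def get_intersection(handle_range, requested_range):
    """Return the overlapping inclusive (start, end) key-group range,
    or None when the handle's range has no intersection at all with
    the requested range -- the case this PR fixes."""
    start = max(handle_range[0], requested_range[0])
    end = min(handle_range[1], requested_range[1])
    if start > end:
        # Previously an empty/invalid range could be produced here;
        # after the fix, null (None here) is returned instead.
        return None
    return (start, end)
```

For example, a handle covering key groups 64-95 intersected with the requested range 96-127 now yields null rather than an invalid range.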
   
   
   ## Verifying this change
   
   
   This change added tests and can be verified as follows:
   
   - `KeyGroupsStateHandleTest`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   




[jira] [Updated] (FLINK-15981) Control the direct memory in FileChannelBoundedData.FileBufferReader

2020-03-28 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang updated FLINK-15981:
-
Fix Version/s: (was: 1.10.1)
   (was: 1.11.0)

> Control the direct memory in FileChannelBoundedData.FileBufferReader
> 
>
> Key: FLINK-15981
> URL: https://issues.apache.org/jira/browse/FLINK-15981
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.10.0
>Reporter: Jingsong Lee
>Priority: Critical
>
> Now, the default blocking BoundedData is FileChannelBoundedData. Its reader 
> creates a new 64 KB direct buffer per subpartition.
> When the parallelism is greater than 100, users need to configure 
> "taskmanager.memory.task.off-heap.size" to avoid a direct-memory OOM. This is 
> hard to configure, and it costs a lot of memory. With a parallelism of 1000, 
> a task manager may need 1 GB+.
> This is not conducive to scenarios with few slots and large parallelism. 
> Batch jobs could run little by little, but the memory shortage would consume 
> a lot.
> If we provide N-input operators, things may get worse: the number of 
> subpartitions that can be requested at the same time will be larger, and we 
> have no idea how much memory that needs.
> Here are my rough thoughts:
>  * Obtain the memory from the network buffers.
>  * Provide "the maximum number of subpartitions that can be requested at the 
> same time".
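The sizing concern above can be made concrete with a back-of-envelope sketch. The 64 KB per-reader buffer size comes from the discussion; the slot and subpartition counts below are illustrative assumptions, not numbers from the issue.

```java
// Back-of-envelope estimate of direct memory if every FileBufferReader
// allocates one 64 KiB direct buffer. The reader counts are assumptions.
public class DirectMemoryEstimate {

    static final long BUFFER_BYTES = 64 * 1024; // one direct buffer per reader

    static long directMemoryBytes(long concurrentReaders) {
        return concurrentReaders * BUFFER_BYTES;
    }

    public static void main(String[] args) {
        // Assume a task manager with 16 slots, each task reading 1000
        // upstream subpartitions concurrently: 16,000 readers at once.
        long bytes = directMemoryBytes(16L * 1000);
        System.out.println(bytes / (1024 * 1024) + " MiB"); // 1000 MiB
    }
}
```

Under these assumed numbers, a single task manager already needs about 1 GiB of direct memory, consistent with the "1 GB+" estimate in the thread.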



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11554: [FLINK-15101][connector/common] Add SourceCoordinator implementation.

2020-03-28 Thread GitBox
flinkbot edited a comment on issue #11554: [FLINK-15101][connector/common] Add 
SourceCoordinator implementation.
URL: https://github.com/apache/flink/pull/11554#issuecomment-605459909
 
 
   
   ## CI report:
   
   * 7f2e408128e5d7929dd4301362a20d8d90c69597 Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/156136835) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6785)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Commented] (FLINK-16787) Provide an assigner strategy of average splits allocation

2020-03-28 Thread Zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17069927#comment-17069927
 ] 

Zhijiang commented on FLINK-16787:
--

I guess this feature would involve more components, from the API to the 
coordinator and then the task stack in batch mode, and may need a FLIP. I do 
not think it can be done in release-1.11, so I have adjusted the related labels 
above.

> Provide an assigner strategy of average splits allocation
> -
>
> Key: FLINK-16787
> URL: https://issues.apache.org/jira/browse/FLINK-16787
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core, Runtime / Coordination, Runtime / Task
>Reporter: Jingsong Lee
>Priority: Major
>
> For now, with the InputSplitAssigner:
> Each task grabs splits on demand rather than receiving an even share, so if 
> the later tasks are not yet scheduled, the earlier tasks will grab all splits.
> We can provide an assigner strategy of average splits allocation.
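A minimal sketch of what an "average allocation" strategy could look like (illustrative only; this is not Flink's InputSplitAssigner API): splits are pre-assigned round-robin, so an early-scheduled task cannot grab more than its share.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical round-robin pre-assignment of splits to tasks.
public class EvenSplitAssigner {

    static List<List<Integer>> assign(List<Integer> splits, int numTasks) {
        List<List<Integer>> assignment = new ArrayList<>();
        for (int t = 0; t < numTasks; t++) {
            assignment.add(new ArrayList<>());
        }
        for (int i = 0; i < splits.size(); i++) {
            // split i always goes to task (i % numTasks), independent of
            // which tasks happen to be scheduled first
            assignment.get(i % numTasks).add(splits.get(i));
        }
        return assignment;
    }

    public static void main(String[] args) {
        List<Integer> splits = new ArrayList<>();
        for (int i = 0; i < 10; i++) splits.add(i);
        System.out.println(assign(splits, 3));
        // [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
    }
}
```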





[jira] [Updated] (FLINK-16787) Provide an assigner strategy of average splits allocation

2020-03-28 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang updated FLINK-16787:
-
Component/s: Runtime / Coordination
 API / Core

> Provide an assigner strategy of average splits allocation
> -
>
> Key: FLINK-16787
> URL: https://issues.apache.org/jira/browse/FLINK-16787
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core, Runtime / Coordination, Runtime / Task
>Reporter: Jingsong Lee
>Priority: Major
>
> For now, with the InputSplitAssigner:
> Each task grabs splits on demand rather than receiving an even share, so if 
> the later tasks are not yet scheduled, the earlier tasks will grab all splits.
> We can provide an assigner strategy of average splits allocation.





[GitHub] [flink-statefun] tzulitai commented on issue #77: [FLINK-16730][docs] Python SDK Getting Started

2020-03-28 Thread GitBox
tzulitai commented on issue #77: [FLINK-16730][docs] Python SDK Getting Started
URL: https://github.com/apache/flink-statefun/pull/77#issuecomment-605461026
 
 
   Backporting to `release-2.0` to be included in RC3




[GitHub] [flink-statefun] tzulitai closed pull request #78: [FLINK-16838] Change base image name and apply versioning

2020-03-28 Thread GitBox
tzulitai closed pull request #78: [FLINK-16838] Change base image name and 
apply versioning
URL: https://github.com/apache/flink-statefun/pull/78
 
 
   




[jira] [Updated] (FLINK-16787) Provide an assigner strategy of average splits allocation

2020-03-28 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang updated FLINK-16787:
-
Fix Version/s: (was: 1.11.0)

> Provide an assigner strategy of average splits allocation
> -
>
> Key: FLINK-16787
> URL: https://issues.apache.org/jira/browse/FLINK-16787
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Task
>Reporter: Jingsong Lee
>Priority: Major
>
> For now, with the InputSplitAssigner:
> Each task grabs splits on demand rather than receiving an even share, so if 
> the later tasks are not yet scheduled, the earlier tasks will grab all splits.
> We can provide an assigner strategy of average splits allocation.





[GitHub] [flink] flinkbot commented on issue #11554: [FLINK-15101][connector/common] Add SourceCoordinator implementation.

2020-03-28 Thread GitBox
flinkbot commented on issue #11554: [FLINK-15101][connector/common] Add 
SourceCoordinator implementation.
URL: https://github.com/apache/flink/pull/11554#issuecomment-605459909
 
 
   
   ## CI report:
   
   * 7f2e408128e5d7929dd4301362a20d8d90c69597 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Assigned] (FLINK-16845) Implement SourceReaderOperator which runs the SourceReader.

2020-03-28 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin reassigned FLINK-16845:


Assignee: Jiangjie Qin

> Implement SourceReaderOperator which runs the SourceReader.
> ---
>
> Key: FLINK-16845
> URL: https://issues.apache.org/jira/browse/FLINK-16845
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Major
>
> This ticket should implement the {{SourceReaderOperator}} which runs the 
> {{SourceReader}}.





[jira] [Created] (FLINK-16845) Implement SourceReaderOperator which runs the SourceReader.

2020-03-28 Thread Jiangjie Qin (Jira)
Jiangjie Qin created FLINK-16845:


 Summary: Implement SourceReaderOperator which runs the 
SourceReader.
 Key: FLINK-16845
 URL: https://issues.apache.org/jira/browse/FLINK-16845
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Common
Reporter: Jiangjie Qin


This ticket should implement the {{SourceReaderOperator}} which runs the 
{{SourceReader}}.





[GitHub] [flink-statefun] tzulitai commented on issue #78: [FLINK-16838] Change base image name and apply versioning

2020-03-28 Thread GitBox
tzulitai commented on issue #78: [FLINK-16838] Change base image name and apply 
versioning
URL: https://github.com/apache/flink-statefun/pull/78#issuecomment-605458402
 
 
   Thanks for the fast review Igal.
   
   Merging ...




[jira] [Commented] (FLINK-15101) Add implementation for SourceCoordinator

2020-03-28 Thread Jiangjie Qin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17069478#comment-17069478
 ] 

Jiangjie Qin commented on FLINK-15101:
--

[~trohrmann] Sorry for the belated reply. Yes, that makes a lot of sense. We 
were actually doing that. I just updated the ticket's scope.

> Add implementation for SourceCoordinator
> 
>
> Key: FLINK-15101
> URL: https://issues.apache.org/jira/browse/FLINK-15101
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This ticket should implement the SourceCoordinator which runs the 
> SplitEnumerator.





[jira] [Updated] (FLINK-15101) Add implementation for SourceCoordinator

2020-03-28 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated FLINK-15101:
-
Summary: Add implementation for SourceCoordinator  (was: Add implementation 
for SourceCoordinator and integration with JobMaster)

> Add implementation for SourceCoordinator
> 
>
> Key: FLINK-15101
> URL: https://issues.apache.org/jira/browse/FLINK-15101
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This ticket should implement the SourceCoordinator which runs the 
> SplitEnumerator. It also includes the integration with JobMaster, checkpoint 
> and recovery, etc.





[jira] [Updated] (FLINK-15101) Add implementation for SourceCoordinator

2020-03-28 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated FLINK-15101:
-
Description: This ticket should implement the SourceCoordinator which runs 
the SplitEnumerator.  (was: This ticket should implement the SourceCoordinator 
which runs the SplitEnumerator. It also includes the integration with 
JobMaster, checkpoint and recovery, etc.)

> Add implementation for SourceCoordinator
> 
>
> Key: FLINK-15101
> URL: https://issues.apache.org/jira/browse/FLINK-15101
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This ticket should implement the SourceCoordinator which runs the 
> SplitEnumerator.





[jira] [Closed] (FLINK-15100) Add the interface and base implementation for SourceReader.

2020-03-28 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin closed FLINK-15100.

Resolution: Implemented

merged to master: 

4f6efedd31d0e7705a094725824ddec8940efb3b

> Add the interface and base implementation for SourceReader.
> ---
>
> Key: FLINK-15100
> URL: https://issues.apache.org/jira/browse/FLINK-15100
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add the interface and base implementation for SourceReader. Including 
> threading model, SplitReader / RecordEmitter. This ticket should also 
> integrate the SourceReader into the SourceReaderStreamTask.





[GitHub] [flink] flinkbot commented on issue #11554: [FLINK-15101][connector/common] Add SourceCoordinator implementation.

2020-03-28 Thread GitBox
flinkbot commented on issue #11554: [FLINK-15101][connector/common] Add 
SourceCoordinator implementation.
URL: https://github.com/apache/flink/pull/11554#issuecomment-605457410
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 7f2e408128e5d7929dd4301362a20d8d90c69597 (Sat Mar 28 
14:50:34 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] becketqin commented on issue #11554: [FLINK-15101][connector/common] Add SourceCoordinator implementation.

2020-03-28 Thread GitBox
becketqin commented on issue #11554: [FLINK-15101][connector/common] Add 
SourceCoordinator implementation.
URL: https://github.com/apache/flink/pull/11554#issuecomment-605457279
 
 
   @StephanEwen This is the patch for SourceCoordinator implementation. Do you 
have time to take a look? Thanks.




[jira] [Updated] (FLINK-15101) Add implementation for SourceCoordinator and integration with JobMaster

2020-03-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15101:
---
Labels: pull-request-available  (was: )

> Add implementation for SourceCoordinator and integration with JobMaster
> ---
>
> Key: FLINK-15101
> URL: https://issues.apache.org/jira/browse/FLINK-15101
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Major
>  Labels: pull-request-available
>
> This ticket should implement the SourceCoordinator which runs the 
> SplitEnumerator. It also includes the integration with JobMaster, checkpoint 
> and recovery, etc.





[GitHub] [flink] becketqin opened a new pull request #11554: [FLINK-15101][connector/common] Add SourceCoordinator implementation.

2020-03-28 Thread GitBox
becketqin opened a new pull request #11554: [FLINK-15101][connector/common] Add 
SourceCoordinator implementation.
URL: https://github.com/apache/flink/pull/11554
 
 
   ## What is the purpose of the change
   This patch is a part of 
[FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface).
 It adds the implementation for `SourceCoordinator` which extends 
`OperatorCoordinator`.
   
   ## Brief change log
   The following major classes are added:
   * SourceCoordinator
   * SourceCoordinatorContext
   * SourceCoordinatorProvider
   * SplitAssignmentTracker
   
   ## Verifying this change
   This change added related unit tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
 - The serializers: (yes)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (JavaDocs)
   




[jira] [Resolved] (FLINK-16262) Class loader problem with FlinkKafkaProducer.Semantic.EXACTLY_ONCE and usrlib directory

2020-03-28 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang resolved FLINK-16262.
--
Resolution: Fixed

Merged in release-1.10: e39cfe7660daaeed4213f04ccbce6de1e8d90fe5

Merged in master: ff0d0c979d7cf67648ecf91850e782e99d557240

> Class loader problem with FlinkKafkaProducer.Semantic.EXACTLY_ONCE and usrlib 
> directory
> ---
>
> Key: FLINK-16262
> URL: https://issues.apache.org/jira/browse/FLINK-16262
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
> Environment: openjdk:11-jre with a slightly modified Flink 1.10.0 
> build (nothing changed regarding Kafka and/or class loading).
>Reporter: Jürgen Kreileder
>Assignee: Guowei Ma
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> We're using Docker images modeled after 
> [https://github.com/apache/flink/blob/master/flink-container/docker/Dockerfile]
>  (using Java 11)
> When I try to switch a Kafka producer from AT_LEAST_ONCE to EXACTLY_ONCE, the 
> taskmanager startup fails with:
> {code:java}
> 2020-02-24 18:25:16.389 INFO  o.a.f.r.t.Task                           Create 
> Case Fixer -> Sink: Findings local-krei04-kba-digitalweb-uc1 (1/1) 
> (72f7764c6f6c614e5355562ed3d27209) switched from RUNNING to FAILED.
> org.apache.kafka.common.config.ConfigException: Invalid value 
> org.apache.kafka.common.serialization.ByteArraySerializer for configuration 
> key.serializer: Class 
> org.apache.kafka.common.serialization.ByteArraySerializer could not be found.
>  at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:718)
>  at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:471)
>  at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:464)
>  at 
> org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:62)
>  at 
> org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:75)
>  at 
> org.apache.kafka.clients.producer.ProducerConfig.<init>(ProducerConfig.java:396)
>  at 
> org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:326)
>  at 
> org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:298)
>  at 
> org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaInternalProducer.<init>(FlinkKafkaInternalProducer.java:76)
>  at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.lambda$abortTransactions$2(FlinkKafkaProducer.java:1107)
>  at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(Unknown 
> Source)
>  at java.base/java.util.HashMap$KeySpliterator.forEachRemaining(Unknown 
> Source)
>  at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
>  at java.base/java.util.stream.ForEachOps$ForEachTask.compute(Unknown Source)
>  at java.base/java.util.concurrent.CountedCompleter.exec(Unknown Source)
>  at java.base/java.util.concurrent.ForkJoinTask.doExec(Unknown Source)
>  at 
> java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown 
> Source)
>  at java.base/java.util.concurrent.ForkJoinPool.scan(Unknown Source)
>  at java.base/java.util.concurrent.ForkJoinPool.runWorker(Unknown Source)
>  at java.base/java.util.concurrent.ForkJoinWorkerThread.run(Unknown 
> Source){code}
> This looks like a class loading issue: If I copy our JAR to FLINK_LIB_DIR 
> instead of FLINK_USR_LIB_DIR, everything works fine.
> (AT_LEAST_ONCE producers works fine with the JAR in FLINK_USR_LIB_DIR)
>  
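The general pattern behind the merged fix can be sketched as follows. This is an illustrative, simplified version, not the actual Flink code: the idea is that work handed off to other threads (such as a parallel stream's fork-join workers) should run with the caller's context classloader explicitly set, and the previous loader restored afterwards.

```java
// Hypothetical helper illustrating the fix pattern: temporarily install a
// context classloader on the current thread, run an action, then restore
// the previous loader so the thread is left unchanged.
public class WithContextClassLoader {

    static void runWith(ClassLoader loader, Runnable action) {
        Thread t = Thread.currentThread();
        ClassLoader previous = t.getContextClassLoader();
        t.setContextClassLoader(loader);
        try {
            action.run();
        } finally {
            // Always restore, even if the action throws.
            t.setContextClassLoader(previous);
        }
    }

    public static void main(String[] args) {
        ClassLoader custom =
            new ClassLoader(WithContextClassLoader.class.getClassLoader()) {};
        runWith(custom, () -> System.out.println(
            Thread.currentThread().getContextClassLoader() == custom)); // true
    }
}
```

Without such propagation, `Class.forName`-style lookups inside library code (like Kafka's config parsing) resolve against whatever loader the worker thread happens to carry, which is why the user JAR in FLINK_USR_LIB_DIR was not visible.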





[jira] [Commented] (FLINK-16821) Run Kubernetes test failed with invalid named "minikube"

2020-03-28 Thread Zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17069458#comment-17069458
 ] 

Zhijiang commented on FLINK-16821:
--

Thanks for solving it [~rmetzger]!

I guess the fix is also needed for release-1.10? Another instance found in 
release-1.10: 
[https://travis-ci.org/github/apache/flink/builds/667815122?utm_medium=notification_source=slack]

> Run Kubernetes test failed with invalid named "minikube"
> 
>
> Key: FLINK-16821
> URL: https://issues.apache.org/jira/browse/FLINK-16821
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Tests
>Reporter: Zhijiang
>Assignee: Robert Metzger
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This is the test run 
> [https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6702=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]
> Log output
> {code:java}
> 2020-03-27T00:07:38.9666021Z Running 'Run Kubernetes test'
> 2020-03-27T00:07:38.956Z 
> ==
> 2020-03-27T00:07:38.9677101Z TEST_DATA_DIR: 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-38967103614
> 2020-03-27T00:07:41.7529865Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> 2020-03-27T00:07:41.7721475Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> 2020-03-27T00:07:41.8208394Z Docker version 19.03.8, build afacb8b7f0
> 2020-03-27T00:07:42.4793914Z docker-compose version 1.25.4, build 8d51620a
> 2020-03-27T00:07:42.5359301Z Installing minikube ...
> 2020-03-27T00:07:42.5494076Z   % Total% Received % Xferd  Average Speed   
> TimeTime Time  Current
> 2020-03-27T00:07:42.5494729Z  Dload  Upload   
> Total   SpentLeft  Speed
> 2020-03-27T00:07:42.5498136Z 
> 2020-03-27T00:07:42.6214887Z   0 00 00 0  0  0 
> --:--:-- --:--:-- --:--:-- 0
> 2020-03-27T00:07:43.3467750Z   0 00 00 0  0  0 
> --:--:-- --:--:-- --:--:-- 0
> 2020-03-27T00:07:43.3469636Z 100 52.0M  100 52.0M0 0  65.2M  0 
> --:--:-- --:--:-- --:--:-- 65.2M
> 2020-03-27T00:07:43.4262625Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.4264438Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.4282404Z Starting minikube ...
> 2020-03-27T00:07:43.7749694Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:43.7761742Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:43.7762229Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:43.8202161Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.8203353Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.8568899Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.8570685Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.8583793Z Command: start_kubernetes_if_not_running failed. 
> Retrying...
> 2020-03-27T00:07:48.9017252Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:48.9019347Z   - To fix this, run: minikube start
> 2020-03-27T00:07:48.9031515Z Starting minikube ...
> 2020-03-27T00:07:49.0612601Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:49.0616688Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:49.0620173Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:49.1040676Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:49.1042353Z   - To fix this, run: minikube start
> 2020-03-27T00:07:49.1453522Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:49.1454594Z   - To fix this, run: minikube start
> 2020-03-27T00:07:49.1468436Z Command: start_kubernetes_if_not_running failed. 
> Retrying...
> 2020-03-27T00:07:54.1907713Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.1909876Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.1921479Z Starting minikube ...
> 2020-03-27T00:07:54.3388738Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:54.3395499Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:54.3396443Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:54.3824399Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.3837652Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.4203902Z * There is no local cluster 

[GitHub] [flink] zhijiangW merged pull request #11497: [FLINK-16262][Connectors] Set the context classloader for parallel stream in FlinkKafkaProducer

2020-03-28 Thread GitBox
zhijiangW merged pull request #11497: [FLINK-16262][Connectors] Set the context 
classloader for parallel stream in FlinkKafkaProducer
URL: https://github.com/apache/flink/pull/11497
 
 
   




[GitHub] [flink] zhijiangW merged pull request #11247: [FLINK-16262][Connectors] Set the context classloader for parallel stream in FlinkKafkaProducer

2020-03-28 Thread GitBox
zhijiangW merged pull request #11247: [FLINK-16262][Connectors] Set the context 
classloader for parallel stream in FlinkKafkaProducer
URL: https://github.com/apache/flink/pull/11247
 
 
   




[jira] [Commented] (FLINK-10114) Support Orc for StreamingFileSink

2020-03-28 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-10114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17069437#comment-17069437
 ] 

Yun Gao commented on FLINK-10114:
-

[~zenfenan] Many thanks for the doc and PR, I will also take a look :)

> Support Orc for StreamingFileSink
> -
>
> Key: FLINK-10114
> URL: https://issues.apache.org/jira/browse/FLINK-10114
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem
>Reporter: zhangminglei
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[GitHub] [flink] flinkbot edited a comment on issue #11482: [FLINK-16581][table] Minibatch deduplication lack state TTL bug fix

2020-03-28 Thread GitBox
flinkbot edited a comment on issue #11482: [FLINK-16581][table] Minibatch 
deduplication lack state TTL bug fix
URL: https://github.com/apache/flink/pull/11482#issuecomment-60726
 
 
   
   ## CI report:
   
   * 54a34b1dcbcaff049bed0cd11033711e44c38ccf Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/156127302) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6783)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Commented] (FLINK-16576) State inconsistency on restore with memory state backends

2020-03-28 Thread Congxian Qiu(klion26) (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17069428#comment-17069428
 ] 

Congxian Qiu(klion26) commented on FLINK-16576:
---

The restore fails here because {{the *mapping of stateId to metaInfo is 
wrong*}}.

The mapping is wrong because we registered some metaInfos that do not belong 
to the current subtask.
{code:java}
// HeapRestoreOperation#restore
createOrCheckStateForMetaInfo(restoredMetaInfos, kvStatesById); // will 
register the metainfo

readStateHandleStateData(
   fsDataInputStream,
   inView,
   keyGroupsStateHandle.getGroupRangeOffsets(),
   kvStatesById, restoredMetaInfos.size(),
   serializationProxy.getReadVersion(),
   serializationProxy.isUsingKeyGroupCompression());


private void createOrCheckStateForMetaInfo(
   List<StateMetaInfoSnapshot> restoredMetaInfo,
   Map<Integer, StateMetaInfoSnapshot> kvStatesById) {

   for (StateMetaInfoSnapshot metaInfoSnapshot : restoredMetaInfo) {
      final StateSnapshotRestore registeredState;

      ..

      if (registeredState == null) {
         kvStatesById.put(kvStatesById.size(), metaInfoSnapshot); // 
constructing the mapping between stateId and metaInfo, even if the current 
state handle does not belong to the current subtask
      }
   }
}
{code}
From the code above we can see that we always register the metaInfo even if the current state handle does not belong to this subtask (such a KeyGroupsStateHandle still contains the metaInfo, EMPTY_KEYGROUP, empty offsets and the state handle data). After registering the wrong metaInfo, the *mapping of stateId and metaInfo becomes wrong* (when constructing the mapping, we assume that all the handles belong to the current subtask). (RocksDBStateBackend does not construct such a mapping, so it does not encounter this error.)

For the solution, I want to filter out such state handles when assigning state to subtasks in {{StateAssignmentOperation}}.
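
As an illustration of the proposed filtering, here is a minimal, self-contained sketch. The classes (`KeyGroupRange`, `StateHandle`, `filterForSubtask`) are hypothetical stand-ins, not Flink's actual {{StateAssignmentOperation}} code; the idea is only that a subtask keeps a handle exclusively when the handle's key-group range intersects its own range, so foreign metaInfos are never registered:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (hypothetical classes, not Flink's actual types): filter
// state handles so a subtask only restores handles whose key-group range
// intersects its own range.
public class HandleFilterSketch {

    static class KeyGroupRange {
        final int start, end; // inclusive bounds
        KeyGroupRange(int start, int end) { this.start = start; this.end = end; }
        boolean intersects(KeyGroupRange other) {
            return start <= other.end && other.start <= end;
        }
    }

    static class StateHandle {
        final KeyGroupRange range;
        StateHandle(KeyGroupRange range) { this.range = range; }
    }

    // Keep only the handles that actually contain key groups of this subtask.
    // Handles outside the range would otherwise register foreign metaInfos and
    // corrupt the stateId -> metaInfo mapping on restore.
    static List<StateHandle> filterForSubtask(List<StateHandle> all, KeyGroupRange subtaskRange) {
        List<StateHandle> result = new ArrayList<>();
        for (StateHandle h : all) {
            if (h.range.intersects(subtaskRange)) {
                result.add(h);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<StateHandle> handles = new ArrayList<>();
        handles.add(new StateHandle(new KeyGroupRange(0, 63)));
        handles.add(new StateHandle(new KeyGroupRange(64, 127)));

        // A subtask responsible for key groups 64..95 should only receive the second handle.
        List<StateHandle> assigned = filterForSubtask(handles, new KeyGroupRange(64, 95));
        System.out.println(assigned.size()); // prints 1
    }
}
```
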

> State inconsistency on restore with memory state backends
> -
>
> Key: FLINK-16576
> URL: https://issues.apache.org/jira/browse/FLINK-16576
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.9.2, 1.10.0
>Reporter: Nico Kruber
>Assignee: Congxian Qiu(klion26)
>Priority: Blocker
> Fix For: 1.9.3, 1.10.1, 1.11.0
>
>
> I occasionally see a few state inconsistencies with the {{TopSpeedWindowing}} 
> example in Flink. Restore would fail with either of these causes, but only 
> for the memory state backends and only with some combinations of parallelism 
> I took the savepoint with and parallelism I restore the job with:
> {code:java}
> java.lang.IllegalArgumentException: KeyGroupRange{startKeyGroup=64, 
> endKeyGroup=95} does not contain key group 97 {code}
> or
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.state.heap.HeapRestoreOperation.readKeyGroupStateData(HeapRestoreOperation.java:280)
>  {code}
> or
> {code:java}
> java.io.IOException: Corrupt stream, found tag: 8
>   at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:217)
>  {code}
>  
> I managed to make it reproducible in a test that I quickly hacked together in 
> [https://github.com/NicoK/flink/blob/state.corruption.debug/flink-examples/flink-examples-streaming/src/test/java/org/apache/flink/streaming/test/examples/windowing/TopSpeedWindowingSavepointRestoreITCase.java]
>  (please checkout the whole repository since I had to change some 
> dependencies).
> In a bit more detail, this is what I discovered before, also with a manual 
> savepoint on S3:
> Savepoint that was taken with parallelism 2 (p=2) and shows the restore 
> failure in three different ways (all running in Flink 1.10.0; but I also see 
> it in Flink 1.9):
>  * first of all, if I try to restore with p=2, everything is fine
>  * if I restore with p=4 I get an exception like the one mentioned above:
> {code:java}
> 2020-03-11 15:53:35,149 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph- 
> Window(GlobalWindows(), DeltaTrigger, TimeEvictor, ComparableAggregator, 
> PassThroughWindowFunction) -> Sink: Print to Std. Out (3/4) 
> (2ecdb03905cc8a376d43b086925452a6) switched from RUNNING to FAILED.
> java.lang.Exception: Exception while creating StreamOperatorStateContext.
>   at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:191)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:255)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeStateAndOpen(StreamTask.java:1006)
>   at 
> 

[GitHub] [flink] flinkbot edited a comment on issue #11482: [FLINK-16581][table] Minibatch deduplication lack state TTL bug fix

2020-03-28 Thread GitBox
flinkbot edited a comment on issue #11482: [FLINK-16581][table] Minibatch 
deduplication lack state TTL bug fix
URL: https://github.com/apache/flink/pull/11482#issuecomment-60726
 
 
   
   ## CI report:
   
   * 95c74ce398cdc779ac895edb7cf3433579f1 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/154529184) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6499)
 
   * 54a34b1dcbcaff049bed0cd11033711e44c38ccf Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/156127302) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6783)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11482: [FLINK-16581][table] Minibatch deduplication lack state TTL bug fix

2020-03-28 Thread GitBox
flinkbot edited a comment on issue #11482: [FLINK-16581][table] Minibatch 
deduplication lack state TTL bug fix
URL: https://github.com/apache/flink/pull/11482#issuecomment-60726
 
 
   
   ## CI report:
   
   * 95c74ce398cdc779ac895edb7cf3433579f1 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/154529184) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6499)
 
   * 54a34b1dcbcaff049bed0cd11033711e44c38ccf UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] mxm commented on issue #11473: [FLINK-16705] Ensure MiniCluster shutdown does not interfere with JobResult retrieval

2020-03-28 Thread GitBox
mxm commented on issue #11473: [FLINK-16705] Ensure MiniCluster shutdown does 
not interfere with JobResult retrieval
URL: https://github.com/apache/flink/pull/11473#issuecomment-605432413
 
 
   @flinkbot run azure




[GitHub] [flink] mxm commented on issue #11473: [FLINK-16705] Ensure MiniCluster shutdown does not interfere with JobResult retrieval

2020-03-28 Thread GitBox
mxm commented on issue #11473: [FLINK-16705] Ensure MiniCluster shutdown does 
not interfere with JobResult retrieval
URL: https://github.com/apache/flink/pull/11473#issuecomment-605432479
 
 
   Travis passes, though the Flink bot doesn't seem to recognize that: 
https://travis-ci.com/github/flink-ci/flink/builds/155934908




[GitHub] [flink] lsyldliu commented on issue #11482: [FLINK-16581][table] Minibatch deduplication lack state TTL bug fix

2020-03-28 Thread GitBox
lsyldliu commented on issue #11482: [FLINK-16581][table] Minibatch 
deduplication lack state TTL bug fix
URL: https://github.com/apache/flink/pull/11482#issuecomment-605432227
 
 
   @wuchong CC




[GitHub] [flink] flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add aarch64 support for container e2e test

2020-03-28 Thread GitBox
flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add aarch64 
support for container e2e test
URL: https://github.com/apache/flink/pull/9782#issuecomment-535826739
 
 
   
   ## CI report:
   
   * f2415e335c546fff15d8f432e673da8ef0eb72ae Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/156122865) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6782)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Created] (FLINK-16844) Support retract message for StreamExecGroupWindowAggregate in blink planner

2020-03-28 Thread Benchao Li (Jira)
Benchao Li created FLINK-16844:
--

 Summary: Support retract message for 
StreamExecGroupWindowAggregate in blink planner
 Key: FLINK-16844
 URL: https://issues.apache.org/jira/browse/FLINK-16844
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Reporter: Benchao Li


Currently the `WindowOperator` in the blink planner actually supports retract messages; however, in `StreamExecGroupWindowAggregate` we throw an exception if the input is retract. IMO, we can remove this check and let `StreamExecGroupWindowAggregate` support retract messages. This would greatly enhance the capability of the blink planner.

 

cc [~jark]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16411) Maven central connection timeouts on Azure Pipelines

2020-03-28 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17069378#comment-17069378
 ] 

Piotr Nowojski commented on FLINK-16411:


Can we run our own Maven cache on AliCloud or something like that?

> Maven central connection timeouts on Azure Pipelines
> 
>
> Key: FLINK-16411
> URL: https://issues.apache.org/jira/browse/FLINK-16411
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Some test stages invoke maven again, where additional dependencies are 
> downloaded, sometimes failing the build.
> This ticket is about using the Google mirror wherever possible.
> Examples of failing tests:
> - 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5882=logs=636f54dd-dda5-5b4b-f495-2d92ec493b6c=6c30efdf-a92a-5da3-9a6a-004c8552b2df
> A failure looks like this:
> {code}
> [ERROR] Failed to execute goal on project flink-hadoop-fs: Could not resolve 
> dependencies for project org.apache.flink:flink-hadoop-fs:jar:1.11-SNAPSHOT: 
> Could not transfer artifact 
> org.apache.flink:flink-shaded-hadoop-2:jar:2.8.3-10.0 from/to central 
> (https://repo.maven.apache.org/maven2): GET request of: 
> org/apache/flink/flink-shaded-hadoop-2/2.8.3-10.0/flink-shaded-hadoop-2-2.8.3-10.0.jar
>  from central failed: Connection reset -> [Help 1]
> {code}





[GitHub] [flink] flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add aarch64 support for container e2e test

2020-03-28 Thread GitBox
flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add aarch64 
support for container e2e test
URL: https://github.com/apache/flink/pull/9782#issuecomment-535826739
 
 
   
   ## CI report:
   
   * f2415e335c546fff15d8f432e673da8ef0eb72ae Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/156122865) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6782)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Created] (FLINK-16843) Python SDK distribution is missing LICENSE and NOTICE files

2020-03-28 Thread Tzu-Li (Gordon) Tai (Jira)
Tzu-Li (Gordon) Tai created FLINK-16843:
---

 Summary: Python SDK distribution is missing LICENSE and NOTICE 
files
 Key: FLINK-16843
 URL: https://issues.apache.org/jira/browse/FLINK-16843
 Project: Flink
  Issue Type: Bug
  Components: Stateful Functions
Affects Versions: statefun-2.0
Reporter: Tzu-Li (Gordon) Tai
Assignee: Tzu-Li (Gordon) Tai
 Fix For: statefun-2.0


The Python SDK distributions for Stateful Functions do not bundle any LICENSE 
or NOTICE files.

This should be fixed, as these are required to be included in all ASF-released 
distributions.





[jira] [Commented] (FLINK-16835) Replace TableConfig with Configuration

2020-03-28 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17069370#comment-17069370
 ] 

Jingsong Lee commented on FLINK-16835:
--

+1 to the replacement. [~jark] I think it is time to refactor these methods to 
ConfigOptions.

> Replace TableConfig with Configuration
> --
>
> Key: FLINK-16835
> URL: https://issues.apache.org/jira/browse/FLINK-16835
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Timo Walther
>Priority: Major
>
> In order to allow reading and writing of configuration from a file or 
> string-based properties. We should consider removing {{TableConfig}} and 
> fully rely on a Configuration-based object with {{ConfigOptions}}.
> This effort was partially already started which is why 
> {{TableConfig.getConfiguration}} exists.
> However, we should clarify if we would like to have control and traceability 
> over layered configurations such as {{flink-conf,yaml < 
> StreamExecutionEnvironment < TableEnvironment < Query}}. Maybe the 
> {{Configuration}} class is not the right abstraction for this. 





[jira] [Commented] (FLINK-16626) Exception encountered when cancelling a job in yarn per-job mode

2020-03-28 Thread tartarus (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17069357#comment-17069357
 ] 

tartarus commented on FLINK-16626:
--

These are the JM logs, but it's only a surface phenomenon.
{code:java}
2020-03-22 17:16:14,770 ERROR 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.rejectedExecution[flink-akka.actor.default-dispatcher-20]
  - Failed to submit a listener notification task. Event loop shut down?
java.util.concurrent.RejectedExecutionException: event executor terminated
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:855)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:340)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:333)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:766)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.safeExecute(DefaultPromise.java:764)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:421)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:149)
at 
org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:95)
at 
org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:30)
at 
org.apache.flink.runtime.rest.handler.util.HandlerUtils.sendResponse(HandlerUtils.java:224)
at 
org.apache.flink.runtime.rest.handler.util.HandlerUtils.sendResponse(HandlerUtils.java:176)
at 
org.apache.flink.runtime.rest.handler.util.HandlerUtils.sendResponse(HandlerUtils.java:91)
at 
org.apache.flink.runtime.rest.handler.AbstractRestHandler.lambda$respondToRequest$0(AbstractRestHandler.java:78)
at 
java.util.concurrent.CompletableFuture.uniAccept(CompletableFuture.java:656)
at 
java.util.concurrent.CompletableFuture$UniAccept.tryFire(CompletableFuture.java:632)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
at 
org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:874)
at akka.dispatch.OnComplete.internal(Future.scala:264)
at akka.dispatch.OnComplete.internal(Future.scala:261)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
at 
org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
at 
scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
at 
scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
at 
akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
at 
akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
at 
akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at 
akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
at 
akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at 
akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at 
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at 
akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
{code}


> Exception encountered when cancelling a job in yarn per-job mode
> 

[GitHub] [flink] flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add aarch64 support for container e2e test

2020-03-28 Thread GitBox
flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add aarch64 
support for container e2e test
URL: https://github.com/apache/flink/pull/9782#issuecomment-535826739
 
 
   
   ## CI report:
   
   * 2c3fb6f86e182998f2207926cddb7b1b3e9dd44d Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/149930373) 
   * f2415e335c546fff15d8f432e673da8ef0eb72ae Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/156122865) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6782)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add aarch64 support for container e2e test

2020-03-28 Thread GitBox
flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add aarch64 
support for container e2e test
URL: https://github.com/apache/flink/pull/9782#issuecomment-535826739
 
 
   
   ## CI report:
   
   * 2c3fb6f86e182998f2207926cddb7b1b3e9dd44d Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/149930373) 
   * f2415e335c546fff15d8f432e673da8ef0eb72ae UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Created] (FLINK-16842) Ridesharing example simulator built artifact is missing NOTICE / LICENSE for bundled dependencies

2020-03-28 Thread Tzu-Li (Gordon) Tai (Jira)
Tzu-Li (Gordon) Tai created FLINK-16842:
---

 Summary: Ridesharing example simulator built artifact is missing 
NOTICE / LICENSE for bundled dependencies
 Key: FLINK-16842
 URL: https://issues.apache.org/jira/browse/FLINK-16842
 Project: Flink
  Issue Type: Bug
  Components: Stateful Functions
Affects Versions: statefun-2.0
Reporter: Tzu-Li (Gordon) Tai
Assignee: Tzu-Li (Gordon) Tai
 Fix For: statefun-2.0


The {{statefun-ridesharing-example-simulator}} artifact bundles {{spring-boot}} 
as a dependency, which in turn pulls in some other dependencies that are 
non-ASLv2.

We should add NOTICE / LICENSE files to the built artifact for those.





[jira] [Created] (FLINK-16841) Stateful Function artifacts jars should not bundle proto sources

2020-03-28 Thread Tzu-Li (Gordon) Tai (Jira)
Tzu-Li (Gordon) Tai created FLINK-16841:
---

 Summary: Stateful Function artifacts jars should not bundle proto 
sources
 Key: FLINK-16841
 URL: https://issues.apache.org/jira/browse/FLINK-16841
 Project: Flink
  Issue Type: Bug
  Components: Build System, Stateful Functions
Affects Versions: statefun-2.0
Reporter: Tzu-Li (Gordon) Tai
Assignee: Tzu-Li (Gordon) Tai
 Fix For: statefun-2.0


These protobuf definition files are being bundled in built artifacts:
{code}
google/protobuf/any.proto
google/protobuf/api.proto
google/protobuf/descriptor.proto
google/protobuf/duration.proto
google/protobuf/empty.proto
google/protobuf/field_mask.proto
google/protobuf/source_context.proto
google/protobuf/struct.proto
google/protobuf/timestamp.proto
google/protobuf/type.proto
google/protobuf/wrappers.proto
{code}

This is caused by the {{addProtoSources}} configuration of the 
{{protoc-jar-maven-plugin}}.

We should remove those, because:
- Bundling those will require licensing acknowledgement to Protobuf in our 
artifacts.
- Those definition files are not used directly by Stateful Functions at all.





[jira] [Commented] (FLINK-11499) Extend StreamingFileSink BulkFormats to support arbitrary roll policies

2020-03-28 Thread Sivaprasanna Sethuraman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-11499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17069306#comment-17069306
 ] 

Sivaprasanna Sethuraman commented on FLINK-11499:
-

[~pnowojski]

Regarding point #1 in the failure/recovery scenario, are you implying that the main stream, along with the roll-up that happens, say, every hour, also has to be rolled, i.e. published, on every checkpoint? Then are we not back to square one, ending up with very few records in the rolled file? Please correct me if I have misunderstood your point.

And I second [~kkl0u]'s point that it may add latency and slow down recovery, since we would re-read the committed WAL files, reach out to the in-progress WAL stream, and then resume processing.

However, I think the storage bandwidth can still be managed and won't be a big issue, since most object stores are pretty cheap; and if we guarantee that, upon rolling up the main stream, we clear the committed WAL files that fall within the already rolled-up period, it will not cause any storage issue.

> Extend StreamingFileSink BulkFormats to support arbitrary roll policies
> ---
>
> Key: FLINK-11499
> URL: https://issues.apache.org/jira/browse/FLINK-11499
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Seth Wiesman
>Priority: Major
>  Labels: usability
> Fix For: 1.11.0
>
>
> Currently when using the StreamingFilleSink Bulk-encoding formats can only be 
> combined with the `OnCheckpointRollingPolicy`, which rolls the in-progress 
> part file on every checkpoint.
> However, many bulk formats such as parquet are most efficient when written as 
> large files; this is not possible when frequent checkpointing is enabled. 
> Currently the only work-around is to have long checkpoint intervals which is 
> not ideal.
>  
> The StreamingFileSink should be enhanced to support arbitrary roll policy's 
> so users may write large bulk files while retaining frequent checkpoints.





[GitHub] [flink-statefun] igalshilman removed a comment on issue #78: [FLINK-16838] Change base image name and apply versioning

2020-03-28 Thread GitBox
igalshilman removed a comment on issue #78: [FLINK-16838] Change base image 
name and apply versioning
URL: https://github.com/apache/flink-statefun/pull/78#issuecomment-605412130
 
 
   Thanks!
   This looks good to me.




[GitHub] [flink-statefun] igalshilman commented on issue #78: [FLINK-16838] Change base image name and apply versioning

2020-03-28 Thread GitBox
igalshilman commented on issue #78: [FLINK-16838] Change base image name and 
apply versioning
URL: https://github.com/apache/flink-statefun/pull/78#issuecomment-605412130
 
 
   Thanks!
   This looks good to me.




[GitHub] [flink-statefun] tzulitai commented on a change in pull request #78: [FLINK-16838] Change base image name and apply versioning

2020-03-28 Thread GitBox
tzulitai commented on a change in pull request #78: [FLINK-16838] Change base 
image name and apply versioning
URL: https://github.com/apache/flink-statefun/pull/78#discussion_r399633529
 
 

 ##
 File path: 
statefun-e2e-tests/statefun-routable-kafka-e2e/src/test/resources/Dockerfile
 ##
 @@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-FROM statefun
+FROM flink-statefun:2.1-SNAPSHOT
 
 Review comment:
   The updated `update_branch_version.sh` script takes care of updating these 
strings automatically.




[jira] [Updated] (FLINK-16840) PostgresCatalogTest fails waiting for server

2020-03-28 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-16840:
---
Labels: test-stability  (was: )

> PostgresCatalogTest fails waiting for server
> 
>
> Key: FLINK-16840
> URL: https://issues.apache.org/jira/browse/FLINK-16840
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> CI: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6777=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=d26b3528-38b0-53d2-05f7-37557c2405e4
> {code}
> [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 
> s - in org.apache.flink.table.descriptors.JDBCCatalogDescriptorTest
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 12.159 s <<< FAILURE! - in 
> org.apache.flink.api.java.io.jdbc.catalog.PostgresCatalogTest
> [ERROR] org.apache.flink.api.java.io.jdbc.catalog.PostgresCatalogTest  Time 
> elapsed: 12.159 s  <<< ERROR!
> java.io.IOException: Gave up waiting for server to start after 1ms
> Caused by: java.sql.SQLException: connect failed
> Caused by: java.net.ConnectException: Connection refused (Connection refused)
> {code}





[jira] [Created] (FLINK-16840) PostgresCatalogTest fails waiting for server

2020-03-28 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-16840:
--

 Summary: PostgresCatalogTest fails waiting for server
 Key: FLINK-16840
 URL: https://issues.apache.org/jira/browse/FLINK-16840
 Project: Flink
  Issue Type: Bug
  Components: Connectors / JDBC
Affects Versions: 1.11.0
Reporter: Robert Metzger


CI: 
https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6777=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=d26b3528-38b0-53d2-05f7-37557c2405e4
{code}
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 s 
- in org.apache.flink.table.descriptors.JDBCCatalogDescriptorTest
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 12.159 
s <<< FAILURE! - in 
org.apache.flink.api.java.io.jdbc.catalog.PostgresCatalogTest
[ERROR] org.apache.flink.api.java.io.jdbc.catalog.PostgresCatalogTest  Time 
elapsed: 12.159 s  <<< ERROR!
java.io.IOException: Gave up waiting for server to start after 1ms
Caused by: java.sql.SQLException: connect failed
Caused by: java.net.ConnectException: Connection refused (Connection refused)
{code}





[GitHub] [flink-statefun] tzulitai commented on a change in pull request #78: [FLINK-16838] Change base image name and apply versioning

2020-03-28 Thread GitBox
tzulitai commented on a change in pull request #78: [FLINK-16838] Change base 
image name and apply versioning
URL: https://github.com/apache/flink-statefun/pull/78#discussion_r399633194
 
 

 ##
 File path: 
statefun-e2e-tests/statefun-routable-kafka-e2e/src/test/resources/Dockerfile
 ##
 @@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-FROM statefun
+FROM flink-statefun:2.1-SNAPSHOT
 
 Review comment:
   Note:
   these will be changed to `FROM flink-statefun:2.0-SNAPSHOT` when backporting 
these changes to the snapshot `release-2.0` branch.
   
   On release candidates / releases, these will be e.g. `FROM 
flink-statefun:2.0.0`




[GitHub] [flink-statefun] tzulitai opened a new pull request #78: [FLINK-16838] Change base image name and apply versioning

2020-03-28 Thread GitBox
tzulitai opened a new pull request #78: [FLINK-16838] Change base image name and apply versioning
URL: https://github.com/apache/flink-statefun/pull/78
 
 
   This PR has the following end goal in mind:
   - Examples / E2E tests / the quickstart archetype built or run from snapshot versions (i.e. from the `master` or `release-2.0` branches) should always run against a locally-built StateFun base image, built from the source of that snapshot version.
   - Released examples / E2E tests / quickstart archetypes should not require users to build a StateFun image locally, assuming that we will have official images published to Docker Hub.
   
   This PR does a few things to accomplish that:
   
   1. Let the image build script `tools/docker/build-distribution.sh` build images tagged with the current source version, i.e. `2.1-SNAPSHOT` when built from the snapshot `master` branch, `2.0-SNAPSHOT` when built from the snapshot `release-2.0` branch, or simply `2.0.0` if built from an officially released source distribution.
   
   2. Update the Dockerfiles of all examples / E2E tests / the quickstart archetype to use a specific version tag. As a result, if they were built from a released distribution, the image can be pulled directly from Docker Hub (once we publish the images after the release); otherwise, if they were built from a snapshot version, they use a locally-built snapshot image.
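   The tag-derivation behavior described in step 1 could be sketched roughly as follows. This is a hypothetical illustration, not the actual contents of `tools/docker/build-distribution.sh`: the function name and the version-string parsing are assumptions; only the script path and the example tags (`2.1-SNAPSHOT`, `2.0-SNAPSHOT`, `2.0.0`) come from the PR description.

```shell
#!/usr/bin/env sh
# Hypothetical sketch: derive a Docker image tag from the project version,
# so snapshot builds yield e.g. flink-statefun:2.1-SNAPSHOT and release
# builds yield e.g. flink-statefun:2.0.0.
derive_image_tag() {
    version="$1"                      # e.g. "2.1-SNAPSHOT" or "2.0.0"
    case "$version" in
        *-SNAPSHOT)
            # Keep only major.minor for snapshot tags.
            base=$(printf '%s' "$version" | sed 's/-SNAPSHOT$//' | cut -d. -f1,2)
            printf 'flink-statefun:%s-SNAPSHOT\n' "$base"
            ;;
        *)
            # Released versions are used verbatim as the tag.
            printf 'flink-statefun:%s\n' "$version"
            ;;
    esac
}

derive_image_tag "2.1-SNAPSHOT"   # -> flink-statefun:2.1-SNAPSHOT
derive_image_tag "2.0.0"          # -> flink-statefun:2.0.0
```

   A Dockerfile built from the snapshot `master` branch would then reference the locally-built `flink-statefun:2.1-SNAPSHOT` tag, while one built from a released distribution would reference a publishable tag such as `flink-statefun:2.0.0`.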
   
   On the side, this PR also uses the opportunity to rename the image from `statefun` to `flink-statefun`, for the following reasons:
   - It is more consistent with the naming convention of other distributions, such as the Python SDK
   - It lets the image name show its association with Flink
   
   ---
   
   ## Verifying
   
   I verified the changes by:
   - Running all the examples that use images built from the StateFun base image; all ran without problems
   - Running all end-to-end tests with `mvn clean verify -Prun-e2e-tests`
   - Creating a new project from the quickstart archetype; the generated skeleton project has correct Dockerfiles




[jira] [Updated] (FLINK-16838) Stateful Functions Quickstart archetype Dockerfile should reference a specific version tag

2020-03-28 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/FLINK-16838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-16838:
---
Labels: pull-request-available  (was: )

> Stateful Functions Quickstart archetype Dockerfile should reference a 
> specific version tag
> --
>
> Key: FLINK-16838
> URL: https://issues.apache.org/jira/browse/FLINK-16838
> Project: Flink
>  Issue Type: Bug
>  Components: Stateful Functions
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Critical
>  Labels: pull-request-available
>
> Currently, the quickstart archetype provides a skeleton Dockerfile that 
> always builds on top of the latest image:
> {code}
> FROM statefun
> {code}
> While this happens to work for the first-ever release, since the {{latest}}
> tag will (coincidentally) point to the correct version, it will no longer
> be correct once we have multiple releases.





[GitHub] [flink] carp84 commented on issue #11491: [FLINK-16513][checkpointing] Unaligned checkpoints: checkpoint metadata

2020-03-28 Thread GitBox
carp84 commented on issue #11491: [FLINK-16513][checkpointing] Unaligned checkpoints: checkpoint metadata
URL: https://github.com/apache/flink/pull/11491#issuecomment-605403180
 
 
   Thanks for the ping @pnowojski @rkhachatryan, and sorry for the late response. I've noticed this one and asked @Myasuka and @klion26 to take a look. I am also reviewing the changes myself, though progress is slow; I will drop some comments once ready.

