[jira] [Commented] (FLINK-8914) CEP's greedy() modifier doesn't work

2018-03-12 Thread aitozi (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396500#comment-16396500
 ] 

aitozi commented on FLINK-8914:
---

Yes, I ran into this bug too; greedy() only works when the pattern has an ending element. 

> CEP's greedy() modifier doesn't work
> 
>
> Key: FLINK-8914
> URL: https://issues.apache.org/jira/browse/FLINK-8914
> Project: Flink
>  Issue Type: Bug
>  Components: CEP
>Affects Versions: 1.4.0, 1.4.1
>Reporter: David Anderson
>Priority: Major
>
> When applied to the first or last component of a CEP Pattern, greedy() 
> doesn't work correctly. Here's an example:
> {code:java}
> package com.dataartisans.flinktraining.exercises.datastream_java.cep;
> import org.apache.flink.cep.CEP;
> import org.apache.flink.cep.PatternSelectFunction;
> import org.apache.flink.cep.PatternStream;
> import org.apache.flink.cep.pattern.Pattern;
> import org.apache.flink.cep.pattern.conditions.SimpleCondition;
> import org.apache.flink.streaming.api.datastream.DataStream;
> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> import java.util.List;
> import java.util.Map;
> public class RunLength {
>
>   public static void main(String[] args) throws Exception {
>     StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>     env.setParallelism(1);
>
>     DataStream<Integer> input = env.fromElements(1, 1, 1, 1, 1, 0, 1, 1, 1, 0);
>
>     Pattern<Integer, ?> onesThenZero = Pattern.<Integer>begin("ones")
>       .where(new SimpleCondition<Integer>() {
>         @Override
>         public boolean filter(Integer value) throws Exception {
>           return value == 1;
>         }
>       })
>       .oneOrMore()
>       .greedy()
>       .consecutive()
>       .next("zero")
>       .where(new SimpleCondition<Integer>() {
>         @Override
>         public boolean filter(Integer value) throws Exception {
>           return value == 0;
>         }
>       });
>
>     PatternStream<Integer> patternStream = CEP.pattern(input, onesThenZero);
>
>     // Expected: 5 3
>     // Actual: 5 4 3 2 1 3 2 1
>     patternStream.select(new LengthOfRun()).print();
>     env.execute();
>   }
>
>   public static class LengthOfRun implements PatternSelectFunction<Integer, Integer> {
>     public Integer select(Map<String, List<Integer>> pattern) {
>       return pattern.get("ones").size();
>     }
>   }
> }
> {code}
> The only workaround for now seems to be to rewrite the pattern so that 
> greedy() isn't needed – i.e. by bracketing the greedy section with a prefix 
> and suffix that both have to be matched.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8919) Add KeyedProcessFunctionWIthCleanupState

2018-03-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396491#comment-16396491
 ] 

ASF GitHub Bot commented on FLINK-8919:
---

Github user liurenjie1024 commented on the issue:

https://github.com/apache/flink/pull/5680
  
@bowenli86 This is a trivial change and most of the code is copied from the 
non-keyed counterpart, so I don't think we need a test.


> Add KeyedProcessFunctionWIthCleanupState
> 
>
> Key: FLINK-8919
> URL: https://issues.apache.org/jira/browse/FLINK-8919
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.6.0
>Reporter: Renjie Liu
>Assignee: Renjie Liu
>Priority: Minor
> Fix For: 1.6.0
>
>
> ProcessFunctionWithCleanupState is a useful tool and I think we also need one 
> for the new KeyedProcessFunction api.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Created] (FLINK-8933) Avoid calling Class#newInstance

2018-03-12 Thread Ted Yu (JIRA)
Ted Yu created FLINK-8933:
-

 Summary: Avoid calling Class#newInstance
 Key: FLINK-8933
 URL: https://issues.apache.org/jira/browse/FLINK-8933
 Project: Flink
  Issue Type: Task
Reporter: Ted Yu


Class#newInstance is deprecated starting in Java 9 - 
https://bugs.openjdk.java.net/browse/JDK-6850612 - because it may throw 
undeclared checked exceptions.

The suggested replacement is getDeclaredConstructor().newInstance(), which 
wraps checked exceptions thrown by the constructor in InvocationTargetException.
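A minimal sketch of the suggested replacement (illustrative only, not Flink code; `StringBuilder` stands in for an arbitrary class):

```java
public class NewInstanceDemo {
    public static void main(String[] args) throws Exception {
        // Deprecated since Java 9; checked exceptions from the constructor
        // would propagate undeclared:
        // StringBuilder sb = StringBuilder.class.newInstance();

        // Suggested replacement; constructor exceptions arrive wrapped
        // in java.lang.reflect.InvocationTargetException:
        StringBuilder sb = StringBuilder.class
                .getDeclaredConstructor()
                .newInstance();
        sb.append("ok");
        System.out.println(sb); // prints "ok"
    }
}
```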



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8623) ConnectionUtilsTest.testReturnLocalHostAddressUsingHeuristics unstable on Travis

2018-03-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396383#comment-16396383
 ] 

ASF GitHub Bot commented on FLINK-8623:
---

Github user zhangminglei commented on the issue:

https://github.com/apache/flink/pull/5449
  
Hi, @NicoK. I think ```InetAddress.getAllByName("localhost")``` won't work, 
since we still pass the specific hostname there, and it will return a 
loopback address.
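As a quick stdlib illustration of the point above (a hypothetical demo, not code from the PR): resolving the literal name "localhost" yields only loopback addresses.

```java
import java.net.InetAddress;

public class LocalhostDemo {
    public static void main(String[] args) throws Exception {
        // Resolving the literal hostname "localhost" returns loopback
        // addresses (127.0.0.1 / ::1), never an externally reachable address.
        for (InetAddress addr : InetAddress.getAllByName("localhost")) {
            System.out.println(addr.getHostAddress()
                    + " loopback=" + addr.isLoopbackAddress());
        }
    }
}
```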


> ConnectionUtilsTest.testReturnLocalHostAddressUsingHeuristics unstable on 
> Travis
> 
>
> Key: FLINK-8623
> URL: https://issues.apache.org/jira/browse/FLINK-8623
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Till Rohrmann
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.5.0, 1.4.3
>
>
> {{ConnectionUtilsTest.testReturnLocalHostAddressUsingHeuristics}} fails on 
> Travis: https://travis-ci.org/apache/flink/jobs/33932



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (FLINK-8917) FlinkMiniCluster default createHighAvailabilityServices is not same as ClusterClient

2018-03-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396373#comment-16396373
 ] 

ASF GitHub Bot commented on FLINK-8917:
---

GitHub user jianran opened a pull request:

https://github.com/apache/flink/pull/5686

[FLINK-8917] [Job-Submission] FlinkMiniCluster default 
createHighAvailabilityServices is not same as ClusterClient


## What is the purpose of the change

FlinkMiniCluster uses
HighAvailabilityServicesUtils.createAvailableOrEmbeddedServices to create its
highAvailabilityServices, so FlinkMiniCluster ends up with EmbeddedHaServices.
ClusterClient, however, uses
HighAvailabilityServicesUtils.createHighAvailabilityServices, so its
highAvailabilityServices is StandaloneHaServices. Because the two differ, if
you use Flink 1.4 in Zeppelin (which submits jobs through FlinkMiniCluster),
job submission fails with the following message:

Discard message 
LeaderSessionMessage(----,SubmitJob(JobGraph(jobId:
 33d8e7d74aa48f76a1622d4d8f78105e),EXECUTION_RESULT_AND_STATE_CHANGES)) because 
the expected leader session ID 87efb7ca-b761-4977-9696-d521bc178703 did not 
equal the received leader session ID ----.

This pull request therefore changes FlinkMiniCluster to use
HighAvailabilityServicesUtils.createHighAvailabilityServices, so that it
creates StandaloneHaServices just as ClusterClient does.

## Brief change log

*(for example:)*
  - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
  - *Deployments RPC transmits only the blob storage reference*
  - *TaskManagers retrieve the TaskInfo from the blob cache*


## Verifying this change


This change is a trivial rework / code cleanup without any test coverage.

This change is already covered by existing tests, such as 
LocalFlinkMiniClusterITCase.

## Does this pull request potentially affect one of the following parts:

   - Dependencies (does it add or upgrade a dependency):no 
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`:  no
  - The serializers:  don't know
  - The runtime per-record code paths (performance sensitive):  don't know
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
  - The S3 file system connector:no

## Documentation
  - Does this pull request introduce a new feature? no

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jianran/flink release-1.4

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5686.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5686


commit 4aa4e8d27f602f7dfeadc07c5b76498afb044f06
Author: jianran.tfh 
Date:   2018-03-13T01:08:19Z

[FLINK-8917] [Job-Submission] FlinkMiniCluster haService not same as 
ClusterClient




> FlinkMiniCluster default createHighAvailabilityServices is not same as 
> ClusterClient
> 
>
> Key: FLINK-8917
> URL: https://issues.apache.org/jira/browse/FLINK-8917
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission
>Affects Versions: 1.4.0
>Reporter: jianran.tfh
>Priority: Minor
> Fix For: 1.4.3
>
>
> FlinkMiniCluster's default createHighAvailabilityServices is not the same as 
> ClusterClient's:
> FlinkMiniCluster uses 
> HighAvailabilityServicesUtils.createAvailableOrEmbeddedServices,
> but ClusterClient uses
> HighAvailabilityServicesUtils.createHighAvailabilityServices. So if you use 
> Flink 1.4 in Zeppelin, job submission fails with the following 
> message: 
> Discard message 
> LeaderSessionMessage(----,SubmitJob(JobGraph(jobId:
>  33d8e7d74aa48f76a1622d4d8f78105e),EXECUTION_RESULT_AND_STATE_CHANGES)) 
> because the expected leader session ID 87efb7ca-b761-4977-9696-d521bc178703 
> did not equal the received leader session ID 
> ----.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Comment Edited] (FLINK-7795) Utilize error-prone to discover common coding mistakes

2018-03-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345955#comment-16345955
 ] 

Ted Yu edited comment on FLINK-7795 at 3/13/18 1:27 AM:


error-prone has JDK 8 dependency.


was (Author: yuzhih...@gmail.com):
error-prone has JDK 8 dependency .

> Utilize error-prone to discover common coding mistakes
> --
>
> Key: FLINK-7795
> URL: https://issues.apache.org/jira/browse/FLINK-7795
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Ted Yu
>Priority: Major
>
> http://errorprone.info/ is a tool which detects common coding mistakes.
> We should incorporate into Flink build process.
> Here are the dependencies:
> {code}
> <dependency>
>   <groupId>com.google.errorprone</groupId>
>   <artifactId>error_prone_annotation</artifactId>
>   <version>${error-prone.version}</version>
>   <scope>provided</scope>
> </dependency>
> <dependency>
>   <groupId>com.google.auto.service</groupId>
>   <artifactId>auto-service</artifactId>
>   <version>1.0-rc3</version>
>   <optional>true</optional>
> </dependency>
> <dependency>
>   <groupId>com.google.errorprone</groupId>
>   <artifactId>error_prone_check_api</artifactId>
>   <version>${error-prone.version}</version>
>   <scope>provided</scope>
>   <exclusions>
>     <exclusion>
>       <groupId>com.google.code.findbugs</groupId>
>       <artifactId>jsr305</artifactId>
>     </exclusion>
>   </exclusions>
> </dependency>
> <dependency>
>   <groupId>com.google.errorprone</groupId>
>   <artifactId>javac</artifactId>
>   <version>9-dev-r4023-3</version>
>   <scope>provided</scope>
> </dependency>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-8708) Unintended integer division in StandaloneThreadedGenerator

2018-03-12 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-8708:
--
Description: 
In 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/generator/StandaloneThreadedGenerator.java
 :
{code}
double factor = (ts - lastTimeStamp) / 1000;
{code}

Proper casting should be done before the integer division

  was:
In 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/generator/StandaloneThreadedGenerator.java
 :
{code}
double factor = (ts - lastTimeStamp) / 1000;
{code}
Proper casting should be done before the integer division


> Unintended integer division in StandaloneThreadedGenerator
> --
>
> Key: FLINK-8708
> URL: https://issues.apache.org/jira/browse/FLINK-8708
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> In 
> flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/statemachine/generator/StandaloneThreadedGenerator.java
>  :
> {code}
> double factor = (ts - lastTimeStamp) / 1000;
> {code}
> Proper casting should be done before the integer division
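The pitfall can be reproduced with plain Java (a hypothetical minimal demo; `ts` and `lastTimeStamp` merely stand in for the generator's timestamps):

```java
public class DivisionDemo {
    public static void main(String[] args) {
        long ts = 1500L;
        long lastTimeStamp = 750L;

        // Integer (long) division truncates before the widening to double:
        double wrong = (ts - lastTimeStamp) / 1000;   // 0.0

        // A floating-point divisor (or an explicit cast) fixes it:
        double right = (ts - lastTimeStamp) / 1000.0; // 0.75
        double alsoRight = ((double) (ts - lastTimeStamp)) / 1000;

        System.out.println(wrong + " " + right + " " + alsoRight);
        // prints: 0.0 0.75 0.75
    }
}
```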



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7521) Remove the 10MB limit from the current REST implementation.

2018-03-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396194#comment-16396194
 ] 

ASF GitHub Bot commented on FLINK-7521:
---

Github user GJL commented on the issue:

https://github.com/apache/flink/pull/5685
  
Maybe docs should be updated.


> Remove the 10MB limit from the current REST implementation.
> ---
>
> Key: FLINK-7521
> URL: https://issues.apache.org/jira/browse/FLINK-7521
> Project: Flink
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 1.4.0, 1.5.0, 1.6.0
>Reporter: Kostas Kloudas
>Assignee: Gary Yao
>Priority: Blocker
>  Labels: flip6
> Fix For: 1.5.0
>
>
> In the current {{AbstractRestServer}} we impose an upper bound of 10MB in the 
> states we can transfer. This is in the line {{.addLast(new 
> HttpObjectAggregator(1024 * 1024 * 10))}} of the server implementation. 
> This limit is restrictive for some of the use cases planned for this 
> implementation (e.g. the job submission client, which has to send full jars, 
> or the queryable state client, which may have to receive states bigger than 
> that).
> This issue proposes the elimination of this limit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (FLINK-7521) Remove the 10MB limit from the current REST implementation.

2018-03-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396188#comment-16396188
 ] 

ASF GitHub Bot commented on FLINK-7521:
---

GitHub user GJL opened a pull request:

https://github.com/apache/flink/pull/5685

[FLINK-7521][flip6]

## What is the purpose of the change

*Make HTTP request and response limits configurable. A relatively high 
default value is chosen (100 mb) because Netty does not allocate the upper 
limit at once.*


## Brief change log

  - *Make HTTP request and response limits configurable.*

## Verifying this change

This change added tests and can be verified as follows:
  - *Added tests to `RestServerEndpointITCase`*
  - *Manually verified that client and server limits are respected.*

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / **no**)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
  - The serializers: (yes / **no** / don't know)
  - The runtime per-record code paths (performance sensitive): (yes / 
**no** / don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
  - The S3 file system connector: (yes / **no** / don't know)

## Documentation

  - Does this pull request introduce a new feature? (yes / **no**)
  - If yes, how is the feature documented? (not applicable / docs / 
JavaDocs / **not documented**)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/GJL/flink FLINK-7521-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5685.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5685


commit aef8fa247a4c8b14f4dd7ce8f324ccd89bb2ce14
Author: zjureel 
Date:   2017-09-07T02:39:39Z

[FLINK-7521] Add config option to set the content length limit of REST 
server and client

commit ff6c7eb1127ff1870f479c1b779379cc22c9dc87
Author: gyao 
Date:   2018-03-12T14:44:27Z

[FLINK-7521][flip6] Remove RestServerEndpoint#MAX_REQUEST_SIZE_BYTES

commit a14e5935dd9132ddb43e55e357674d390ff9c597
Author: gyao 
Date:   2018-03-12T22:16:25Z

[FLINK-7521][flip6] Return HTTP 413 if request limit is exceeded.

Remove unnecessary PipelineErrorHandler from RestClient.
Rename config keys for configuring request and response limits.
Set response headers for all error responses.




> Remove the 10MB limit from the current REST implementation.
> ---
>
> Key: FLINK-7521
> URL: https://issues.apache.org/jira/browse/FLINK-7521
> Project: Flink
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 1.4.0, 1.5.0, 1.6.0
>Reporter: Kostas Kloudas
>Assignee: Gary Yao
>Priority: Blocker
>  Labels: flip6
> Fix For: 1.5.0
>
>
> In the current {{AbstractRestServer}} we impose an upper bound of 10MB in the 
> states we can transfer. This is in the line {{.addLast(new 
> HttpObjectAggregator(1024 * 1024 * 10))}} of the server implementation. 
> This limit is restrictive for some of the use cases planned for this 
> implementation (e.g. the job submission client, which has to send full jars, 
> or the queryable state client, which may have to receive states bigger than 
> that).
> This issue proposes the elimination of this limit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Closed] (FLINK-8927) Eagerly release the checkpoint object created from RocksDB

2018-03-12 Thread Stefan Richter (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Richter closed FLINK-8927.
-
Resolution: Fixed

Merged in 3debf47e5d.

> Eagerly release the checkpoint object created from RocksDB
> --
>
> Key: FLINK-8927
> URL: https://issues.apache.org/jira/browse/FLINK-8927
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.5.0
>Reporter: Sihua Zhou
>Assignee: Sihua Zhou
>Priority: Major
> Fix For: 1.5.0
>
>
> We should eagerly release the checkpoint object that is created from RocksDB, 
> because it's a {{RocksObject}} (a native resource).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (FLINK-8927) Eagerly release the checkpoint object created from RocksDB

2018-03-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396103#comment-16396103
 ] 

ASF GitHub Bot commented on FLINK-8927:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5682


> Eagerly release the checkpoint object created from RocksDB
> --
>
> Key: FLINK-8927
> URL: https://issues.apache.org/jira/browse/FLINK-8927
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.5.0
>Reporter: Sihua Zhou
>Assignee: Sihua Zhou
>Priority: Major
> Fix For: 1.5.0
>
>
> We should eagerly release the checkpoint object that is created from RocksDB, 
> because it's a {{RocksObject}} (a native resource).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8919) Add KeyedProcessFunctionWIthCleanupState

2018-03-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395867#comment-16395867
 ] 

ASF GitHub Bot commented on FLINK-8919:
---

Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/5680
  
shall we add a unit test?


> Add KeyedProcessFunctionWIthCleanupState
> 
>
> Key: FLINK-8919
> URL: https://issues.apache.org/jira/browse/FLINK-8919
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.6.0
>Reporter: Renjie Liu
>Assignee: Renjie Liu
>Priority: Minor
> Fix For: 1.6.0
>
>
> ProcessFunctionWithCleanupState is a useful tool and I think we also need one 
> for the new KeyedProcessFunction api.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Updated] (FLINK-8932) Mesos taskmanager should reserve port for query server

2018-03-12 Thread Jared Stehler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jared Stehler updated FLINK-8932:
-
Description: 
Currently the LaunchableMesosWorker doesn't reserve a port for the query 
server, making it challenging to use the queryable state feature in mesos.

The applicable config param is: query.server.ports

 An additional port reservation for the proxy server would likely be necessary 
as well: query.proxy.ports

  was:
Currently the LaunchableMesosWorker doesn't reserve a port for the query 
server, making it challenging to use the queryable state feature in mesos.

The applicable config param is: query.server.ports

 


> Mesos taskmanager should reserve port for query server
> --
>
> Key: FLINK-8932
> URL: https://issues.apache.org/jira/browse/FLINK-8932
> Project: Flink
>  Issue Type: Improvement
>  Components: Mesos
>Affects Versions: 1.4.0
>Reporter: Jared Stehler
>Priority: Major
>
> Currently the LaunchableMesosWorker doesn't reserve a port for the query 
> server, making it challenging to use the queryable state feature in mesos.
> The applicable config param is: query.server.ports
>  An additional port reservation for the proxy server would likely be 
> necessary as well: query.proxy.ports



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-8932) Mesos taskmanager should reserve port for query server

2018-03-12 Thread Jared Stehler (JIRA)
Jared Stehler created FLINK-8932:


 Summary: Mesos taskmanager should reserve port for query server
 Key: FLINK-8932
 URL: https://issues.apache.org/jira/browse/FLINK-8932
 Project: Flink
  Issue Type: Improvement
  Components: Mesos
Affects Versions: 1.4.0
Reporter: Jared Stehler


Currently the LaunchableMesosWorker doesn't reserve a port for the query 
server, making it challenging to use the queryable state feature in mesos.

The applicable config param is: query.server.ports

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-8931) TASK_KILLING is not covered by match in TaskMonitor#whenUnhandled

2018-03-12 Thread Ted Yu (JIRA)
Ted Yu created FLINK-8931:
-

 Summary: TASK_KILLING is not covered by match in 
TaskMonitor#whenUnhandled
 Key: FLINK-8931
 URL: https://issues.apache.org/jira/browse/FLINK-8931
 Project: Flink
  Issue Type: Bug
Reporter: Ted Yu


Noticed the following:
{code}
[WARNING] 
/a/flink-1.3.3/flink-mesos/src/main/scala/org/apache/flink/mesos/scheduler/TaskMonitor.scala:157:
 warning: match may not be exhaustive.
[WARNING] It would fail on the following input: TASK_KILLING
[WARNING]   msg.status().getState match {
[WARNING]^
[WARNING] 
/a/flink-1.3.3/flink-mesos/src/main/scala/org/apache/flink/mesos/scheduler/TaskMonitor.scala:170:
 warning: match may not be exhaustive.
[WARNING] It would fail on the following input: TASK_KILLING
[WARNING]   msg.status().getState match {
[WARNING]^
{code}
It seems that TASK_KILLING should be covered by the last case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-7588) Document RocksDB tuning for spinning disks

2018-03-12 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16258309#comment-16258309
 ] 

Ted Yu edited comment on FLINK-7588 at 3/12/18 7:26 PM:


bq. Be careful about whether you have enough memory to keep all bloom filters

Other than the above being tricky, the other guidelines are actionable.


was (Author: yuzhih...@gmail.com):
bq. Be careful about whether you have enough memory to keep all bloom filters


Other than the above being tricky, the other guidelines are actionable .

> Document RocksDB tuning for spinning disks
> --
>
> Key: FLINK-7588
> URL: https://issues.apache.org/jira/browse/FLINK-7588
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Ted Yu
>Priority: Major
>  Labels: performance
>
> In docs/ops/state/large_state_tuning.md , it was mentioned that:
> bq. the default configuration is tailored towards SSDs and performs 
> suboptimal on spinning disks
> We should add recommendation targeting spinning disks:
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide#difference-of-spinning-disk



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-6924) ADD LOG(X) supported in TableAPI

2018-03-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395641#comment-16395641
 ] 

ASF GitHub Bot commented on FLINK-6924:
---

Github user buptljy commented on a diff in the pull request:

https://github.com/apache/flink/pull/5638#discussion_r173897783
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/expressionDsl.scala
 ---
@@ -1130,4 +1130,13 @@ object concat_ws {
   }
 }
 
+object log {
+  def apply(base: Expression, antilogarithm: Expression): Expression = {
+Log(base, antilogarithm)
+  }
+  def apply(antilogarithm: Expression): Expression = {
+new Log(antilogarithm)
--- End diff --

1. I've tried it before, but it seems that we can't restructure log(x) and 
ln(x) together, because the expression in SQL is reflected directly as an 
instance of a case class. Please let me know if you figure it out.
2. Actually, we should use log(base, antilogarithm).
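The two overloads above distinguish an explicit base from the natural logarithm. A minimal Java sketch of the assumed semantics (the names `log` and the defaulting behavior mirror the Scala `apply` overloads and are an assumption, not the Flink implementation):

```java
public class LogExample {
    // Two-argument form: change-of-base identity, log_base(x) = ln(x) / ln(base).
    static double log(double base, double antilogarithm) {
        return Math.log(antilogarithm) / Math.log(base);
    }

    // One-argument form: assumed to default to the natural logarithm,
    // mirroring `new Log(antilogarithm)` in the diff.
    static double log(double antilogarithm) {
        return Math.log(antilogarithm);
    }

    public static void main(String[] args) {
        System.out.println(log(2.0, 8.0));  // approximately 3.0
        System.out.println(log(Math.E));    // approximately 1.0
    }
}
```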


> ADD LOG(X) supported in TableAPI
> 
>
> Key: FLINK-6924
> URL: https://issues.apache.org/jira/browse/FLINK-6924
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API  SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: zjuwangg
>Priority: Major
>  Labels: starter
>
> See FLINK-6891 for detail.







[jira] [Commented] (FLINK-6924) ADD LOG(X) supported in TableAPI

2018-03-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395620#comment-16395620
 ] 

ASF GitHub Bot commented on FLINK-6924:
---

Github user buptljy commented on the issue:

https://github.com/apache/flink/pull/5638
  
@walterddr I'm not able to add a validation test because I am blocked by 
[FLINK-8930](https://issues.apache.org/jira/browse/FLINK-8930). 









[jira] [Created] (FLINK-8930) TableApi validation test in ScalarFunctionsValidationTest doesn't work

2018-03-12 Thread Wind (JIRA)
Wind created FLINK-8930:
---

 Summary: TableApi validation test in ScalarFunctionsValidationTest 
doesn't work
 Key: FLINK-8930
 URL: https://issues.apache.org/jira/browse/FLINK-8930
 Project: Flink
  Issue Type: Bug
  Components: Table API  SQL
Reporter: Wind


I'm writing a validation test for 
[FLINK-6924|https://issues.apache.org/jira/browse/FLINK-6924] in 
org.apache.flink.table.expressions.validation.ScalarFunctionsValidationTest. 
However, I find that the Table API is not truly executed in 
"testTableApi", which differs from "testSqlApi". As a result, we can only test 
exceptions that are thrown in "addTableApiTestExpr", such as "ValidationException", 
because they are thrown during the "select" operation. 





[jira] [Commented] (FLINK-8854) Mapping of SchemaValidator.deriveFieldMapping() is incorrect.

2018-03-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395581#comment-16395581
 ] 

ASF GitHub Bot commented on FLINK-8854:
---

Github user xccui commented on a diff in the pull request:

https://github.com/apache/flink/pull/5662#discussion_r173888555
  
--- Diff: 
flink-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaJsonTableSourceFactoryTestBase.java
 ---
@@ -89,9 +94,10 @@ private void testTableSource(FormatDescriptor format) {
// construct table source using a builder
 
final Map tableJsonMapping = new HashMap<>();
+   tableJsonMapping.put("name", "name");
--- End diff --

```
physical schema ==mapping=> "intermediate schema" ==timestamp extraction 
and projection=> logical schema
```
Maybe we should consider eliminating the "intermediate schema" in the future?


> Mapping of SchemaValidator.deriveFieldMapping() is incorrect.
> -
>
> Key: FLINK-8854
> URL: https://issues.apache.org/jira/browse/FLINK-8854
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: Fabian Hueske
>Assignee: Timo Walther
>Priority: Blocker
> Fix For: 1.5.0, 1.6.0
>
>
> The field mapping returned by {{SchemaValidator.deriveFieldMapping()}} is not 
> correct.
> It should not only include all fields of the table schema, but also all 
> fields of the format schema (mapped to themselves). Otherwise, it is not 
> possible to use a timestamp extractor on a field that is not in table schema. 
> For example this configuration would fail:
> {code}
> sources:
>   - name: TaxiRides
> schema:
>   - name: rideId
> type: LONG
>   - name: rowTime
> type: TIMESTAMP
> rowtime:
>   timestamps:
> type: "from-field"
> from: "rideTime"
>   watermarks:
> type: "periodic-bounded"
> delay: "6"
> connector:
>   
> format:
>   property-version: 1
>   type: json
>   schema: "ROW(rideId LONG, rideTime TIMESTAMP)"
> {code}
> because {{rideTime}} is not in the table schema.
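The proposed fix is that the derived mapping contain the table-schema entries plus every format-schema field mapped to itself. A sketch of that behavior in Java (the helper name and shape are illustrative; this is not the actual `SchemaValidator` code):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class FieldMappingSketch {
    // Table-schema fields keep their explicit mapping; every format-schema
    // field is additionally mapped to itself, so a timestamp extractor can
    // reference a field (e.g. rideTime) that is not in the table schema.
    static Map<String, String> deriveFieldMapping(
            Map<String, String> tableMapping, Iterable<String> formatFields) {
        Map<String, String> mapping = new LinkedHashMap<>(tableMapping);
        for (String field : formatFields) {
            mapping.putIfAbsent(field, field); // identity mapping
        }
        return mapping;
    }

    public static void main(String[] args) {
        Map<String, String> tableMapping = new HashMap<>();
        tableMapping.put("rideId", "rideId");
        // Format schema from the example: ROW(rideId LONG, rideTime TIMESTAMP)
        Map<String, String> mapping = deriveFieldMapping(
                tableMapping, Arrays.asList("rideId", "rideTime"));
        System.out.println(mapping);
    }
}
```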







[jira] [Commented] (FLINK-6924) ADD LOG(X) supported in TableAPI

2018-03-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395565#comment-16395565
 ] 

ASF GitHub Bot commented on FLINK-6924:
---

Github user buptljy commented on the issue:

https://github.com/apache/flink/pull/5638
  
@suez1224 Docs are added in both java and scala.









[jira] [Commented] (FLINK-8537) Add a Kafka table source factory with Avro format support

2018-03-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395562#comment-16395562
 ] 

ASF GitHub Bot commented on FLINK-8537:
---

Github user xccui commented on the issue:

https://github.com/apache/flink/pull/5610
  
Hi @twalthr, I've rebased this PR and fixed some problems.


> Add a Kafka table source factory with Avro format support
> -
>
> Key: FLINK-8537
> URL: https://issues.apache.org/jira/browse/FLINK-8537
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API  SQL
>Reporter: Timo Walther
>Assignee: Xingcan Cui
>Priority: Major
>
> Similar to {{CSVTableSourceFactory}} a Kafka table source factory should be 
> added. This issue includes creating a {{Avro}} descriptor with validation 
> that can be used for other connectors as well. It is up for discussion if we 
> want to split the KafkaAvroTableSource into connector and format such that we 
> can reuse the format for other table sources as well.







[jira] [Commented] (FLINK-8854) Mapping of SchemaValidator.deriveFieldMapping() is incorrect.

2018-03-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395556#comment-16395556
 ] 

ASF GitHub Bot commented on FLINK-8854:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/5662#discussion_r173881141
  
--- Diff: 
flink-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaJsonTableSourceFactoryTestBase.java
 ---
@@ -89,9 +94,10 @@ private void testTableSource(FormatDescriptor format) {
// construct table source using a builder
 
final Map tableJsonMapping = new HashMap<>();
+   tableJsonMapping.put("name", "name");
--- End diff --

@fhueske what do you think about this whole mapping business?









[jira] [Commented] (FLINK-8854) Mapping of SchemaValidator.deriveFieldMapping() is incorrect.

2018-03-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395545#comment-16395545
 ] 

ASF GitHub Bot commented on FLINK-8854:
---

Github user xccui commented on a diff in the pull request:

https://github.com/apache/flink/pull/5662#discussion_r173879267
  
--- Diff: 
flink-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaJsonTableSourceFactoryTestBase.java
 ---
@@ -89,9 +94,10 @@ private void testTableSource(FormatDescriptor format) {
// construct table source using a builder
 
final Map tableJsonMapping = new HashMap<>();
+   tableJsonMapping.put("name", "name");
--- End diff --

Well, according to the current implementation, you are right. But I still 
feel uncomfortable about that since we actually mix the physical schema (format 
schema) and the logical schema (table schema) into the same map. Do you think 
it's necessary to make some changes here?









[jira] [Commented] (FLINK-8783) Test instability SlotPoolRpcTest.testExtraSlotsAreKept

2018-03-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395525#comment-16395525
 ] 

ASF GitHub Bot commented on FLINK-8783:
---

GitHub user tillrohrmann opened a pull request:

https://github.com/apache/flink/pull/5684

[FLINK-8783] [tests] Harden SlotPoolRpcTest

## What is the purpose of the change

Wait for the release of timed-out pending slot requests before checking the
number of pending slot requests.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tillrohrmann/flink hardenSlotPoolRpcTest

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5684.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5684


commit 5f3e1a42e249fd7187d846ae4f851a032f9017f0
Author: Till Rohrmann 
Date:   2018-03-12T17:04:38Z

[FLINK-8783] [tests] Harden SlotPoolRpcTest

Wait for releasing of timed out pending slot requests before checking the
number of pending slots requests.
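The hardening approach is to wait for the asynchronous release to complete before asserting on the count, instead of asserting immediately. A generic sketch of that pattern (the `waitUntil` helper is illustrative, not the actual Flink test utility):

```java
import java.util.function.BooleanSupplier;

public class TestHarden {
    // Poll until the condition holds or the deadline passes, rather than
    // asserting right away; this removes the race that made the test flaky.
    static boolean waitUntil(BooleanSupplier condition, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(10L); // back off briefly between checks
        }
        return condition.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Simulate a pending slot request that is released after ~50 ms.
        boolean released = waitUntil(
                () -> System.currentTimeMillis() - start > 50, 1000L);
        System.out.println(released);
    }
}
```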




> Test instability SlotPoolRpcTest.testExtraSlotsAreKept
> --
>
> Key: FLINK-8783
> URL: https://issues.apache.org/jira/browse/FLINK-8783
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Gary Yao
>Assignee: Till Rohrmann
>Priority: Blocker
> Fix For: 1.5.0
>
>
> [https://travis-ci.org/GJL/flink/jobs/346206290]
>  {noformat}
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.784 sec <<< 
> FAILURE! - in org.apache.flink.runtime.jobmaster.slotpool.SlotPoolRpcTest
> testExtraSlotsAreKept(org.apache.flink.runtime.jobmaster.slotpool.SlotPoolRpcTest)
>  Time elapsed: 0.016 sec <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:<1>
>  at org.junit.Assert.fail(Assert.java:88)
>  at org.junit.Assert.failNotEquals(Assert.java:834)
>  at org.junit.Assert.assertEquals(Assert.java:645)
>  at org.junit.Assert.assertEquals(Assert.java:631)
>  at 
> org.apache.flink.runtime.jobmaster.slotpool.SlotPoolRpcTest.testExtraSlotsAreKept(SlotPoolRpcTest.java:267)
> {noformat}
> I reproduced this in IntelliJ by configuring 50 consecutive runs of 
> {{testExtraSlotsAreKept}}. On my machine the 8th execution fails sporadically.
> commit: eeac022f0538e0979e6ad4eb06a2d1031cbd0146







[jira] [Created] (FLINK-8929) "UnknownTaskExecutorException: No TaskExecutor registered" when having tab open for taskmanager that does not exist (anymore)

2018-03-12 Thread Florian Schmidt (JIRA)
Florian Schmidt created FLINK-8929:
--

 Summary: "UnknownTaskExecutorException: No TaskExecutor 
registered" when having tab open for taskmanager that does not exist (anymore)
 Key: FLINK-8929
 URL: https://issues.apache.org/jira/browse/FLINK-8929
 Project: Flink
  Issue Type: Bug
Reporter: Florian Schmidt


When having a browser tab open for logs of task manager that does not exist 
anymore (in my case 
http://localhost:8081/#/taskmanager/a460742356b7f02e448c6e785d34bf0c/log) it 
will print the following exception in the log at 
{code}
flink-florianschmidt-standalonesession-0-Florians-MBP.fritz.box.log
{code}

{code}
2018-03-12 17:34:47,420 ERROR 
org.apache.flink.runtime.rest.handler.taskmanager.TaskManagerDetailsHandler  - 
Implementation error: Unhandled exception.
org.apache.flink.runtime.resourcemanager.exceptions.UnknownTaskExecutorException:
 No TaskExecutor registered under a460742356b7f02e448c6e785d34bf0c.
at 
org.apache.flink.runtime.resourcemanager.ResourceManager.requestTaskManagerInfo(ResourceManager.java:538)
at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:210)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154)
at 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleMessage(FencedAkkaRpcActor.java:66)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$onReceive$1(AkkaRpcActor.java:132)
at 
akka.actor.ActorCell$$anonfun$become$1.applyOrElse(ActorCell.scala:544)
at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
at akka.actor.ActorCell.invoke(ActorCell.scala:495)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
at akka.dispatch.Mailbox.run(Mailbox.scala:224)
at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
{code}






[jira] [Commented] (FLINK-7589) org.apache.http.ConnectionClosedException: Premature end of Content-Length delimited message body (expected: 159764230; received: 64638536)

2018-03-12 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395479#comment-16395479
 ] 

Steve Loughran commented on FLINK-7589:
---

We've been thinking of cutting a shaded version of the hadoop-cloud module, 
bonded to the already-shaded AWS SDK (i.e. it will still be brittle to AWS SDK 
changes). If you want to contribute that to Hadoop 3.2+, it could be backported 
to the rest of the 3.x release line. At the very least, you'd get to learn what 
it took to isolate things.

> org.apache.http.ConnectionClosedException: Premature end of Content-Length 
> delimited message body (expected: 159764230; received: 64638536)
> ---
>
> Key: FLINK-7589
> URL: https://issues.apache.org/jira/browse/FLINK-7589
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Bowen Li
>Priority: Blocker
> Fix For: 1.4.0, 1.3.4
>
>
> When I tried to resume a Flink job from a savepoint with different 
> parallelism, I ran into this error. And the resume failed.
> {code:java}
> 2017-09-05 21:53:57,317 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph- returnsivs -> 
> Sink: xxx (7/12) (0da16ec908fc7b9b16a5c2cf1aa92947) switched from RUNNING to 
> FAILED.
> org.apache.http.ConnectionClosedException: Premature end of Content-Length 
> delimited message body (expected: 159764230; received: 64638536
>   at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:180)
>   at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:137)
>   at java.io.FilterInputStream.read(FilterInputStream.java:133)
>   at java.io.FilterInputStream.read(FilterInputStream.java:133)
>   at java.io.FilterInputStream.read(FilterInputStream.java:133)
>   at 
> com.amazonaws.util.ContentLengthValidationInputStream.read(ContentLengthValidationInputStream.java:77)
>   at java.io.FilterInputStream.read(FilterInputStream.java:133)
>   at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:160)
>   at java.io.DataInputStream.read(DataInputStream.java:149)
>   at 
> org.apache.flink.runtime.fs.hdfs.HadoopDataInputStream.read(HadoopDataInputStream.java:72)
>   at 
> org.apache.flink.core.fs.FSDataInputStreamWrapper.read(FSDataInputStreamWrapper.java:61)
>   at 
> org.apache.flink.runtime.util.NonClosingStreamDecorator.read(NonClosingStreamDecorator.java:47)
>   at java.io.DataInputStream.readFully(DataInputStream.java:195)
>   at java.io.DataInputStream.readLong(DataInputStream.java:416)
>   at 
> org.apache.flink.api.common.typeutils.base.LongSerializer.deserialize(LongSerializer.java:68)
>   at 
> org.apache.flink.streaming.api.operators.InternalTimer$TimerSerializer.deserialize(InternalTimer.java:156)
>   at 
> org.apache.flink.streaming.api.operators.HeapInternalTimerService.restoreTimersForKeyGroup(HeapInternalTimerService.java:345)
>   at 
> org.apache.flink.streaming.api.operators.InternalTimeServiceManager.restoreStateForKeyGroup(InternalTimeServiceManager.java:141)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:496)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:104)
>   at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:251)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeOperators(StreamTask.java:676)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:663)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:252)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
>   at java.lang.Thread.run(Thread.java:748)
> {code}







[jira] [Commented] (FLINK-8854) Mapping of SchemaValidator.deriveFieldMapping() is incorrect.

2018-03-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395476#comment-16395476
 ] 

ASF GitHub Bot commented on FLINK-8854:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/5662#discussion_r173863375
  
--- Diff: 
flink-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaJsonTableSourceFactoryTestBase.java
 ---
@@ -89,9 +94,10 @@ private void testTableSource(FormatDescriptor format) {
// construct table source using a builder
 
final Map tableJsonMapping = new HashMap<>();
+   tableJsonMapping.put("name", "name");
--- End diff --

The question is what should a rowtime attribute field (or a custom 
extractor) reference? The input or the current schema? I think it should 
reference the input thus all fields (even the renamed ones) need to be present 
in the mapping.







[jira] [Commented] (FLINK-8795) Scala shell broken for Flip6

2018-03-12 Thread Kedar Mhaswade (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395472#comment-16395472
 ] 

Kedar Mhaswade commented on FLINK-8795:
---

Thanks [~Zentol]! I can confirm that the workaround works.

> Scala shell broken for Flip6
> 
>
> Key: FLINK-8795
> URL: https://issues.apache.org/jira/browse/FLINK-8795
> Project: Flink
>  Issue Type: Bug
>Reporter: kant kodali
>Priority: Blocker
> Fix For: 1.5.0
>
>
> I am trying to run the simple code below after building everything from 
> Flink's GitHub master branch for various reasons. I get the exception below, 
> and I wonder what runs on port 9065 and how to fix this exception.
> I followed the instructions from the Flink master branch so I did the 
> following.
> {code:java}
> git clone https://github.com/apache/flink.git 
> cd flink mvn clean package -DskipTests 
> cd build-target
>  ./bin/start-scala-shell.sh local{code}
> {{And Here is the code I ran}}
> {code:java}
> val dataStream = senv.fromElements(1, 2, 3, 4)
> dataStream.countWindowAll(2).sum(0).print()
> senv.execute("My streaming program"){code}
> {{And I finally get this exception}}
> {code:java}
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to 
> submit JobGraph. at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$18(RestClusterClient.java:306)
>  at 
> java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
>  at 
> java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
>  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>  at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
>  at 
> org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$222(RestClient.java:196)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:268)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:284)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>  at java.lang.Thread.run(Thread.java:745) Caused by: 
> java.util.concurrent.CompletionException: java.net.ConnectException: 
> Connection refused: localhost/127.0.0.1:9065 at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
>  at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
>  at 
> java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:943) 
> at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
>  ... 16 more Caused by: java.net.ConnectException: Connection refused: 
> localhost/127.0.0.1:9065 at sun.nio.ch.SocketChannelImpl.checkConnect(Native 
> Method) at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) at 
> org.apache.flink.shaded.netty4.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:281){code}
>  





[jira] [Commented] (FLINK-8854) Mapping of SchemaValidator.deriveFieldMapping() is incorrect.

2018-03-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395467#comment-16395467
 ] 

ASF GitHub Bot commented on FLINK-8854:
---

Github user xccui commented on a diff in the pull request:

https://github.com/apache/flink/pull/5662#discussion_r173860901
  
--- Diff: 
flink-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaJsonTableSourceFactoryTestBase.java
 ---
@@ -89,9 +94,10 @@ private void testTableSource(FormatDescriptor format) {
// construct table source using a builder
 
final Map tableJsonMapping = new HashMap<>();
+   tableJsonMapping.put("name", "name");
--- End diff --

This "name" to "name" mapping should not exist since we've already 
explicitly defined the "fruit-name" to "name" mapping.


> Mapping of SchemaValidator.deriveFieldMapping() is incorrect.
> -
>
> Key: FLINK-8854
> URL: https://issues.apache.org/jira/browse/FLINK-8854
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.5.0
>Reporter: Fabian Hueske
>Assignee: Timo Walther
>Priority: Blocker
> Fix For: 1.5.0, 1.6.0
>
>
> The field mapping returned by {{SchemaValidator.deriveFieldMapping()}} is not 
> correct.
> It should not only include all fields of the table schema, but also all 
> fields of the format schema (mapped to themselves). Otherwise, it is not 
> possible to use a timestamp extractor on a field that is not in table schema. 
> For example this configuration would fail:
> {code}
> sources:
>   - name: TaxiRides
> schema:
>   - name: rideId
> type: LONG
>   - name: rowTime
> type: TIMESTAMP
> rowtime:
>   timestamps:
> type: "from-field"
> from: "rideTime"
>   watermarks:
> type: "periodic-bounded"
> delay: "6"
> connector:
>   
> format:
>   property-version: 1
>   type: json
>   schema: "ROW(rideId LONG, rideTime TIMESTAMP)"
> {code}
> because {{rideTime}} is not in the table schema.






[jira] [Commented] (FLINK-8854) Mapping of SchemaValidator.deriveFieldMapping() is incorrect.

2018-03-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395455#comment-16395455
 ] 

ASF GitHub Bot commented on FLINK-8854:
---

Github user xccui commented on a diff in the pull request:

https://github.com/apache/flink/pull/5662#discussion_r173857218
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/descriptors/SchemaValidator.scala
 ---
@@ -148,6 +148,13 @@ object SchemaValidator {
 
 val schema = properties.getTableSchema(SCHEMA)
 
+// add all source fields first because rowtime might reference one of 
them
+toScala(sourceSchema).map(_.getColumnNames).foreach { names =>
--- End diff --

I think we should first remove the added source fields before adding the 
explicit mappings with the following snippet. 
```
// add explicit mapping
case Some(source) =>
// should add mapping.remove(source)
mapping.put(name, source)
```
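The reviewer's suggestion above can be sketched in plain Java rather than the Scala of `SchemaValidator` (`addExplicitMapping` is a hypothetical helper): removing the source field's self-mapping before adding the explicit entry leaves exactly one mapping per source field.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ExplicitMappingSketch {

    // Hypothetical helper: when an explicit mapping (name -> source) is added,
    // first remove the source field's earlier self-mapping so the result does
    // not contain both "source -> source" and "name -> source".
    static void addExplicitMapping(Map<String, String> mapping,
                                   String name, String source) {
        mapping.remove(source);    // drop source -> source added earlier
        mapping.put(name, source); // keep only name -> source
    }

    public static void main(String[] args) {
        Map<String, String> mapping = new LinkedHashMap<>();
        mapping.put("name", "name"); // self-mapping from the source schema
        addExplicitMapping(mapping, "fruit-name", "name");
        System.out.println(mapping); // prints {fruit-name=name}
    }
}
```

Applied to the Kafka test case discussed above, the mapping would then contain "fruit-name" to "name" but no "name" to "name" entry.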


> Mapping of SchemaValidator.deriveFieldMapping() is incorrect.
> -
>
> Key: FLINK-8854
> URL: https://issues.apache.org/jira/browse/FLINK-8854
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.5.0
>Reporter: Fabian Hueske
>Assignee: Timo Walther
>Priority: Blocker
> Fix For: 1.5.0, 1.6.0
>
>
> The field mapping returned by {{SchemaValidator.deriveFieldMapping()}} is not 
> correct.
> It should not only include all fields of the table schema, but also all 
> fields of the format schema (mapped to themselves). Otherwise, it is not 
> possible to use a timestamp extractor on a field that is not in table schema. 
> For example this configuration would fail:
> {code}
> sources:
>   - name: TaxiRides
> schema:
>   - name: rideId
> type: LONG
>   - name: rowTime
> type: TIMESTAMP
> rowtime:
>   timestamps:
> type: "from-field"
> from: "rideTime"
>   watermarks:
> type: "periodic-bounded"
> delay: "6"
> connector:
>   
> format:
>   property-version: 1
>   type: json
>   schema: "ROW(rideId LONG, rideTime TIMESTAMP)"
> {code}
> because {{rideTime}} is not in the table schema.







[jira] [Closed] (FLINK-8853) SQL Client cannot emit query results that contain a rowtime attribute

2018-03-12 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-8853.
---
   Resolution: Duplicate
Fix Version/s: (was: 1.5.0)

Will be fixed as part of FLINK-8850.

> SQL Client cannot emit query results that contain a rowtime attribute
> -
>
> Key: FLINK-8853
> URL: https://issues.apache.org/jira/browse/FLINK-8853
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.5.0
>Reporter: Fabian Hueske
>Assignee: Timo Walther
>Priority: Blocker
>
> Emitting a query result that contains a rowtime attribute fails with the 
> following exception:
> {code:java}
> Caused by: java.lang.ClassCastException: java.sql.Timestamp cannot be cast to 
> java.lang.Long
>     at 
> org.apache.flink.api.common.typeutils.base.LongSerializer.serialize(LongSerializer.java:27)
>     at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.serialize(RowSerializer.java:160)
>     at 
> org.apache.flink.api.java.typeutils.runtime.RowSerializer.serialize(RowSerializer.java:46)
>     at 
> org.apache.flink.api.java.typeutils.runtime.TupleSerializer.serialize(TupleSerializer.java:125)
>     at 
> org.apache.flink.api.java.typeutils.runtime.TupleSerializer.serialize(TupleSerializer.java:30)
>     at 
> org.apache.flink.streaming.experimental.CollectSink.invoke(CollectSink.java:66)
>     ... 44 more{code}
> The problem is caused by the {{ResultStore}}, which configures the 
> {{CollectSink}} with the field types obtained from the {{TableSchema}}. 
> The type of the rowtime field is a {{TimeIndicatorType}}, which is serialized 
> as a Long. However, in the query result it is represented as a Timestamp. Hence, 
> the type must be replaced by a {{SqlTimeTypeInfo}}.
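The failure mode can be reproduced outside Flink with a minimal sketch (the serializer method here is illustrative, not the real `LongSerializer`): a field declared as Long receives the `java.sql.Timestamp` that the result row actually carries.

```java
import java.sql.Timestamp;

public class TimeIndicatorCastDemo {

    // Stand-in for a serializer that expects a Long, as a field backed by a
    // TimeIndicatorType is declared.
    static long serializeAsLong(Object field) {
        return (Long) field; // a Timestamp cannot be cast to Long
    }

    public static void main(String[] args) {
        try {
            serializeAsLong(new Timestamp(0L));
        } catch (ClassCastException e) {
            // Mirrors the exception from the issue description.
            System.out.println("ClassCastException: " + e.getMessage());
        }
    }
}
```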





[jira] [Reopened] (FLINK-8807) ZookeeperCompleted checkpoint store can get stuck in infinite loop

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-8807:


> ZookeeperCompleted checkpoint store can get stuck in infinite loop
> --
>
> Key: FLINK-8807
> URL: https://issues.apache.org/jira/browse/FLINK-8807
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.5.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Blocker
> Fix For: 1.3.3, 1.5.0, 1.4.3
>
>
> This code: 
> https://github.com/apache/flink/blob/9071e3befb8c279f73c3094c9f6bddc0e7cce9e5/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStore.java#L201
>  can be stuck forever if at least one checkpoint is not readable because 
> {{CompletedCheckpoint}} does not have a proper {{equals()}}/{{hashCode()}} 
> anymore.
> We have to fix this and also add a unit test that verifies the loop still 
> works if we make one snapshot unreadable.
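A minimal sketch of why identity-only equality breaks such retry loops (the classes below are stand-ins, not the real {{CompletedCheckpoint}}): a freshly deserialized instance of the "same" checkpoint is never found by `contains()`, so a loop keyed on membership never terminates.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

public class CheckpointEqualsDemo {

    // Stand-in for CompletedCheckpoint without equals()/hashCode():
    // only identity equality applies.
    static class NoEquals {
        final long id;
        NoEquals(long id) { this.id = id; }
    }

    // The same value object with a proper equals()/hashCode().
    static class WithEquals {
        final long id;
        WithEquals(long id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof WithEquals && ((WithEquals) o).id == id;
        }
        @Override public int hashCode() { return Objects.hash(id); }
    }

    public static void main(String[] args) {
        List<NoEquals> a = new ArrayList<>();
        a.add(new NoEquals(1));
        // A re-read instance of the "same" checkpoint is never found,
        // so retry logic keyed on contains() would loop forever.
        System.out.println(a.contains(new NoEquals(1)));   // prints false

        List<WithEquals> b = new ArrayList<>();
        b.add(new WithEquals(1));
        System.out.println(b.contains(new WithEquals(1))); // prints true
    }
}
```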





[jira] [Closed] (FLINK-8807) ZookeeperCompleted checkpoint store can get stuck in infinite loop

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-8807.
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.4)
   1.3.3

> ZookeeperCompleted checkpoint store can get stuck in infinite loop
> --
>
> Key: FLINK-8807
> URL: https://issues.apache.org/jira/browse/FLINK-8807
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.5.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Blocker
> Fix For: 1.3.3, 1.5.0, 1.4.3
>
>
> This code: 
> https://github.com/apache/flink/blob/9071e3befb8c279f73c3094c9f6bddc0e7cce9e5/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStore.java#L201
>  can be stuck forever if at least one checkpoint is not readable because 
> {{CompletedCheckpoint}} does not have a proper {{equals()}}/{{hashCode()}} 
> anymore.
> We have to fix this and also add a unit test that verifies the loop still 
> works if we make one snapshot unreadable.





[jira] [Reopened] (FLINK-8807) ZookeeperCompleted checkpoint store can get stuck in infinite loop

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-8807:


> ZookeeperCompleted checkpoint store can get stuck in infinite loop
> --
>
> Key: FLINK-8807
> URL: https://issues.apache.org/jira/browse/FLINK-8807
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.5.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Blocker
> Fix For: 1.5.0, 1.4.3, 1.3.4
>
>
> This code: 
> https://github.com/apache/flink/blob/9071e3befb8c279f73c3094c9f6bddc0e7cce9e5/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStore.java#L201
>  can be stuck forever if at least one checkpoint is not readable because 
> {{CompletedCheckpoint}} does not have a proper {{equals()}}/{{hashCode()}} 
> anymore.
> We have to fix this and also add a unit test that verifies the loop still 
> works if we make one snapshot unreadable.





[jira] [Reopened] (FLINK-8807) ZookeeperCompleted checkpoint store can get stuck in infinite loop

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-8807:


> ZookeeperCompleted checkpoint store can get stuck in infinite loop
> --
>
> Key: FLINK-8807
> URL: https://issues.apache.org/jira/browse/FLINK-8807
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.5.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Blocker
> Fix For: 1.5.0, 1.4.3, 1.3.4
>
>
> This code: 
> https://github.com/apache/flink/blob/9071e3befb8c279f73c3094c9f6bddc0e7cce9e5/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStore.java#L201
>  can be stuck forever if at least one checkpoint is not readable because 
> {{CompletedCheckpoint}} does not have a proper {{equals()}}/{{hashCode()}} 
> anymore.
> We have to fix this and also add a unit test that verifies the loop still 
> works if we make one snapshot unreadable.





[jira] [Closed] (FLINK-7382) Broken links in `Apache Flink Documentation` page

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-7382.
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.3)
   1.3.4

> Broken links in `Apache Flink Documentation`  page
> --
>
> Key: FLINK-7382
> URL: https://issues.apache.org/jira/browse/FLINK-7382
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Hai Zhou UTC+8
>Assignee: Hai Zhou UTC+8
>Priority: Minor
> Fix For: 1.3.4, 1.4.0
>
>
> Some links in the *External Resources* section are broken.





[jira] [Closed] (FLINK-8807) ZookeeperCompleted checkpoint store can get stuck in infinite loop

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-8807.
--
   Resolution: Fixed
Fix Version/s: 1.4.3

> ZookeeperCompleted checkpoint store can get stuck in infinite loop
> --
>
> Key: FLINK-8807
> URL: https://issues.apache.org/jira/browse/FLINK-8807
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.5.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Blocker
> Fix For: 1.5.0, 1.4.3, 1.3.4
>
>
> This code: 
> https://github.com/apache/flink/blob/9071e3befb8c279f73c3094c9f6bddc0e7cce9e5/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStore.java#L201
>  can be stuck forever if at least one checkpoint is not readable because 
> {{CompletedCheckpoint}} does not have a proper {{equals()}}/{{hashCode()}} 
> anymore.
> We have to fix this and also add a unit test that verifies the loop still 
> works if we make one snapshot unreadable.





[jira] [Closed] (FLINK-8807) ZookeeperCompleted checkpoint store can get stuck in infinite loop

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-8807.
--
   Resolution: Fixed
Fix Version/s: 1.3.4

> ZookeeperCompleted checkpoint store can get stuck in infinite loop
> --
>
> Key: FLINK-8807
> URL: https://issues.apache.org/jira/browse/FLINK-8807
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.5.0
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Blocker
> Fix For: 1.5.0, 1.3.4
>
>
> This code: 
> https://github.com/apache/flink/blob/9071e3befb8c279f73c3094c9f6bddc0e7cce9e5/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/ZooKeeperCompletedCheckpointStore.java#L201
>  can be stuck forever if at least one checkpoint is not readable because 
> {{CompletedCheckpoint}} does not have a proper {{equals()}}/{{hashCode()}} 
> anymore.
> We have to fix this and also add a unit test that verifies the loop still 
> works if we make one snapshot unreadable.





[jira] [Reopened] (FLINK-7127) Remove unnecessary null check or add null check

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-7127:


> Remove unnecessary null check or add null check
> ---
>
> Key: FLINK-7127
> URL: https://issues.apache.org/jira/browse/FLINK-7127
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Reporter: Ufuk Celebi
>Assignee: Dmitrii Kniazev
>Priority: Trivial
>  Labels: starter
> Fix For: 1.4.0, 1.3.4
>
>
> In {{HeapKeyedStateBackend#snapshot}} we have:
> {code}
> for (Map.Entry> kvState : stateTables.entrySet()) 
> {
>   // 1) Here we don't check for null
>   metaInfoSnapshots.add(kvState.getValue().getMetaInfo().snapshot());
>   kVStateToId.put(kvState.getKey(), kVStateToId.size());
>   // 2) Here we check for null
>   StateTable stateTable = kvState.getValue();
>   if (null != stateTable) {
>   cowStateStableSnapshots.put(stateTable, 
> stateTable.createSnapshot());
>   }
> }
> {code}
> Either this can lead to an NPE, and we should add a null check in 1), or we 
> should remove the null check in 2).
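A sketch of option 2) with a single consistent guard, using stand-in types rather than the real {{HeapKeyedStateBackend}} internals: the null check is done once, and both uses of the value sit behind it.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConsistentNullCheck {

    // Stand-in for StateTable, just for illustration.
    static class StateTable {
        String createSnapshot() { return "snapshot"; }
    }

    // One guard instead of 1) and 2) disagreeing: skip null entries before
    // touching the value at all.
    static Map<String, String> snapshotAll(Map<String, StateTable> stateTables) {
        Map<String, String> snapshots = new LinkedHashMap<>();
        for (Map.Entry<String, StateTable> kvState : stateTables.entrySet()) {
            StateTable stateTable = kvState.getValue();
            if (stateTable == null) {
                continue; // single, consistent null check
            }
            snapshots.put(kvState.getKey(), stateTable.createSnapshot());
        }
        return snapshots;
    }

    public static void main(String[] args) {
        Map<String, StateTable> tables = new LinkedHashMap<>();
        tables.put("a", new StateTable());
        tables.put("b", null);
        System.out.println(snapshotAll(tables)); // prints {a=snapshot}
    }
}
```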





[jira] [Reopened] (FLINK-7382) Broken links in `Apache Flink Documentation` page

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-7382:


> Broken links in `Apache Flink Documentation`  page
> --
>
> Key: FLINK-7382
> URL: https://issues.apache.org/jira/browse/FLINK-7382
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Hai Zhou UTC+8
>Assignee: Hai Zhou UTC+8
>Priority: Minor
> Fix For: 1.4.0, 1.3.3
>
>
> Some links in the *External Resources* section are broken.





[jira] [Closed] (FLINK-7127) Remove unnecessary null check or add null check

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-7127.
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.3)
   1.3.4

> Remove unnecessary null check or add null check
> ---
>
> Key: FLINK-7127
> URL: https://issues.apache.org/jira/browse/FLINK-7127
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Reporter: Ufuk Celebi
>Assignee: Dmitrii Kniazev
>Priority: Trivial
>  Labels: starter
> Fix For: 1.3.4, 1.4.0
>
>
> In {{HeapKeyedStateBackend#snapshot}} we have:
> {code}
> for (Map.Entry> kvState : stateTables.entrySet()) 
> {
>   // 1) Here we don't check for null
>   metaInfoSnapshots.add(kvState.getValue().getMetaInfo().snapshot());
>   kVStateToId.put(kvState.getKey(), kVStateToId.size());
>   // 2) Here we check for null
>   StateTable stateTable = kvState.getValue();
>   if (null != stateTable) {
>   cowStateStableSnapshots.put(stateTable, 
> stateTable.createSnapshot());
>   }
> }
> {code}
> Either this can lead to an NPE, and we should add a null check in 1), or we 
> should remove the null check in 2).





[jira] [Closed] (FLINK-7670) typo in docs runtime section

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-7670.
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.3)
   1.3.4

> typo in docs runtime section
> 
>
> Key: FLINK-7670
> URL: https://issues.apache.org/jira/browse/FLINK-7670
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.3.2
>Reporter: Kewei SHANG
>Priority: Minor
> Fix For: 1.3.4
>
>
> The following link to the Savepoints page,
> [Savepoints](..//setup/savepoints.html)
> should be changed to
> [Savepoints](../setup/savepoints.html)





[jira] [Closed] (FLINK-7600) shorten delay of KinesisProducerConfiguration.setCredentialsRefreshDelay() to avoid updateCredentials Exception

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-7600.
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.3)
   1.3.4

> shorten delay of KinesisProducerConfiguration.setCredentialsRefreshDelay() to 
> avoid updateCredentials Exception
> ---
>
> Key: FLINK-7600
> URL: https://issues.apache.org/jira/browse/FLINK-7600
> Project: Flink
>  Issue Type: Bug
>  Components: Kinesis Connector
>Affects Versions: 1.3.2
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.3.4, 1.4.0
>
>
> We saw the following warning in the Flink log:
> {code:java}
> 2017-08-11 02:33:24,473 WARN  
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon
>   - Exception during updateCredentials
> java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon$5.run(Daemon.java:316)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> According to the discussion in 
> https://github.com/awslabs/amazon-kinesis-producer/issues/10, setting the 
> delay to 100 will fix this issue.
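The warning itself is shutdown noise: interrupting a thread that is sleeping through a long refresh delay raises exactly this InterruptedException. The minimal sketch below reproduces the mechanism with plain threads (no Kinesis dependency); a shorter delay simply narrows the window in which the interrupt lands mid-sleep.

```java
public class RefreshDelayDemo {

    public static void main(String[] args) throws Exception {
        // Stand-in for the credentials-refresh thread: it sleeps for the
        // configured refresh delay between updates.
        Thread refresher = new Thread(() -> {
            try {
                Thread.sleep(60_000); // long refresh delay
            } catch (InterruptedException e) {
                // Mirrors the "Exception during updateCredentials" warning.
                System.out.println("Exception during updateCredentials: "
                        + e.getMessage());
            }
        });
        refresher.start();
        Thread.sleep(100);
        refresher.interrupt(); // simulated producer shutdown
        refresher.join();
    }
}
```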





[jira] [Closed] (FLINK-7630) Allow passing a File or an InputStream to ParameterTool.fromPropertiesFile()

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-7630.
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.3)
   1.3.4

> Allow passing a File or an InputStream to ParameterTool.fromPropertiesFile()
> 
>
> Key: FLINK-7630
> URL: https://issues.apache.org/jira/browse/FLINK-7630
> Project: Flink
>  Issue Type: Improvement
>  Components: Java API
>Reporter: Wei-Che Wei
>Assignee: Wei-Che Wei
>Priority: Minor
> Fix For: 1.3.4, 1.4.0
>
>
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/How-to-user-ParameterTool-fromPropertiesFile-to-get-resource-file-inside-my-jar-tp15482.html
> From this discussion, it seems that the current functionality of 
> {{ParameterTool.fromPropertiesFile}} is not enough.
> It would be more useful if {{ParameterTool.fromPropertiesFile}} could accept 
> more kinds of parameter types, such as {{File}} and {{InputStream}}.
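Until such overloads exist, one workaround is to load the bundled properties with plain `java.util.Properties` from a stream and hand the resulting map to the tool. The sketch below shows only the standard-library part; the hand-off to `ParameterTool.fromMap` is assumed, not shown.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class PropertiesFromStream {

    // Reads properties from any InputStream, e.g. a resource inside a jar
    // obtained via getClass().getResourceAsStream(...).
    static Properties load(InputStream in) throws IOException {
        Properties props = new Properties();
        props.load(in);
        return props;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("parallelism=4\n".getBytes());
        System.out.println(load(in).getProperty("parallelism")); // prints 4
    }
}
```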





[jira] [Reopened] (FLINK-7670) typo in docs runtime section

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-7670:


> typo in docs runtime section
> 
>
> Key: FLINK-7670
> URL: https://issues.apache.org/jira/browse/FLINK-7670
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.3.2
>Reporter: Kewei SHANG
>Priority: Minor
> Fix For: 1.3.3
>
>
> The following link to the Savepoints page,
> [Savepoints](..//setup/savepoints.html)
> should be changed to
> [Savepoints](../setup/savepoints.html)





[jira] [Reopened] (FLINK-7630) Allow passing a File or an InputStream to ParameterTool.fromPropertiesFile()

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-7630:


> Allow passing a File or an InputStream to ParameterTool.fromPropertiesFile()
> 
>
> Key: FLINK-7630
> URL: https://issues.apache.org/jira/browse/FLINK-7630
> Project: Flink
>  Issue Type: Improvement
>  Components: Java API
>Reporter: Wei-Che Wei
>Assignee: Wei-Che Wei
>Priority: Minor
> Fix For: 1.4.0, 1.3.4
>
>
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/How-to-user-ParameterTool-fromPropertiesFile-to-get-resource-file-inside-my-jar-tp15482.html
> From this discussion, it seems that the current functionality of 
> {{ParameterTool.fromPropertiesFile}} is not enough.
> It would be more useful if {{ParameterTool.fromPropertiesFile}} could accept 
> more kinds of parameter types, such as {{File}} and {{InputStream}}.





[jira] [Reopened] (FLINK-7600) shorten delay of KinesisProducerConfiguration.setCredentialsRefreshDelay() to avoid updateCredentials Exception

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-7600:


> shorten delay of KinesisProducerConfiguration.setCredentialsRefreshDelay() to 
> avoid updateCredentials Exception
> ---
>
> Key: FLINK-7600
> URL: https://issues.apache.org/jira/browse/FLINK-7600
> Project: Flink
>  Issue Type: Bug
>  Components: Kinesis Connector
>Affects Versions: 1.3.2
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.4.0, 1.3.3
>
>
> We saw the following warning in the Flink log:
> {code:java}
> 2017-08-11 02:33:24,473 WARN  
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon
>   - Exception during updateCredentials
> java.lang.InterruptedException: sleep interrupted
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.Daemon$5.run(Daemon.java:316)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> According to the discussion in 
> https://github.com/awslabs/amazon-kinesis-producer/issues/10, setting the 
> delay to 100 will fix this issue.





[jira] [Closed] (FLINK-7495) AbstractUdfStreamOperator#initializeState() should be called in AsyncWaitOperator#initializeState()

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-7495.
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.3)
   1.3.4

> AbstractUdfStreamOperator#initializeState() should be called in 
> AsyncWaitOperator#initializeState()
> ---
>
> Key: FLINK-7495
> URL: https://issues.apache.org/jira/browse/FLINK-7495
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Reporter: Ted Yu
>Assignee: Fang Yong
>Priority: Minor
> Fix For: 1.3.4, 1.4.0
>
>
> {code}
> recoveredStreamElements = context
>   .getOperatorStateStore()
>   .getListState(new ListStateDescriptor<>(STATE_NAME, 
> inStreamElementSerializer));
> {code}
> A call to {{AbstractUdfStreamOperator#initializeState()}} should be added at the 
> beginning.
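The shape of the fix can be sketched with stand-in classes (not the real operators): the subclass's `initializeState()` delegates to the superclass before restoring its own state, so the wrapped user function is set up first.

```java
public class InitializeStateOrder {

    // Stand-in for AbstractUdfStreamOperator: its initializeState() prepares
    // the wrapped user function.
    static class AbstractUdfStreamOperator {
        boolean udfInitialized = false;
        void initializeState() { udfInitialized = true; }
    }

    // Stand-in for AsyncWaitOperator with the missing super call added.
    static class AsyncWaitOperator extends AbstractUdfStreamOperator {
        @Override
        void initializeState() {
            super.initializeState(); // the call the issue asks for, first
            // ... then restore recoveredStreamElements from the operator state
        }
    }

    public static void main(String[] args) {
        AsyncWaitOperator op = new AsyncWaitOperator();
        op.initializeState();
        System.out.println(op.udfInitialized); // prints true
    }
}
```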





[jira] [Resolved] (FLINK-8687) MaterializedCollectStreamResult#retrievePage should take resultLock

2018-03-12 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-8687.
-
   Resolution: Fixed
Fix Version/s: 1.6.0
   1.5.0

Fixed in 1.6.0: 7d837a3e884eba9937fb4b14fd9c76e8895d5703
Fixed in 1.5.0: 3a3caac9ff3b27fe9ad5b9868eba8e0ec44fdb9c

> MaterializedCollectStreamResult#retrievePage should take resultLock
> ---
>
> Key: FLINK-8687
> URL: https://issues.apache.org/jira/browse/FLINK-8687
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Major
> Fix For: 1.5.0, 1.6.0
>
>
> Currently MaterializedCollectStreamResult#retrievePage checks page range and 
> calls snapshot.subList() without holding resultLock.
> {{resultLock}} should be taken.
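The race described above can be sketched in isolation. This is a minimal illustration of the proposed locking discipline, assuming the real class guards a shared snapshot list with a `resultLock` monitor; it follows the names in the issue but is not the actual Flink code.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: both the range check and the subList() copy happen
// while holding resultLock, so a concurrent writer cannot shrink the
// snapshot between the two steps.
public class MaterializedResultSketch {
    private final Object resultLock = new Object();
    private final List<Integer> snapshot = new ArrayList<>();
    private final int pageSize = 2;

    void add(int row) {
        synchronized (resultLock) {
            snapshot.add(row);
        }
    }

    List<Integer> retrievePage(int page) {
        synchronized (resultLock) {
            int from = (page - 1) * pageSize;
            int to = Math.min(from + pageSize, snapshot.size());
            if (page < 1 || from > to) {
                throw new IllegalArgumentException("Invalid page: " + page);
            }
            // Copy out under the lock so callers never alias the
            // mutable snapshot.
            return new ArrayList<>(snapshot.subList(from, to));
        }
    }

    public static void main(String[] args) {
        MaterializedResultSketch r = new MaterializedResultSketch();
        r.add(1); r.add(2); r.add(3);
        System.out.println(r.retrievePage(1));
    }
}
```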





[jira] [Reopened] (FLINK-7495) AbstractUdfStreamOperator#initializeState() should be called in AsyncWaitOperator#initializeState()

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-7495:


> AbstractUdfStreamOperator#initializeState() should be called in 
> AsyncWaitOperator#initializeState()
> ---
>
> Key: FLINK-7495
> URL: https://issues.apache.org/jira/browse/FLINK-7495
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Reporter: Ted Yu
>Assignee: Fang Yong
>Priority: Minor
> Fix For: 1.4.0, 1.3.3
>
>
> {code}
> recoveredStreamElements = context
>   .getOperatorStateStore()
>   .getListState(new ListStateDescriptor<>(STATE_NAME, 
> inStreamElementSerializer));
> {code}
> A call to AbstractUdfStreamOperator#initializeState() should be added at the 
> beginning of AsyncWaitOperator#initializeState()





[jira] [Commented] (FLINK-8687) MaterializedCollectStreamResult#retrievePage should take resultLock

2018-03-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395412#comment-16395412
 ] 

ASF GitHub Bot commented on FLINK-8687:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5647


> MaterializedCollectStreamResult#retrievePage should take resultLock
> ---
>
> Key: FLINK-8687
> URL: https://issues.apache.org/jira/browse/FLINK-8687
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Major
>
> Currently MaterializedCollectStreamResult#retrievePage checks page range and 
> calls snapshot.subList() without holding resultLock.
> {{resultLock}} should be taken.





[jira] [Closed] (FLINK-7453) FlinkKinesisProducer logs empty aws region

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-7453.
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.3)
   1.3.4

> FlinkKinesisProducer logs empty aws region
> --
>
> Key: FLINK-7453
> URL: https://issues.apache.org/jira/browse/FLINK-7453
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.3.4, 1.4.0
>
>
> I saw the following logs in my taskmanager.log
> {code:java}
> 2017-08-16 04:28:58,068 INFO  
> org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer  - Started 
> Kinesis producer instance for region ''
> 2017-08-16 04:28:58,708 INFO  
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.KinesisProducer
>   - Extracting binaries to /tmp/amazon-kinesis-producer-native-binaries
> 2017-08-16 04:28:58,712 INFO  
> org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer  - Started 
> Kinesis producer instance for region ''
> 2017-08-16 04:28:59,305 INFO  
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.KinesisProducer
>   - Extracting binaries to /tmp/amazon-kinesis-producer-native-binaries
> 2017-08-16 04:28:59,309 INFO  
> org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer  - Started 
> Kinesis producer instance for region ''
> 2017-08-16 04:28:59,898 INFO  
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.KinesisProducer
>   - Extracting binaries to /tmp/amazon-kinesis-producer-native-binaries
> {code}
> I need to figure out why this happens first, and then propose a fix.
> cc [~tzulitai]





[jira] [Resolved] (FLINK-7405) Reduce spamming warning logging from DatadogHttpReporter

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai resolved FLINK-7405.

   Resolution: Fixed
Fix Version/s: (was: 1.3.3)
   1.3.4

> Reduce spamming warning logging from DatadogHttpReporter
> 
>
> Key: FLINK-7405
> URL: https://issues.apache.org/jira/browse/FLINK-7405
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.3.2
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.3.4, 1.4.0
>
>
> DatadogHttpReporter is logging too much when there's a connection timeout, 
> and we need to reduce the amount of logging noise.
> The excessive logging looks like:
> {code:java}
> 2017-08-07 19:30:54,408 WARN  
> org.apache.flink.metrics.datadog.DatadogHttpReporter  - Failed 
> reporting metrics to Datadog.
> java.net.SocketTimeoutException: timeout
>   at 
> org.apache.flink.shaded.okio.Okio$4.newTimeoutException(Okio.java:227)
>   at org.apache.flink.shaded.okio.AsyncTimeout.exit(AsyncTimeout.java:284)
>   at 
> org.apache.flink.shaded.okio.AsyncTimeout$2.read(AsyncTimeout.java:240)
>   at 
> org.apache.flink.shaded.okio.RealBufferedSource.indexOf(RealBufferedSource.java:344)
>   at 
> org.apache.flink.shaded.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:216)
>   at 
> org.apache.flink.shaded.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:210)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http1.Http1Codec.readResponseHeaders(Http1Codec.java:189)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.java:75)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>   at 
> org.apache.flink.shaded.okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:45)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
>   at 
> org.apache.flink.shaded.okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:120)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
>   at 
> org.apache.flink.shaded.okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:185)
>   at org.apache.flink.shaded.okhttp3.RealCall.execute(RealCall.java:69)
>   at 
> org.apache.flink.metrics.datadog.DatadogHttpClient.send(DatadogHttpClient.java:85)
>   at 
> org.apache.flink.metrics.datadog.DatadogHttpReporter.report(DatadogHttpReporter.java:142)
>   at 
> org.apache.flink.runtime.metrics.MetricRegistry$ReporterTask.run(MetricRegistry.java:381)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.SocketException: Socket closed
>   at java.net.SocketInputStream.read(SocketInputStream.java:204)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
>   at sun.security.ssl.InputRecord.read(InputRecord.java:503)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
>   at 

[jira] [Closed] (FLINK-7454) update 'Monitoring Current Event Time' section of Flink doc

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-7454.
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.3)
   1.3.4

> update 'Monitoring Current Event Time' section of Flink doc
> ---
>
> Key: FLINK-7454
> URL: https://issues.apache.org/jira/browse/FLINK-7454
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.3.2
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.3.4, 1.4.0
>
>
> Since FLINK-3427 is done, there's no need to have the following doc in 
> https://ci.apache.org/projects/flink/flink-docs-release-1.3/monitoring/debugging_event_time.html#monitoring-current-event-time
> "There are plans (see FLINK-3427) to show the current low watermark for each 
> operator in the Flink web interface.
> Until this feature is implemented the current low watermark for each task can 
> be accessed through the metrics system."
> We can replace it with something like "Low watermarks of each task can be 
> accessed either from Flink web interface or Flink metric system."





[jira] [Reopened] (FLINK-7453) FlinkKinesisProducer logs empty aws region

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-7453:


> FlinkKinesisProducer logs empty aws region
> --
>
> Key: FLINK-7453
> URL: https://issues.apache.org/jira/browse/FLINK-7453
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.4.0, 1.3.4
>
>
> I saw the following logs in my taskmanager.log
> {code:java}
> 2017-08-16 04:28:58,068 INFO  
> org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer  - Started 
> Kinesis producer instance for region ''
> 2017-08-16 04:28:58,708 INFO  
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.KinesisProducer
>   - Extracting binaries to /tmp/amazon-kinesis-producer-native-binaries
> 2017-08-16 04:28:58,712 INFO  
> org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer  - Started 
> Kinesis producer instance for region ''
> 2017-08-16 04:28:59,305 INFO  
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.KinesisProducer
>   - Extracting binaries to /tmp/amazon-kinesis-producer-native-binaries
> 2017-08-16 04:28:59,309 INFO  
> org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer  - Started 
> Kinesis producer instance for region ''
> 2017-08-16 04:28:59,898 INFO  
> org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.KinesisProducer
>   - Extracting binaries to /tmp/amazon-kinesis-producer-native-binaries
> {code}
> I need to figure out why this happens first, and then propose a fix.
> cc [~tzulitai]





[jira] [Reopened] (FLINK-7454) update 'Monitoring Current Event Time' section of Flink doc

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-7454:


> update 'Monitoring Current Event Time' section of Flink doc
> ---
>
> Key: FLINK-7454
> URL: https://issues.apache.org/jira/browse/FLINK-7454
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.3.2
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.4.0, 1.3.4
>
>
> Since FLINK-3427 is done, there's no need to have the following doc in 
> https://ci.apache.org/projects/flink/flink-docs-release-1.3/monitoring/debugging_event_time.html#monitoring-current-event-time
> "There are plans (see FLINK-3427) to show the current low watermark for each 
> operator in the Flink web interface.
> Until this feature is implemented the current low watermark for each task can 
> be accessed through the metrics system."
> We can replace it with something like "Low watermarks of each task can be 
> accessed either from Flink web interface or Flink metric system."





[GitHub] flink pull request #5647: [FLINK-8687] Make MaterializedCollectStreamResult#...

2018-03-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/5647


---


[jira] [Reopened] (FLINK-6549) Improve error message for type mismatches with side outputs

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-6549:


> Improve error message for type mismatches with side outputs
> ---
>
> Key: FLINK-6549
> URL: https://issues.apache.org/jira/browse/FLINK-6549
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.4.0, 1.3.4
>
>
> A type mismatch when using side outputs causes a ClassCastException to be 
> thrown. It would be neat to include the name of the OutputTags in the 
> exception message.
> This can occur when multiple {{OutputTag}}s with different types but 
> identical names are being used.





[jira] [Reopened] (FLINK-7405) Reduce spamming warning logging from DatadogHttpReporter

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-7405:


> Reduce spamming warning logging from DatadogHttpReporter
> 
>
> Key: FLINK-7405
> URL: https://issues.apache.org/jira/browse/FLINK-7405
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.3.2
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.4.0, 1.3.3
>
>
> DatadogHttpReporter is logging too much when there's a connection timeout, 
> and we need to reduce the amount of logging noise.
> The excessive logging looks like:
> {code:java}
> 2017-08-07 19:30:54,408 WARN  
> org.apache.flink.metrics.datadog.DatadogHttpReporter  - Failed 
> reporting metrics to Datadog.
> java.net.SocketTimeoutException: timeout
>   at 
> org.apache.flink.shaded.okio.Okio$4.newTimeoutException(Okio.java:227)
>   at org.apache.flink.shaded.okio.AsyncTimeout.exit(AsyncTimeout.java:284)
>   at 
> org.apache.flink.shaded.okio.AsyncTimeout$2.read(AsyncTimeout.java:240)
>   at 
> org.apache.flink.shaded.okio.RealBufferedSource.indexOf(RealBufferedSource.java:344)
>   at 
> org.apache.flink.shaded.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:216)
>   at 
> org.apache.flink.shaded.okio.RealBufferedSource.readUtf8LineStrict(RealBufferedSource.java:210)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http1.Http1Codec.readResponseHeaders(Http1Codec.java:189)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.java:75)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>   at 
> org.apache.flink.shaded.okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:45)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
>   at 
> org.apache.flink.shaded.okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:120)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
>   at 
> org.apache.flink.shaded.okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
>   at 
> org.apache.flink.shaded.okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:185)
>   at org.apache.flink.shaded.okhttp3.RealCall.execute(RealCall.java:69)
>   at 
> org.apache.flink.metrics.datadog.DatadogHttpClient.send(DatadogHttpClient.java:85)
>   at 
> org.apache.flink.metrics.datadog.DatadogHttpReporter.report(DatadogHttpReporter.java:142)
>   at 
> org.apache.flink.runtime.metrics.MetricRegistry$ReporterTask.run(MetricRegistry.java:381)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.SocketException: Socket closed
>   at java.net.SocketInputStream.read(SocketInputStream.java:204)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
>   at sun.security.ssl.InputRecord.read(InputRecord.java:503)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
>   at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:940)
>   at 

[jira] [Reopened] (FLINK-6493) Ineffective null check in RegisteredOperatorBackendStateMetaInfo#equals()

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-6493:


> Ineffective null check in RegisteredOperatorBackendStateMetaInfo#equals()
> -
>
> Key: FLINK-6493
> URL: https://issues.apache.org/jira/browse/FLINK-6493
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
> Fix For: 1.4.0, 1.3.4
>
>
> {code}
> && ((partitionStateSerializer == null && ((Snapshot) 
> obj).getPartitionStateSerializer() == null)
>   || partitionStateSerializer.equals(((Snapshot) 
> obj).getPartitionStateSerializer()))
> && ((partitionStateSerializerConfigSnapshot == null && ((Snapshot) 
> obj).getPartitionStateSerializerConfigSnapshot() == null)
>   || partitionStateSerializerConfigSnapshot.equals(((Snapshot) 
> obj).getPartitionStateSerializerConfigSnapshot()));
> {code}
> The null check for partitionStateSerializer / 
> partitionStateSerializerConfigSnapshot only covers the case where both values 
> are null; if only the left-hand value is null, evaluation falls through to the 
> partitionStateSerializer.equals() call and throws an NPE.
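The failure mode can be reproduced with plain objects. The sketch below mirrors the quoted pattern with generic `Object` placeholders (the real code compares Flink serializers) and shows the standard null-safe fix, `java.util.Objects.equals`:

```java
import java.util.Objects;

// Sketch of why the quoted check is ineffective and how a null-safe
// comparison fixes it. Plain Objects stand in for Flink's serializer
// and config-snapshot types.
public class NullCheckSketch {
    static boolean brokenEquals(Object a, Object b) {
        // Mirrors the quoted pattern: if a is null but b is not, the
        // first clause is false and evaluation falls through to
        // a.equals(b), throwing a NullPointerException.
        return (a == null && b == null) || a.equals(b);
    }

    static boolean fixedEquals(Object a, Object b) {
        // Objects.equals handles all four null combinations safely.
        return Objects.equals(a, b);
    }

    public static void main(String[] args) {
        System.out.println(fixedEquals(null, "x")); // false, no NPE
        try {
            brokenEquals(null, "x");
        } catch (NullPointerException e) {
            System.out.println("broken check threw NPE");
        }
    }
}
```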





[jira] [Closed] (FLINK-6493) Ineffective null check in RegisteredOperatorBackendStateMetaInfo#equals()

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-6493.
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.3)
   1.3.4

> Ineffective null check in RegisteredOperatorBackendStateMetaInfo#equals()
> -
>
> Key: FLINK-6493
> URL: https://issues.apache.org/jira/browse/FLINK-6493
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Reporter: Ted Yu
>Assignee: mingleizhang
>Priority: Minor
> Fix For: 1.3.4, 1.4.0
>
>
> {code}
> && ((partitionStateSerializer == null && ((Snapshot) 
> obj).getPartitionStateSerializer() == null)
>   || partitionStateSerializer.equals(((Snapshot) 
> obj).getPartitionStateSerializer()))
> && ((partitionStateSerializerConfigSnapshot == null && ((Snapshot) 
> obj).getPartitionStateSerializerConfigSnapshot() == null)
>   || partitionStateSerializerConfigSnapshot.equals(((Snapshot) 
> obj).getPartitionStateSerializerConfigSnapshot()));
> {code}
> The null check for partitionStateSerializer / 
> partitionStateSerializerConfigSnapshot only covers the case where both values 
> are null; if only the left-hand value is null, evaluation falls through to the 
> partitionStateSerializer.equals() call and throws an NPE.





[jira] [Closed] (FLINK-6549) Improve error message for type mismatches with side outputs

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-6549.
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.3)
   1.3.4

> Improve error message for type mismatches with side outputs
> ---
>
> Key: FLINK-6549
> URL: https://issues.apache.org/jira/browse/FLINK-6549
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.3.4, 1.4.0
>
>
> A type mismatch when using side outputs causes a ClassCastException to be 
> thrown. It would be neat to include the name of the OutputTags in the 
> exception message.
> This can occur when multiple {{OutputTag}}s with different types but 
> identical names are being used.





[jira] [Updated] (FLINK-8416) Kinesis consumer doc examples should demonstrate preferred default credentials provider

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-8416:
---
Fix Version/s: (was: 1.3.3)
   1.3.4

> Kinesis consumer doc examples should demonstrate preferred default 
> credentials provider
> ---
>
> Key: FLINK-8416
> URL: https://issues.apache.org/jira/browse/FLINK-8416
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Kinesis Connector
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Major
> Fix For: 1.5.0, 1.4.3, 1.3.4
>
>
> The Kinesis consumer docs 
> [here](https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/connectors/kinesis.html#kinesis-consumer)
>  demonstrate providing credentials by explicitly supplying the AWS Access ID 
> and Key.
> Unless running locally, the preferred approach for AWS is to automatically 
> fetch credentials from the AWS environment (the default credentials provider 
> chain). That is actually the default behaviour of the Kinesis consumer, so the 
> docs should demonstrate it more clearly.
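The doc change being asked for amounts to a configuration example like the following sketch: only the region is set, and no access key or secret key appears, so the connector falls back to picking up credentials from the environment. The property key "aws.region" is assumed here to follow the connector's AWSConfigConstants; treat the exact key name as an assumption, not a confirmed API detail.

```java
import java.util.Properties;

// Sketch of the suggested documentation example: configure only the
// region and rely on the default AWS credentials provider chain
// (instance profile, environment variables, etc.).
public class KinesisConsumerConfigSketch {
    public static Properties defaultCredentialsConfig() {
        Properties config = new Properties();
        // The region is still required...
        config.setProperty("aws.region", "us-east-1");
        // ...but no access key / secret key is set: the consumer then
        // uses the AWS default credentials provider chain.
        return config;
    }

    public static void main(String[] args) {
        Properties p = defaultCredentialsConfig();
        System.out.println(p.getProperty("aws.region"));
    }
}
```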





[jira] [Closed] (FLINK-7971) Fix potential NPE with inconsistent state

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-7971.
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.3)
   1.3.4

> Fix potential NPE with inconsistent state
> -
>
> Key: FLINK-7971
> URL: https://issues.apache.org/jira/browse/FLINK-7971
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Ruidong Li
>Assignee: Ruidong Li
>Priority: Major
> Fix For: 1.5.0, 1.3.4, 1.4.0
>
>
> In {{GroupAggProcessFunction}}, {{state}} and {{cntState}} can become 
> inconsistent, which may cause an NPE when {{state}} is not null but 
> {{cntState}} is null.
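The invariant at stake can be sketched with plain fields. This is an illustration only, assuming (as the issue states) that two pieces of keyed state must stay in lock-step; the `String`/`Long` fields stand in for Flink's `ValueState` handles and this is not the actual `GroupAggProcessFunction` code.

```java
// Sketch of keeping two related pieces of state consistent: always
// write both in the same step, and guard reads against the
// inconsistent case instead of assuming cntState is non-null
// whenever state is.
public class ConsistentStateSketch {
    private String state;   // stand-in for ValueState<Row>
    private Long cntState;  // stand-in for ValueState<Long>

    void update(String acc, long cnt) {
        // Both states are updated together so they cannot diverge.
        this.state = acc;
        this.cntState = cnt;
    }

    long safeCount() {
        // Defensive read: tolerate the inconsistent case rather than
        // dereferencing a possibly-null cntState.
        if (state != null && cntState != null) {
            return cntState;
        }
        return 0L;
    }

    public static void main(String[] args) {
        ConsistentStateSketch s = new ConsistentStateSketch();
        s.update("acc", 5L);
        System.out.println(s.safeCount());
    }
}
```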





[jira] [Updated] (FLINK-8410) Kafka consumer's commitedOffsets gauge metric is prematurely set

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-8410:
---
Fix Version/s: (was: 1.3.3)
   1.3.4

> Kafka consumer's commitedOffsets gauge metric is prematurely set
> 
>
> Key: FLINK-8410
> URL: https://issues.apache.org/jira/browse/FLINK-8410
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector, Metrics
>Affects Versions: 1.4.0, 1.3.2, 1.5.0
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Major
> Fix For: 1.5.0, 1.4.3, 1.3.4
>
>
> The {{committedOffset}} metric gauge value is set too early. It is set here:
> https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka09Fetcher.java#L236
> This sets the committed offset before the actual commit happens, whose timing 
> varies depending on whether the commit mode is periodic auto-commit or 
> commit-on-checkpoint. Moreover, in the 0.9+ consumers, the 
> {{KafkaConsumerThread}} may choose to supersede some commit attempts if a 
> commit takes longer than the commit interval.
> While the offset committed back to Kafka is not a critical value used by the 
> consumer, it would be best to have more accurate values for a Flink-shipped 
> metric.
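The direction of the fix can be sketched generically: update the gauge from the commit acknowledgement rather than before the commit is issued. The `OffsetCommitter` interface below is hypothetical, introduced only to illustrate the callback timing; it is not Flink's or Kafka's API.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: set the committed-offset gauge only once the commit is
// acknowledged, not when the commit attempt is merely scheduled.
public class CommitGaugeSketch {
    interface OffsetCommitter {
        void commit(long offset, Runnable onSuccess);
    }

    public static void main(String[] args) {
        AtomicLong committedOffsetGauge = new AtomicLong(-1);
        // A committer that "succeeds" synchronously, for illustration.
        OffsetCommitter committer = (offset, onSuccess) -> onSuccess.run();

        long offsetToCommit = 42L;
        // Premature (the reported bug) would be:
        //   committedOffsetGauge.set(offsetToCommit);  // before the commit!
        // Correct: update the gauge in the success callback.
        committer.commit(offsetToCommit,
                () -> committedOffsetGauge.set(offsetToCommit));
        System.out.println(committedOffsetGauge.get());
    }
}
```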





[jira] [Reopened] (FLINK-7971) Fix potential NPE with inconsistent state

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-7971:


> Fix potential NPE with inconsistent state
> -
>
> Key: FLINK-7971
> URL: https://issues.apache.org/jira/browse/FLINK-7971
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Ruidong Li
>Assignee: Ruidong Li
>Priority: Major
> Fix For: 1.4.0, 1.5.0, 1.3.4
>
>
> In {{GroupAggProcessFunction}}, {{state}} and {{cntState}} can become 
> inconsistent, which may cause an NPE when {{state}} is not null but 
> {{cntState}} is null.





[jira] [Assigned] (FLINK-8783) Test instability SlotPoolRpcTest.testExtraSlotsAreKept

2018-03-12 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann reassigned FLINK-8783:


Assignee: Till Rohrmann

> Test instability SlotPoolRpcTest.testExtraSlotsAreKept
> --
>
> Key: FLINK-8783
> URL: https://issues.apache.org/jira/browse/FLINK-8783
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.5.0
>Reporter: Gary Yao
>Assignee: Till Rohrmann
>Priority: Blocker
> Fix For: 1.5.0
>
>
> [https://travis-ci.org/GJL/flink/jobs/346206290]
>  {noformat}
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.784 sec <<< 
> FAILURE! - in org.apache.flink.runtime.jobmaster.slotpool.SlotPoolRpcTest
> testExtraSlotsAreKept(org.apache.flink.runtime.jobmaster.slotpool.SlotPoolRpcTest)
>  Time elapsed: 0.016 sec <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:<1>
>  at org.junit.Assert.fail(Assert.java:88)
>  at org.junit.Assert.failNotEquals(Assert.java:834)
>  at org.junit.Assert.assertEquals(Assert.java:645)
>  at org.junit.Assert.assertEquals(Assert.java:631)
>  at 
> org.apache.flink.runtime.jobmaster.slotpool.SlotPoolRpcTest.testExtraSlotsAreKept(SlotPoolRpcTest.java:267)
> {noformat}
> I reproduced this in IntelliJ by configuring 50 consecutive runs of 
> {{testExtraSlotsAreKept}}. On my machine the 8th execution fails sporadically.
> commit: eeac022f0538e0979e6ad4eb06a2d1031cbd0146





[jira] [Closed] (FLINK-7939) DataStream of atomic type cannot be converted to Table with time attributes

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-7939.
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.3)
   1.3.4

> DataStream of atomic type cannot be converted to Table with time attributes
> ---
>
> Key: FLINK-7939
> URL: https://issues.apache.org/jira/browse/FLINK-7939
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0, 1.3.3
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>Priority: Major
> Fix For: 1.3.4, 1.4.0
>
>
> A DataStream of an atomic type, such as {{DataStream<String>}} or 
> {{DataStream<Long>}}, cannot be converted into a {{Table}} with a time 
> attribute.
> {code}
> DataStream<String> stream = ...
> Table table = tEnv.fromDataStream(stream, "string, rowtime.rowtime")
> {code}
> yields
> {code}
> Exception in thread "main" org.apache.flink.table.api.TableException: Field 
> reference expression requested.
> at 
> org.apache.flink.table.api.TableEnvironment$$anonfun$1.apply(TableEnvironment.scala:630)
> at 
> org.apache.flink.table.api.TableEnvironment$$anonfun$1.apply(TableEnvironment.scala:624)
> at 
> scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
> at 
> scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
> at 
> scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
> at scala.collection.mutable.ArrayOps$ofRef.flatMap(ArrayOps.scala:186)
> at 
> org.apache.flink.table.api.TableEnvironment.getFieldInfo(TableEnvironment.scala:624)
> at 
> org.apache.flink.table.api.StreamTableEnvironment.registerDataStreamInternal(StreamTableEnvironment.scala:398)
> at 
> org.apache.flink.table.api.scala.StreamTableEnvironment.fromDataStream(StreamTableEnvironment.scala:85)
> {code}
> As a workaround the atomic type can be wrapped in {{Tuple1}}, i.e., convert a 
> {{DataStream<String>}} into a {{DataStream<Tuple1<String>>}}.
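The wrapping workaround can be sketched without Flink. The `Tuple1` class below is a local stand-in for `org.apache.flink.api.java.tuple.Tuple1` (only the public `f0` field is mimicked), and `wrap` plays the role of the map function one would apply to the stream before `tEnv.fromDataStream(...)`:

```java
// Sketch of the workaround: wrap the atomic value in a one-field
// tuple so there is a composite type for the extra time attribute to
// attach to.
public class Tuple1WorkaroundSketch {
    // Local stand-in for Flink's Tuple1, not the real class.
    static final class Tuple1<T> {
        public T f0;
        Tuple1(T value) { this.f0 = value; }
    }

    // Mimics stream.map(v -> new Tuple1<>(v)) applied element-wise.
    static <T> Tuple1<T> wrap(T value) {
        return new Tuple1<>(value);
    }

    public static void main(String[] args) {
        Tuple1<String> wrapped = wrap("hello");
        System.out.println(wrapped.f0);
    }
}
```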





[jira] [Closed] (FLINK-7922) leastRestrictive in FlinkTypeFactory does not resolve composite type correctly

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-7922.
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.3)
   1.3.4

> leastRestrictive in FlinkTypeFactory does not resolve composite type correctly
> --
>
> Key: FLINK-7922
> URL: https://issues.apache.org/jira/browse/FLINK-7922
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: Rong Rong
>Assignee: Rong Rong
>Priority: Major
> Fix For: 1.5.0, 1.3.4, 1.4.0
>
>
> FlinkTypeFactory does not override the following function correctly:
> {code:java}
> def leastRestrictive(types: util.List[RelDataType]): RelDataType = {
>   //... 
> }
> {code}
> dealing with SQL such as:
> {code:sql}
> CASE
>   WHEN <condition> THEN
>     <composite-type-expression>
>   ELSE
>     NULL
> END
> {code}
> will trigger a runtime exception.
> See the following test sample for more details:
> https://github.com/walterddr/flink/commit/a5f2affc9bbbd50f06200f099c90597e519e9170
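A toy illustration of what "least restrictive" type resolution means, using plain Java classes instead of Calcite's {{RelDataType}} (this is not Flink's {{FlinkTypeFactory}}; all names here are hypothetical): given several operand types, such as the THEN/ELSE branches of a SQL CASE, pick the most specific common supertype. The reported bug is that this resolution mishandles composite (row-like) types.

```java
import java.util.Arrays;
import java.util.List;

public class LeastRestrictive {
    // Walk up the superclass chain of the first type until every other
    // type is assignable to the candidate; that candidate is the least
    // restrictive common type.
    static Class<?> leastRestrictive(List<Class<?>> types) {
        Class<?> candidate = types.get(0);
        while (candidate != null) {
            final Class<?> c = candidate;
            if (types.stream().allMatch(c::isAssignableFrom)) {
                return c;
            }
            candidate = candidate.getSuperclass();
        }
        return Object.class;
    }

    public static void main(String[] args) {
        // Integer and Long resolve to their common supertype Number,
        // just as CASE branches must resolve to one result type.
        System.out.println(leastRestrictive(Arrays.asList(Integer.class, Long.class)));
    }
}
```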





[jira] [Closed] (FLINK-7764) FlinkKafkaProducer010 does not accept name, uid, or parallelism

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-7764.
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.3)
   1.3.4

> FlinkKafkaProducer010 does not accept name, uid, or parallelism
> ---
>
> Key: FLINK-7764
> URL: https://issues.apache.org/jira/browse/FLINK-7764
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Fabian Hueske
>Assignee: Xingcan Cui
>Priority: Major
> Fix For: 1.3.4, 1.4.0
>
>
> As [reported on the user 
> list|https://lists.apache.org/thread.html/1e97b79cb5611a942beb609a61b5fba6d62a49c9c86336718a9a0004@%3Cuser.flink.apache.org%3E]:
> When I try to use KafkaProducer with timestamps it fails to set name, uid or 
> parallelism. It uses default values.
> {code}
> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration producer = 
> FlinkKafkaProducer010
> .writeToKafkaWithTimestamps(stream, topicName, schema, props, 
> partitioner);
> producer.setFlushOnCheckpoint(flushOnCheckpoint);
> producer.name("foo")
> .uid("bar")
> .setParallelism(5);
> return producer;
> {code}
> As operator name it shows "FlinKafkaProducer 0.10.x" with the typo.





[jira] [Updated] (FLINK-7932) Best Practices docs recommend passing parameters through open(Configuration c)

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-7932:
---
Fix Version/s: (was: 1.3.3)
   1.3.4

> Best Practices docs recommend passing parameters through open(Configuration c)
> --
>
> Key: FLINK-7932
> URL: https://issues.apache.org/jira/browse/FLINK-7932
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.3.2
>Reporter: Fabian Hueske
>Priority: Major
> Fix For: 1.3.4
>
>
> The [Best 
> Practices|https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/best_practices.html]
>  docs recommend using {{Configuration}} to pass parameters to user functions.
> This does not work for DataStream programs and is not recommended anymore. 
> The "Best Practices" page should be reworked.





[jira] [Reopened] (FLINK-7922) leastRestrictive in FlinkTypeFactory does not resolve composite type correctly

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-7922:


> leastRestrictive in FlinkTypeFactory does not resolve composite type correctly
> --
>
> Key: FLINK-7922
> URL: https://issues.apache.org/jira/browse/FLINK-7922
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: Rong Rong
>Assignee: Rong Rong
>Priority: Major
> Fix For: 1.4.0, 1.5.0, 1.3.4
>
>
> FlinkTypeFactory does not override the following function correctly:
> {code:java}
> def leastRestrictive(types: util.List[RelDataType]): RelDataType = {
>   //... 
> }
> {code}
> dealing with SQL such as:
> {code:sql}
> CASE
>   WHEN <condition> THEN
>     <composite-type-expression>
>   ELSE
>     NULL
> END
> {code}
> will trigger a runtime exception.
> See the following test sample for more details:
> https://github.com/walterddr/flink/commit/a5f2affc9bbbd50f06200f099c90597e519e9170





[jira] [Reopened] (FLINK-7939) DataStream of atomic type cannot be converted to Table with time attributes

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-7939:


> DataStream of atomic type cannot be converted to Table with time attributes
> ---
>
> Key: FLINK-7939
> URL: https://issues.apache.org/jira/browse/FLINK-7939
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0, 1.3.3
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>Priority: Major
> Fix For: 1.4.0, 1.3.4
>
>
> A DataStream of an atomic type, such as {{DataStream<String>}} or 
> {{DataStream<Long>}}, cannot be converted into a {{Table}} with a time 
> attribute.
> {code}
> DataStream<String> stream = ...
> Table table = tEnv.fromDataStream(stream, "string, rowtime.rowtime")
> {code}
> yields
> {code}
> Exception in thread "main" org.apache.flink.table.api.TableException: Field 
> reference expression requested.
> at 
> org.apache.flink.table.api.TableEnvironment$$anonfun$1.apply(TableEnvironment.scala:630)
> at 
> org.apache.flink.table.api.TableEnvironment$$anonfun$1.apply(TableEnvironment.scala:624)
> at 
> scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
> at 
> scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
> at 
> scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
> at scala.collection.mutable.ArrayOps$ofRef.flatMap(ArrayOps.scala:186)
> at 
> org.apache.flink.table.api.TableEnvironment.getFieldInfo(TableEnvironment.scala:624)
> at 
> org.apache.flink.table.api.StreamTableEnvironment.registerDataStreamInternal(StreamTableEnvironment.scala:398)
> at 
> org.apache.flink.table.api.scala.StreamTableEnvironment.fromDataStream(StreamTableEnvironment.scala:85)
> {code}
> As a workaround the atomic type can be wrapped in {{Tuple1}}, i.e., convert a 
> {{DataStream<String>}} into a {{DataStream<Tuple1<String>>}}.
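A minimal sketch of this workaround, using a hypothetical stand-in for Flink's {{Tuple1}} and a plain {{List}} as a stand-in for {{DataStream}} (so the sketch is self-contained; in Flink the wrapping would be a {{map}} over the stream). The point: wrapping each atomic element in a one-field tuple gives the Table API a named field ({{f0}}) to reference alongside the time attribute.

```java
import java.util.ArrayList;
import java.util.List;

public class Tuple1Workaround {
    // Minimal stand-in for org.apache.flink.api.java.tuple.Tuple1.
    static final class Tuple1<T> {
        final T f0;
        Tuple1(T f0) { this.f0 = f0; }
    }

    // Models DataStream<T> -> DataStream<Tuple1<T>> on lists; in Flink this
    // would be stream.map(Tuple1::new) with an appropriate type hint.
    static <T> List<Tuple1<T>> wrap(List<T> atomic) {
        List<Tuple1<T>> out = new ArrayList<>();
        for (T value : atomic) {
            out.add(new Tuple1<>(value));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Tuple1<String>> wrapped = wrap(List.of("a", "b"));
        // Each element now exposes a named field f0 that a table schema
        // expression like "f0, rowtime.rowtime" could refer to.
        System.out.println(wrapped.get(0).f0); // prints "a"
    }
}
```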





[jira] [Reopened] (FLINK-7742) Fix array access might be out of bounds

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-7742:


> Fix array access might be out of bounds
> ---
>
> Key: FLINK-7742
> URL: https://issues.apache.org/jira/browse/FLINK-7742
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.3.2
>Reporter: Hai Zhou UTC+8
>Assignee: Hai Zhou UTC+8
>Priority: Major
> Fix For: 1.4.0, 1.3.4
>
>






[jira] [Reopened] (FLINK-7764) FlinkKafkaProducer010 does not accept name, uid, or parallelism

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-7764:


> FlinkKafkaProducer010 does not accept name, uid, or parallelism
> ---
>
> Key: FLINK-7764
> URL: https://issues.apache.org/jira/browse/FLINK-7764
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Fabian Hueske
>Assignee: Xingcan Cui
>Priority: Major
> Fix For: 1.4.0, 1.3.3
>
>
> As [reported on the user 
> list|https://lists.apache.org/thread.html/1e97b79cb5611a942beb609a61b5fba6d62a49c9c86336718a9a0004@%3Cuser.flink.apache.org%3E]:
> When I try to use KafkaProducer with timestamps it fails to set name, uid or 
> parallelism. It uses default values.
> {code}
> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration producer = 
> FlinkKafkaProducer010
> .writeToKafkaWithTimestamps(stream, topicName, schema, props, 
> partitioner);
> producer.setFlushOnCheckpoint(flushOnCheckpoint);
> producer.name("foo")
> .uid("bar")
> .setParallelism(5);
> return producer;
> {code}
> As operator name it shows "FlinKafkaProducer 0.10.x" with the typo.





[jira] [Closed] (FLINK-7656) Switch to user ClassLoader when invoking initializeOnMaster finalizeOnMaster

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-7656.
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.3)
   1.3.4

> Switch to user ClassLoader when invoking initializeOnMaster finalizeOnMaster
> 
>
> Key: FLINK-7656
> URL: https://issues.apache.org/jira/browse/FLINK-7656
> Project: Flink
>  Issue Type: Bug
>  Components: Local Runtime
>Affects Versions: 1.3.2
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>Priority: Major
> Fix For: 1.3.4, 1.4.0
>
>
> The contract that Flink provides to usercode is that the usercode 
> classloader is the context classloader whenever usercode is called.
> In {{OutputFormatVertex}} usercode is called in the {{initializeOnMaster()}} 
> and {{finalizeOnMaster()}} methods but the context classloader is not set to 
> the usercode classloader.
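A sketch of the pattern such a fix applies (not Flink's actual implementation; the helper name is hypothetical): install the usercode classloader as the thread's context classloader before invoking user code, and restore the previous one afterwards, even on failure.

```java
import java.util.concurrent.Callable;

public class ClassLoaderSwitch {
    // Run userCode with userLoader as the context classloader, restoring
    // the previous context classloader afterwards.
    static <T> T runWithUserClassLoader(ClassLoader userLoader, Callable<T> userCode) throws Exception {
        Thread current = Thread.currentThread();
        ClassLoader previous = current.getContextClassLoader();
        current.setContextClassLoader(userLoader);
        try {
            return userCode.call();
        } finally {
            // Always restore, even if user code throws.
            current.setContextClassLoader(previous);
        }
    }

    public static void main(String[] args) throws Exception {
        ClassLoader userLoader = new java.net.URLClassLoader(new java.net.URL[0]);
        // User code observes the usercode classloader as its context classloader.
        ClassLoader seen = runWithUserClassLoader(userLoader,
            () -> Thread.currentThread().getContextClassLoader());
        System.out.println(seen == userLoader); // prints "true"
    }
}
```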





[jira] [Reopened] (FLINK-7659) Unprotected access to inProgress in JobCancellationWithSavepointHandlers#handleNewRequest

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-7659:


> Unprotected access to inProgress in 
> JobCancellationWithSavepointHandlers#handleNewRequest
> -
>
> Key: FLINK-7659
> URL: https://issues.apache.org/jira/browse/FLINK-7659
> Project: Flink
>  Issue Type: Bug
>  Components: REST
>Reporter: Ted Yu
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.4.0, 1.3.4
>
>
> Here is related code:
> {code}
>   } finally {
> inProgress.remove(jobId);
>   }
> {code}
> A little lower, in another finally block, there is:
> {code}
>   synchronized (lock) {
> if (!success) {
>   inProgress.remove(jobId);
> {code}
> which is correct.
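The fix pattern can be sketched in plain Java (this is not the actual Flink handler; the class and field names are illustrative): every mutation of the shared {{inProgress}} map, including the removal in the first {{finally}} block, must happen under the same lock.

```java
import java.util.HashMap;
import java.util.Map;

public class GuardedRemoval {
    private final Object lock = new Object();
    private final Map<Long, String> inProgress = new HashMap<>();

    void handleNewRequest(long jobId) {
        synchronized (lock) {
            inProgress.put(jobId, "savepoint-pending");
        }
        try {
            // ... trigger cancel-with-savepoint ...
        } finally {
            // Guard the removal with the same lock that protects the other
            // accesses, instead of calling remove() unguarded.
            synchronized (lock) {
                inProgress.remove(jobId);
            }
        }
    }

    int size() {
        synchronized (lock) {
            return inProgress.size();
        }
    }

    public static void main(String[] args) {
        GuardedRemoval handler = new GuardedRemoval();
        handler.handleNewRequest(42L);
        System.out.println(handler.size()); // prints "0"
    }
}
```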





[jira] [Reopened] (FLINK-7656) Switch to user ClassLoader when invoking initializeOnMaster finalizeOnMaster

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-7656:


> Switch to user ClassLoader when invoking initializeOnMaster finalizeOnMaster
> 
>
> Key: FLINK-7656
> URL: https://issues.apache.org/jira/browse/FLINK-7656
> Project: Flink
>  Issue Type: Bug
>  Components: Local Runtime
>Affects Versions: 1.3.2
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>Priority: Major
> Fix For: 1.4.0, 1.3.4
>
>
> The contract that Flink provides to usercode is that the usercode 
> classloader is the context classloader whenever usercode is called.
> In {{OutputFormatVertex}} usercode is called in the {{initializeOnMaster()}} 
> and {{finalizeOnMaster()}} methods but the context classloader is not set to 
> the usercode classloader.





[jira] [Closed] (FLINK-7659) Unprotected access to inProgress in JobCancellationWithSavepointHandlers#handleNewRequest

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-7659.
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.3)
   1.3.4

> Unprotected access to inProgress in 
> JobCancellationWithSavepointHandlers#handleNewRequest
> -
>
> Key: FLINK-7659
> URL: https://issues.apache.org/jira/browse/FLINK-7659
> Project: Flink
>  Issue Type: Bug
>  Components: REST
>Reporter: Ted Yu
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.3.4, 1.4.0
>
>
> Here is related code:
> {code}
>   } finally {
> inProgress.remove(jobId);
>   }
> {code}
> A little lower, in another finally block, there is:
> {code}
>   synchronized (lock) {
> if (!success) {
>   inProgress.remove(jobId);
> {code}
> which is correct.





[jira] [Closed] (FLINK-7742) Fix array access might be out of bounds

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-7742.
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.3)
   1.3.4

> Fix array access might be out of bounds
> ---
>
> Key: FLINK-7742
> URL: https://issues.apache.org/jira/browse/FLINK-7742
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.3.2
>Reporter: Hai Zhou UTC+8
>Assignee: Hai Zhou UTC+8
>Priority: Major
> Fix For: 1.3.4, 1.4.0
>
>






[jira] [Closed] (FLINK-7626) Add some metric description about checkpoints

2018-03-12 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-7626.
--
   Resolution: Fixed
Fix Version/s: (was: 1.3.3)
   1.3.4

> Add some metric description about checkpoints
> -
>
> Key: FLINK-7626
> URL: https://issues.apache.org/jira/browse/FLINK-7626
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Metrics
>Affects Versions: 1.3.2
>Reporter: Hai Zhou UTC+8
>Assignee: Hai Zhou UTC+8
>Priority: Major
> Fix For: 1.3.4, 1.4.0
>
>
> I exported the metrics to the logfile via 
> Slf4jReporter (https://issues.apache.org/jira/browse/FLINK-4831) and found 
> that some checkpoint metrics are not described in the 
> documentation, so I added them.
> {noformat}
> //Number of total checkpoints (in progress, completed, failed)
> totalNumberOfCheckpoints
>  //Number of in progress checkpoints.
> numberOfInProgressCheckpoints
> //Number of successfully completed checkpoints
> numberOfCompletedCheckpoints
> //Number of failed checkpoints.
> numberOfFailedCheckpoints
> //Timestamp when the checkpoint was restored at the coordinator.
> lastCheckpointRestoreTimestamp
> //Buffered bytes during alignment over all subtasks.
> lastCheckpointAlignmentBuffered
> {noformat}




