[jira] [Commented] (FLINK-7255) ListStateDescriptor example uses wrong constructor

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16099540#comment-16099540
 ] 

ASF GitHub Bot commented on FLINK-7255:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4389
  
Nice catch! +1, LGTM.


> ListStateDescriptor example uses wrong constructor
> --
>
> Key: FLINK-7255
> URL: https://issues.apache.org/jira/browse/FLINK-7255
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, State Backends, Checkpointing
>Affects Versions: 1.3.1, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0, 1.3.2
>
>
> The {{Working with state}} docs contain an example for using a 
> ListStateDescriptor.
> In the example, however, a default value is passed to the constructor, which 
> is only possible for ValueStateDescriptors.





[GitHub] flink issue #4389: [FLINK-7255] [docs] Remove default value from ListStateDe...

2017-07-24 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4389
  
Nice catch! +1, LGTM.




[jira] [Commented] (FLINK-7256) End-to-end tests should only be run after successful compilation

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16099538#comment-16099538
 ] 

ASF GitHub Bot commented on FLINK-7256:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4390
  
+1, LGTM


> End-to-end tests should only be run after successful compilation
> 
>
> Key: FLINK-7256
> URL: https://issues.apache.org/jira/browse/FLINK-7256
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests, Travis
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>
> If the compilation fails (for example, due to checkstyle), the end-to-end tests 
> are currently still run, even though flink-dist most likely wasn't even built.
> Similar to FLINK-7176.





[GitHub] flink issue #4390: [FLINK-7256] [travis] Only run end-to-end tests if no pre...

2017-07-24 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4390
  
+1, LGTM




[jira] [Commented] (FLINK-6951) Incompatible versions of httpcomponents jars for Flink kinesis connector

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16099524#comment-16099524
 ] 

ASF GitHub Bot commented on FLINK-6951:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4150
  
From the looks of https://github.com/druid-io/druid/issues/4456, could it 
be that we need to update our AWS Java SDK version?



> Incompatible versions of httpcomponents jars for Flink kinesis connector
> 
>
> Key: FLINK-6951
> URL: https://issues.apache.org/jira/browse/FLINK-6951
> Project: Flink
>  Issue Type: Bug
>  Components: Kinesis Connector
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Bowen Li
>Priority: Blocker
> Fix For: 1.4.0, 1.3.3
>
>
> In the following thread, Bowen reported incompatible versions of 
> httpcomponents jars for the Flink Kinesis connector:
> http://search-hadoop.com/m/Flink/VkLeQN2m5EySpb1?subj=Re+Incompatible+Apache+Http+lib+in+Flink+kinesis+connector
> We should find a solution such that users don't have to change dependency 
> version(s) themselves when building the Flink Kinesis connector.





[GitHub] flink issue #4150: [FLINK-6951] Incompatible versions of httpcomponents jars...

2017-07-24 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4150
  
From the looks of https://github.com/druid-io/druid/issues/4456, could it 
be that we need to update our AWS Java SDK version?





[jira] [Commented] (FLINK-6951) Incompatible versions of httpcomponents jars for Flink kinesis connector

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16099521#comment-16099521
 ] 

ASF GitHub Bot commented on FLINK-6951:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4150
  
@bowenli86 but isn't the shading of httpcomponents in the Kinesis consumer 
supposed to avoid conflicts with whatever version you're using for 
S3AFileSystem?


> Incompatible versions of httpcomponents jars for Flink kinesis connector
> 
>
> Key: FLINK-6951
> URL: https://issues.apache.org/jira/browse/FLINK-6951
> Project: Flink
>  Issue Type: Bug
>  Components: Kinesis Connector
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Bowen Li
>Priority: Blocker
> Fix For: 1.4.0, 1.3.3
>
>
> In the following thread, Bowen reported incompatible versions of 
> httpcomponents jars for the Flink Kinesis connector:
> http://search-hadoop.com/m/Flink/VkLeQN2m5EySpb1?subj=Re+Incompatible+Apache+Http+lib+in+Flink+kinesis+connector
> We should find a solution such that users don't have to change dependency 
> version(s) themselves when building the Flink Kinesis connector.





[GitHub] flink issue #4150: [FLINK-6951] Incompatible versions of httpcomponents jars...

2017-07-24 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4150
  
@bowenli86 but isn't the shading of httpcomponents in the Kinesis consumer 
supposed to avoid conflicts with whatever version you're using for 
S3AFileSystem?




[jira] [Commented] (FLINK-6940) Clarify the effect of configuring per-job state backend

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16099489#comment-16099489
 ] 

ASF GitHub Bot commented on FLINK-6940:
---

Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/4136
  
@zentol @alpinegizmo  Guys, please let me know your thoughts :)


> Clarify the effect of configuring per-job state backend 
> 
>
> Key: FLINK-6940
> URL: https://issues.apache.org/jira/browse/FLINK-6940
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.3.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.4.0
>
>
> The documentation of the different options for configuring the Flink state 
> backend is confusing. We should add explicit documentation explaining that 
> configuring a per-job state backend in code will override any default state 
> backend configured in flink-conf.yaml.
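To make the override behavior concrete, a minimal sketch (the checkpoint URI and 
the choice of FsStateBackend are illustrative assumptions):

{code:java}
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class PerJobStateBackendExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // This per-job setting overrides any default state backend configured
        // in flink-conf.yaml (e.g. "state.backend: filesystem").
        env.setStateBackend(new FsStateBackend("hdfs://namenode:40010/flink/checkpoints"));
    }
}
{code}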





[GitHub] flink issue #4136: [FLINK-6940][docs] Clarify the effect of configuring per-...

2017-07-24 Thread bowenli86
Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/4136
  
@zentol @alpinegizmo  Guys, please let me know your thoughts :)




[jira] [Commented] (FLINK-6951) Incompatible versions of httpcomponents jars for Flink kinesis connector

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16099488#comment-16099488
 ] 

ASF GitHub Bot commented on FLINK-6951:
---

Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/4150
  
My Flink job checkpoints to S3; I'm configuring S3AFileSystem as shown in 
https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/aws.html#flink-for-hadoop-27.

I doubt this error is due to S3AFileSystem using httpcomponents 4.2, 
since httpcomponents 4.3.x [has deprecated 
SSLSocketFactory](https://hc.apache.org/httpcomponents-client-4.3.x/httpclient/apidocs/org/apache/http/conn/ssl/SSLSocketFactory.html)
 


> Incompatible versions of httpcomponents jars for Flink kinesis connector
> 
>
> Key: FLINK-6951
> URL: https://issues.apache.org/jira/browse/FLINK-6951
> Project: Flink
>  Issue Type: Bug
>  Components: Kinesis Connector
>Affects Versions: 1.3.0
>Reporter: Ted Yu
>Assignee: Bowen Li
>Priority: Blocker
> Fix For: 1.4.0, 1.3.3
>
>
> In the following thread, Bowen reported incompatible versions of 
> httpcomponents jars for the Flink Kinesis connector:
> http://search-hadoop.com/m/Flink/VkLeQN2m5EySpb1?subj=Re+Incompatible+Apache+Http+lib+in+Flink+kinesis+connector
> We should find a solution such that users don't have to change dependency 
> version(s) themselves when building the Flink Kinesis connector.





[GitHub] flink issue #4150: [FLINK-6951] Incompatible versions of httpcomponents jars...

2017-07-24 Thread bowenli86
Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/4150
  
My Flink job checkpoints to S3; I'm configuring S3AFileSystem as shown in 
https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/aws.html#flink-for-hadoop-27.

I doubt this error is due to S3AFileSystem using httpcomponents 4.2, 
since httpcomponents 4.3.x [has deprecated 
SSLSocketFactory](https://hc.apache.org/httpcomponents-client-4.3.x/httpclient/apidocs/org/apache/http/conn/ssl/SSLSocketFactory.html)
 




[jira] [Commented] (FLINK-7258) IllegalArgumentException in Netty bootstrap with large memory state segment size

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16099428#comment-16099428
 ] 

ASF GitHub Bot commented on FLINK-7258:
---

Github user zhijiangW commented on the issue:

https://github.com/apache/flink/pull/4391
  
@uce, really good to see this fix. I remember running into this problem in 
our production environment before and changing the ordering then.


> IllegalArgumentException in Netty bootstrap with large memory state segment 
> size
> 
>
> Key: FLINK-7258
> URL: https://issues.apache.org/jira/browse/FLINK-7258
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.3.1
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
>Priority: Minor
>
> In NettyBootstrap we configure the low and high watermarks in the following 
> order:
> {code}
> bootstrap.childOption(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 
> config.getMemorySegmentSize() + 1);
> bootstrap.childOption(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 2 * 
> config.getMemorySegmentSize());
> {code}
> When the memory segment size is higher than the default high water mark, this 
> throws an `IllegalArgumentException` when a client tries to connect. Hence, 
> this unfortunately only happens at runtime, when an intermediate result is 
> requested. This doesn't fail the job, but logs a warning and ignores the 
> failed configuration attempt, potentially resulting in degraded performance 
> because of a lower than expected watermark.
> A simple fix is to first configure the high water mark and only then 
> configure the low watermark.
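The fix described above, applied to the snippet from the description, simply 
swaps the two calls:

{code}
// Configure the high watermark first so that the low watermark is never
// temporarily above the currently configured high watermark.
bootstrap.childOption(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 2 * config.getMemorySegmentSize());
bootstrap.childOption(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, config.getMemorySegmentSize() + 1);
{code}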





[GitHub] flink issue #4391: [FLINK-7258] [network] Fix watermark configuration order

2017-07-24 Thread zhijiangW
Github user zhijiangW commented on the issue:

https://github.com/apache/flink/pull/4391
  
@uce, really good to see this fix. I remember running into this problem in 
our production environment before and changing the ordering then.




[jira] [Updated] (FLINK-5486) Lack of synchronization in BucketingSink#handleRestoredBucketState()

2017-07-24 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang updated FLINK-5486:

Fix Version/s: 1.3.3
   1.3.2

> Lack of synchronization in BucketingSink#handleRestoredBucketState()
> 
>
> Key: FLINK-5486
> URL: https://issues.apache.org/jira/browse/FLINK-5486
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming Connectors
>Reporter: Ted Yu
>Assignee: mingleizhang
> Fix For: 1.3.2, 1.3.3
>
>
> Here is related code:
> {code}
>   
> handlePendingFilesForPreviousCheckpoints(bucketState.pendingFilesPerCheckpoint);
>   synchronized (bucketState.pendingFilesPerCheckpoint) {
> bucketState.pendingFilesPerCheckpoint.clear();
>   }
> {code}
> The handlePendingFilesForPreviousCheckpoints() call should be enclosed inside 
> the synchronization block. Otherwise, during the processing of 
> handlePendingFilesForPreviousCheckpoints(), some entries of the map may be 
> cleared.
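The fix suggested by the description is to widen the synchronized block so the 
processing and the clear() are guarded by the same lock:

{code}
synchronized (bucketState.pendingFilesPerCheckpoint) {
  // Processing the map and clearing it now happen under the same lock,
  // so no entries can be cleared mid-processing.
  handlePendingFilesForPreviousCheckpoints(bucketState.pendingFilesPerCheckpoint);
  bucketState.pendingFilesPerCheckpoint.clear();
}
{code}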





[jira] [Commented] (FLINK-5486) Lack of synchronization in BucketingSink#handleRestoredBucketState()

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16099308#comment-16099308
 ] 

ASF GitHub Bot commented on FLINK-5486:
---

Github user tedyu commented on the issue:

https://github.com/apache/flink/pull/4356
  
lgtm


> Lack of synchronization in BucketingSink#handleRestoredBucketState()
> 
>
> Key: FLINK-5486
> URL: https://issues.apache.org/jira/browse/FLINK-5486
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming Connectors
>Reporter: Ted Yu
>Assignee: mingleizhang
>
> Here is related code:
> {code}
>   
> handlePendingFilesForPreviousCheckpoints(bucketState.pendingFilesPerCheckpoint);
>   synchronized (bucketState.pendingFilesPerCheckpoint) {
> bucketState.pendingFilesPerCheckpoint.clear();
>   }
> {code}
> The handlePendingFilesForPreviousCheckpoints() call should be enclosed inside 
> the synchronization block. Otherwise, during the processing of 
> handlePendingFilesForPreviousCheckpoints(), some entries of the map may be 
> cleared.





[GitHub] flink issue #4356: [FLINK-5486] Fix lacking of synchronization in BucketingS...

2017-07-24 Thread tedyu
Github user tedyu commented on the issue:

https://github.com/apache/flink/pull/4356
  
lgtm




[jira] [Commented] (FLINK-7245) Enhance the operators to support holding back watermarks

2017-07-24 Thread Xingcan Cui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16099299#comment-16099299
 ] 

Xingcan Cui commented on FLINK-7245:


Sure. I'll draw up a design document for that, as well as for dynamic timestamp 
acquisition.

> Enhance the operators to support holding back watermarks
> 
>
> Key: FLINK-7245
> URL: https://issues.apache.org/jira/browse/FLINK-7245
> Project: Flink
>  Issue Type: New Feature
>  Components: DataStream API
>Reporter: Xingcan Cui
>Assignee: Xingcan Cui
>
> Currently the watermarks are applied and emitted by the 
> {{AbstractStreamOperator}} instantly. 
> {code:java}
> public void processWatermark(Watermark mark) throws Exception {
>   if (timeServiceManager != null) {
>   timeServiceManager.advanceWatermark(mark);
>   }
>   output.emitWatermark(mark);
> }
> {code}
> Some calculation results (with timestamp fields) triggered by these 
> watermarks (e.g., join or aggregate results) may be regarded as delayed by 
> the downstream operators since their timestamps must be less than or equal to 
> the corresponding triggers. 
> This issue aims to add another "working mode", which supports holding back 
> watermarks, to current operators. These watermarks should be blocked and 
> stored by the operators until all the corresponding newly generated results are 
> emitted.
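As an illustration of the proposed mode, a minimal sketch (the {{holdWatermarks}} 
flag, the {{pendingWatermark}} field, and the flush hook are assumptions of this 
sketch, not existing Flink API):

{code:java}
private boolean holdWatermarks;     // hypothetical switch for the new working mode
private Watermark pendingWatermark; // last watermark held back

public void processWatermark(Watermark mark) throws Exception {
    if (timeServiceManager != null) {
        timeServiceManager.advanceWatermark(mark); // timers still fire
    }
    if (holdWatermarks) {
        pendingWatermark = mark; // hold emission back until results are out
    } else {
        output.emitWatermark(mark);
    }
}

// To be called once all results triggered by the held watermark have been emitted.
private void flushPendingWatermark() {
    if (pendingWatermark != null) {
        output.emitWatermark(pendingWatermark);
        pendingWatermark = null;
    }
}
{code}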





[jira] [Commented] (FLINK-7258) IllegalArgumentException in Netty bootstrap with large memory state segment size

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16098781#comment-16098781
 ] 

ASF GitHub Bot commented on FLINK-7258:
---

GitHub user uce opened a pull request:

https://github.com/apache/flink/pull/4391

[FLINK-7258] [network] Fix watermark configuration order

## Purpose
This PR changes the order in which low and high watermarks are configured 
for Netty server child connections (high first). That way we avoid running into 
an `IllegalArgumentException` when the low watermark is larger than the high 
watermark (relevant if the configured memory segment size is larger than the 
default).

This situation surfaced only as a logged warning and the low watermark 
configuration was ignored.

## Changelog
- Configure high watermark before low watermark in `NettyServer`
- Configure high watermark before low watermark in `KvStateServer`

## Verifying this change
- The change is pretty trivial with an extended 
`NettyServerLowAndHighWatermarkTest` that now checks the expected watermarks.
- I didn't add a test for `KvStateServer`, because the watermarks can't be 
configured there manually.
- To verify, you can run `NettyServerLowAndHighWatermarkTest` with logging 
before and after this change and verify that no warning is logged anymore.

## Does this pull request potentially affect one of the following parts?
- Dependencies (does it add or upgrade a dependency): **no**
- The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
- The serializers: **no**
- The runtime per-record code paths (performance sensitive): **no**
- Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: **no**

## Documentation
- Does this pull request introduce a new feature? **no**
- If yes, how is the feature documented? **not applicable**


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/uce/flink 7258-watermark_config

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4391.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4391


commit 73998ba1328d4bf61ee979ed327b0a684ed03aa7
Author: Ufuk Celebi 
Date:   2017-07-24T16:47:23Z

[FLINK-7258] [network] Fix watermark configuration order

When configuring larger memory segment sizes, configuring the
low watermark before the high watermark may lead to an
IllegalArgumentException, because the low watermark will
temporarily be higher than the high watermark. It's necessary
to configure the high watermark before the low watermark.

For the queryable state server in KvStateServer I didn't
add an extra test as the watermarks cannot be configured there.




> IllegalArgumentException in Netty bootstrap with large memory state segment 
> size
> 
>
> Key: FLINK-7258
> URL: https://issues.apache.org/jira/browse/FLINK-7258
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.3.1
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
>Priority: Minor
>
> In NettyBootstrap we configure the low and high watermarks in the following 
> order:
> {code}
> bootstrap.childOption(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 
> config.getMemorySegmentSize() + 1);
> bootstrap.childOption(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 2 * 
> config.getMemorySegmentSize());
> {code}
> When the memory segment size is higher than the default high water mark, this 
> throws an `IllegalArgumentException` when a client tries to connect. Hence, 
> this unfortunately only happens at runtime, when an intermediate result is 
> requested. This doesn't fail the job, but logs a warning and ignores the 
> failed configuration attempt, potentially resulting in degraded performance 
> because of a lower than expected watermark.
> A simple fix is to first configure the high water mark and only then 
> configure the low watermark.





[GitHub] flink pull request #4391: [FLINK-7258] [network] Fix watermark configuration...

2017-07-24 Thread uce
GitHub user uce opened a pull request:

https://github.com/apache/flink/pull/4391

[FLINK-7258] [network] Fix watermark configuration order

## Purpose
This PR changes the order in which low and high watermarks are configured 
for Netty server child connections (high first). That way we avoid running into 
an `IllegalArgumentException` when the low watermark is larger than the high 
watermark (relevant if the configured memory segment size is larger than the 
default).

This situation surfaced only as a logged warning and the low watermark 
configuration was ignored.

## Changelog
- Configure high watermark before low watermark in `NettyServer`
- Configure high watermark before low watermark in `KvStateServer`

## Verifying this change
- The change is pretty trivial with an extended 
`NettyServerLowAndHighWatermarkTest` that now checks the expected watermarks.
- I didn't add a test for `KvStateServer`, because the watermarks can't be 
configured there manually.
- To verify, you can run `NettyServerLowAndHighWatermarkTest` with logging 
before and after this change and verify that no warning is logged anymore.

## Does this pull request potentially affect one of the following parts?
- Dependencies (does it add or upgrade a dependency): **no**
- The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
- The serializers: **no**
- The runtime per-record code paths (performance sensitive): **no**
- Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: **no**

## Documentation
- Does this pull request introduce a new feature? **no**
- If yes, how is the feature documented? **not applicable**


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/uce/flink 7258-watermark_config

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4391.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4391


commit 73998ba1328d4bf61ee979ed327b0a684ed03aa7
Author: Ufuk Celebi 
Date:   2017-07-24T16:47:23Z

[FLINK-7258] [network] Fix watermark configuration order

When configuring larger memory segment sizes, configuring the
low watermark before the high watermark may lead to an
IllegalArgumentException, because the low watermark will
temporarily be higher than the high watermark. It's necessary
to configure the high watermark before the low watermark.

For the queryable state server in KvStateServer I didn't
add an extra test as the watermarks cannot be configured there.






[GitHub] flink issue #4368: [FLINK-7210] Introduce TwoPhaseCommitSinkFunction

2017-07-24 Thread EronWright
Github user EronWright commented on the issue:

https://github.com/apache/flink/pull/4368
  
@pnowojski looks great. You mentioned the Pravega connector as a 
motivation; did you look at its implementation, and do you anticipate any 
challenges in porting it to this new framework?


https://github.com/pravega/flink-connectors/blob/master/src/main/java/io/pravega/connectors/flink/FlinkExactlyOncePravegaWriter.java
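For reference, the lifecycle such a framework encapsulates, as a rough sketch 
(the class and method names here follow the generic two-phase commit protocol 
and are assumptions, not necessarily the PR's exact API):

{code:java}
public abstract class TwoPhaseCommitSketch<IN, TXN> {
    // Open a transaction covering the records of one checkpoint interval.
    protected abstract TXN beginTransaction() throws Exception;

    // Write a record into the currently open transaction.
    protected abstract void write(TXN transaction, IN value) throws Exception;

    // On the checkpoint barrier: flush and make the written data durable.
    protected abstract void preCommit(TXN transaction) throws Exception;

    // Once the checkpoint is confirmed complete: commit. Must be idempotent,
    // since it may be replayed during recovery.
    protected abstract void commit(TXN transaction) throws Exception;

    // On failure, or when restoring an uncommitted transaction: roll back.
    protected abstract void abort(TXN transaction) throws Exception;
}
{code}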





[jira] [Updated] (FLINK-7258) IllegalArgumentException in Netty bootstrap with large memory state segment size

2017-07-24 Thread Ufuk Celebi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ufuk Celebi updated FLINK-7258:
---
Priority: Minor  (was: Major)

> IllegalArgumentException in Netty bootstrap with large memory state segment 
> size
> 
>
> Key: FLINK-7258
> URL: https://issues.apache.org/jira/browse/FLINK-7258
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.3.1
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
>Priority: Minor
>
> In NettyBootstrap we configure the low and high watermarks in the following 
> order:
> {code}
> bootstrap.childOption(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 
> config.getMemorySegmentSize() + 1);
> bootstrap.childOption(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 2 * 
> config.getMemorySegmentSize());
> {code}
> When the memory segment size is higher than the default high water mark, this 
> throws an `IllegalArgumentException` when a client tries to connect. Hence, 
> this unfortunately only happens at runtime, when an intermediate result is 
> requested. This doesn't fail the job, but logs a warning and ignores the 
> failed configuration attempt, potentially resulting in degraded performance 
> because of a lower than expected watermark.
> A simple fix is to first configure the high water mark and only then 
> configure the low watermark.





[jira] [Updated] (FLINK-7258) IllegalArgumentException in Netty bootstrap with large memory state segment size

2017-07-24 Thread Ufuk Celebi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ufuk Celebi updated FLINK-7258:
---
Description: 
In NettyBootstrap we configure the low and high watermarks in the following 
order:
{code}
bootstrap.childOption(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 
config.getMemorySegmentSize() + 1);
bootstrap.childOption(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 2 * 
config.getMemorySegmentSize());
{code}

When the memory segment size is higher than the default high water mark, this 
throws an `IllegalArgumentException` when a client tries to connect. Hence, 
this unfortunately only happens at runtime, when an intermediate result is 
requested. This doesn't fail the job, but logs a warning and ignores the failed 
configuration attempt, potentially resulting in degraded performance because of 
a lower than expected watermark.

A simple fix is to first configure the high water mark and only then configure 
the low watermark.


  was:
In NettyBootstrap we configure the low and high watermarks in the following 
order:
{code}
bootstrap.childOption(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 
config.getMemorySegmentSize() + 1);
bootstrap.childOption(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 2 * 
config.getMemorySegmentSize());
{code}

When the memory segment size is higher than the default high water mark, this 
throws an `IllegalArgumentException` when a client tries to connect. Hence, 
this unfortunately only fails at runtime, when an intermediate result is 
requested.

A simple fix is to first configure the high water mark and only then configure 
the low watermark.



> IllegalArgumentException in Netty bootstrap with large memory state segment 
> size
> 
>
> Key: FLINK-7258
> URL: https://issues.apache.org/jira/browse/FLINK-7258
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.3.1
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
>
> In NettyBootstrap we configure the low and high watermarks in the following 
> order:
> {code}
> bootstrap.childOption(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 
> config.getMemorySegmentSize() + 1);
> bootstrap.childOption(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 2 * 
> config.getMemorySegmentSize());
> {code}
> When the memory segment size is higher than the default high water mark, this 
> throws an `IllegalArgumentException` when a client tries to connect. Hence, 
> this unfortunately only happens at runtime, when an intermediate result is 
> requested. This doesn't fail the job, but logs a warning and ignores the 
> failed configuration attempt, potentially resulting in degraded performance 
> because of a lower than expected watermark.
> A simple fix is to first configure the high water mark and only then 
> configure the low watermark.





[jira] [Created] (FLINK-7258) IllegalArgumentException in Netty bootstrap with large memory state segment size

2017-07-24 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-7258:
--

 Summary: IllegalArgumentException in Netty bootstrap with large 
memory state segment size
 Key: FLINK-7258
 URL: https://issues.apache.org/jira/browse/FLINK-7258
 Project: Flink
  Issue Type: Bug
  Components: Network
Affects Versions: 1.3.1
Reporter: Ufuk Celebi
Assignee: Ufuk Celebi


In NettyBootstrap we configure the low and high watermarks in the following 
order:
{code}
bootstrap.childOption(ChannelOption.WRITE_BUFFER_LOW_WATER_MARK, 
config.getMemorySegmentSize() + 1);
bootstrap.childOption(ChannelOption.WRITE_BUFFER_HIGH_WATER_MARK, 2 * 
config.getMemorySegmentSize());
{code}

When the memory segment size is higher than the default high water mark, this 
throws an `IllegalArgumentException` when a client tries to connect. Hence, 
this unfortunately only fails at runtime, when an intermediate result is 
requested.

A simple fix is to first configure the high water mark and only then configure 
the low watermark.






[GitHub] flink issue #4384: [FLINK-7202] Split supressions for flink-core, flink-java...

2017-07-24 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4384
  
Looks good to me, +1 once travis gives green light.




[jira] [Commented] (FLINK-7202) Split supressions for flink-core, flink-java, flink-optimizer per package

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16098492#comment-16098492
 ] 

ASF GitHub Bot commented on FLINK-7202:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4384
  
Looks good to me, +1 once travis gives green light.


> Split supressions for flink-core, flink-java, flink-optimizer per package
> -
>
> Key: FLINK-7202
> URL: https://issues.apache.org/jira/browse/FLINK-7202
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Checkstyle
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>






[GitHub] flink issue #4383: [hotfix] [optimizer] Normalize job plan operator formatti...

2017-07-24 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4383
  
@zentol I'd estimate that to be a similar change.




[jira] [Commented] (FLINK-7202) Split supressions for flink-core, flink-java, flink-optimizer per package

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16098464#comment-16098464
 ] 

ASF GitHub Bot commented on FLINK-7202:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4384#discussion_r129044374
  
--- Diff: tools/maven/suppressions-optimizer.xml ---
@@ -56,4 +56,32 @@ under the License.

+
+   
+   
+   
+   
+   
+   
+   
--- End diff --

For clarity, I would also suggest grouping main/test suppressions 
by package, instead of first having all main suppressions and then all test 
suppressions.


> Split supressions for flink-core, flink-java, flink-optimizer per package
> -
>
> Key: FLINK-7202
> URL: https://issues.apache.org/jira/browse/FLINK-7202
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Checkstyle
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>






[GitHub] flink pull request #4384: [FLINK-7202] Split supressions for flink-core, fli...

2017-07-24 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4384#discussion_r129044374
  
--- Diff: tools/maven/suppressions-optimizer.xml ---
@@ -56,4 +56,32 @@ under the License.

+
+   
+   
+   
+   
+   
+   
+   
--- End diff --

For clarity, I would also suggest grouping main/test suppressions 
by package, instead of first having all main suppressions and then all test 
suppressions.




[jira] [Created] (FLINK-7257) Extend flink-runtime checkstyle coverage to tests

2017-07-24 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-7257:
---

 Summary: Extend flink-runtime checkstyle coverage to tests
 Key: FLINK-7257
 URL: https://issues.apache.org/jira/browse/FLINK-7257
 Project: Flink
  Issue Type: Improvement
  Components: Checkstyle
Affects Versions: 1.4.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
Priority: Minor
 Fix For: 1.4.0


Checkstyle is currently completely skipped for the test files in flink-runtime, 
which is not what I intended.





[jira] [Commented] (FLINK-7202) Split supressions for flink-core, flink-java, flink-optimizer per package

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16098452#comment-16098452
 ] 

ASF GitHub Bot commented on FLINK-7202:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4384#discussion_r129042176
  
--- Diff: tools/maven/suppressions-optimizer.xml ---
@@ -56,4 +56,32 @@ under the License.

+
+   
+   
+   
+   
+   
+   
+   
--- End diff --

Hmm, let's keep it as it is, but add a comment explaining why it looks that way.

FYI, I just noticed that we're also excluding test files for flink-runtime, 
which is why this issue didn't pop up earlier...


> Split supressions for flink-core, flink-java, flink-optimizer per package
> -
>
> Key: FLINK-7202
> URL: https://issues.apache.org/jira/browse/FLINK-7202
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Checkstyle
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>






[GitHub] flink pull request #4384: [FLINK-7202] Split supressions for flink-core, fli...

2017-07-24 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4384#discussion_r129042176
  
--- Diff: tools/maven/suppressions-optimizer.xml ---
@@ -56,4 +56,32 @@ under the License.

+
+   
+   
+   
+   
+   
+   
+   
--- End diff --

Hmm, let's keep it as it is, but add a comment explaining why it looks that way.

FYI, I just noticed that we're also excluding test files for flink-runtime, 
which is why this issue didn't pop up earlier...




[jira] [Commented] (FLINK-7249) Bump Java version in build plugin

2017-07-24 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16098425#comment-16098425
 ] 

Stephan Ewen commented on FLINK-7249:
-

I think so, yes. It should be referenced in a few more places, like the enforcer 
plugin or so...

> Bump Java version in build plugin
> -
>
> Key: FLINK-7249
> URL: https://issues.apache.org/jira/browse/FLINK-7249
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Stephan Ewen
> Fix For: 1.4.0
>
>






[jira] [Commented] (FLINK-7202) Split supressions for flink-core, flink-java, flink-optimizer per package

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16098414#comment-16098414
 ] 

ASF GitHub Bot commented on FLINK-7202:
---

Github user dawidwys commented on a diff in the pull request:

https://github.com/apache/flink/pull/4384#discussion_r129036125
  
--- Diff: tools/maven/suppressions-optimizer.xml ---
@@ -56,4 +56,32 @@ under the License.

+
+   
+   
+   
+   
+   
+   
+   
--- End diff --

No, they are not. The rule with the regex 
`files="(.*)optimizer[/\\]operators[/\\](.*)"` is also applied. 

I divided it, though, so as not to apply e.g. `AvoidStarImport` to `src`, as in 
the old ("pre-strict") checkstyle it was already prohibited (but not applied to 
test sources). 

Do you think it would be better to split `test` and `src` completely by 
the `files` regex and apply the complete set of suppressions?


> Split supressions for flink-core, flink-java, flink-optimizer per package
> -
>
> Key: FLINK-7202
> URL: https://issues.apache.org/jira/browse/FLINK-7202
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Checkstyle
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>






[GitHub] flink pull request #4384: [FLINK-7202] Split supressions for flink-core, fli...

2017-07-24 Thread dawidwys
Github user dawidwys commented on a diff in the pull request:

https://github.com/apache/flink/pull/4384#discussion_r129036125
  
--- Diff: tools/maven/suppressions-optimizer.xml ---
@@ -56,4 +56,32 @@ under the License.

+
+   
+   
+   
+   
+   
+   
+   
--- End diff --

No, they are not. The rule with the regex 
`files="(.*)optimizer[/\\]operators[/\\](.*)"` is also applied. 

I divided it, though, so as not to apply e.g. `AvoidStarImport` to `src`, as in 
the old ("pre-strict") checkstyle it was already prohibited (but not applied to 
test sources). 

Do you think it would be better to split `test` and `src` completely by 
the `files` regex and apply the complete set of suppressions?




[GitHub] flink issue #4383: [hotfix] [optimizer] Normalize job plan operator formatti...

2017-07-24 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4383
  
Just to mention the idea, would removing the space introduce more changes?




[jira] [Commented] (FLINK-7202) Split supressions for flink-core, flink-java, flink-optimizer per package

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16098405#comment-16098405
 ] 

ASF GitHub Bot commented on FLINK-7202:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4384#discussion_r129034227
  
--- Diff: tools/maven/suppressions-optimizer.xml ---
@@ -56,4 +56,32 @@ under the License.

+
+   
+   
+   
+   
+   
+   
+   
--- End diff --

Are these the only required suppressions?


> Split supressions for flink-core, flink-java, flink-optimizer per package
> -
>
> Key: FLINK-7202
> URL: https://issues.apache.org/jira/browse/FLINK-7202
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Checkstyle
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>






[GitHub] flink pull request #4384: [FLINK-7202] Split supressions for flink-core, fli...

2017-07-24 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4384#discussion_r129034227
  
--- Diff: tools/maven/suppressions-optimizer.xml ---
@@ -56,4 +56,32 @@ under the License.

+
+   
+   
+   
+   
+   
+   
+   
--- End diff --

Are these the only required suppressions?




[jira] [Commented] (FLINK-7211) Exclude Gelly javadoc jar from release

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16098401#comment-16098401
 ] 

ASF GitHub Bot commented on FLINK-7211:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4352
  
@aljoscha done. Thanks for the merge and managing the release!


> Exclude Gelly javadoc jar from release
> --
>
> Key: FLINK-7211
> URL: https://issues.apache.org/jira/browse/FLINK-7211
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Trivial
> Fix For: 1.4.0, 1.3.2
>
>






[GitHub] flink issue #4352: [FLINK-7211] [build] Exclude Gelly javadoc jar from relea...

2017-07-24 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4352
  
@aljoscha done. Thanks for the merge and managing the release!




[jira] [Commented] (FLINK-7247) Replace travis java 7 profiles

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16098400#comment-16098400
 ] 

ASF GitHub Bot commented on FLINK-7247:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4388
  
ah yeah, nice catch. I will move the modified profiles to openjdk8.



> Replace travis java 7 profiles
> --
>
> Key: FLINK-7247
> URL: https://issues.apache.org/jira/browse/FLINK-7247
> Project: Flink
>  Issue Type: Sub-task
>  Components: Travis
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0
>
>






[GitHub] flink issue #4388: [FLINK-7247] [travis] Replace java 7 build profiles

2017-07-24 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4388
  
ah yeah, nice catch. I will move the modified profiles to openjdk8.





[jira] [Commented] (FLINK-7211) Exclude Gelly javadoc jar from release

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16098398#comment-16098398
 ] 

ASF GitHub Bot commented on FLINK-7211:
---

Github user greghogan closed the pull request at:

https://github.com/apache/flink/pull/4352


> Exclude Gelly javadoc jar from release
> --
>
> Key: FLINK-7211
> URL: https://issues.apache.org/jira/browse/FLINK-7211
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Trivial
> Fix For: 1.4.0, 1.3.2
>
>






[GitHub] flink pull request #4352: [FLINK-7211] [build] Exclude Gelly javadoc jar fro...

2017-07-24 Thread greghogan
Github user greghogan closed the pull request at:

https://github.com/apache/flink/pull/4352




[jira] [Commented] (FLINK-7247) Replace travis java 7 profiles

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16098393#comment-16098393
 ] 

ASF GitHub Bot commented on FLINK-7247:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4388
  
Since we're running `trusty`, it looks like we also have `openjdk8` 
available, instead of using `oraclejdk8` for all tests.


> Replace travis java 7 profiles
> --
>
> Key: FLINK-7247
> URL: https://issues.apache.org/jira/browse/FLINK-7247
> Project: Flink
>  Issue Type: Sub-task
>  Components: Travis
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0
>
>






[GitHub] flink issue #4388: [FLINK-7247] [travis] Replace java 7 build profiles

2017-07-24 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4388
  
Since we're running `trusty`, it looks like we also have `openjdk8` 
available, instead of using `oraclejdk8` for all tests.




[jira] [Commented] (FLINK-7256) End-to-end tests should only be run after successful compilation

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16098389#comment-16098389
 ] 

ASF GitHub Bot commented on FLINK-7256:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4390

[FLINK-7256] [travis] Only run end-to-end tests if no previous error …

With this PR the execution of the end-to-end tests is skipped if a previous 
error occurred.

Compilation failures in particular always lead to these tests failing, 
obscuring the actual build failure cause.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 7256

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4390.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4390


commit a449d14704d009096712a7f266a2feb0a3bbeaef
Author: zentol 
Date:   2017-07-24T13:11:30Z

[FLINK-7256] [travis] Only run end-to-end tests if no previous error 
occurred




> End-to-end tests should only be run after successful compilation
> 
>
> Key: FLINK-7256
> URL: https://issues.apache.org/jira/browse/FLINK-7256
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests, Travis
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>
> If the compilation fails (for example, due to checkstyle), the end-to-end tests 
> are currently still run, even though flink-dist most likely wasn't even built.
> Similar to FLINK-7176.





[GitHub] flink pull request #4390: [FLINK-7256] [travis] Only run end-to-end tests if...

2017-07-24 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4390

[FLINK-7256] [travis] Only run end-to-end tests if no previous error …

With this PR the execution of the end-to-end tests is skipped if a previous 
error occurred.

Compilation failures in particular always lead to these tests failing, 
obscuring the actual build failure cause.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 7256

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4390.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4390


commit a449d14704d009096712a7f266a2feb0a3bbeaef
Author: zentol 
Date:   2017-07-24T13:11:30Z

[FLINK-7256] [travis] Only run end-to-end tests if no previous error 
occurred






[jira] [Created] (FLINK-7256) End-to-end tests should only be run after successful compilation

2017-07-24 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-7256:
---

 Summary: End-to-end tests should only be run after successful 
compilation
 Key: FLINK-7256
 URL: https://issues.apache.org/jira/browse/FLINK-7256
 Project: Flink
  Issue Type: Improvement
  Components: Tests, Travis
Affects Versions: 1.4.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0


If the compilation fails (for example, due to checkstyle), the end-to-end tests 
are currently still run, even though flink-dist most likely wasn't even built.

Similar to FLINK-7176.





[jira] [Commented] (FLINK-7255) ListStateDescriptor example uses wrong constructor

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16098332#comment-16098332
 ] 

ASF GitHub Bot commented on FLINK-7255:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4389

[FLINK-7255] [docs] Remove default value from ListStateDescriptor con…

This PR corrects the docs regarding the ListStateDescriptor; it no longer 
passes a default value to the constructor.

This applies to 1.3 and master.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 7255

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4389.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4389


commit ec4dd10fb2decb86345c4d646cb9c6eeb222cb57
Author: zentol 
Date:   2017-07-24T13:04:49Z

[FLINK-7255] [docs] Remove default value from ListStateDescriptor 
constructor




> ListStateDescriptor example uses wrong constructor
> --
>
> Key: FLINK-7255
> URL: https://issues.apache.org/jira/browse/FLINK-7255
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, State Backends, Checkpointing
>Affects Versions: 1.3.1, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0, 1.3.2
>
>
> The {{Working with state}} docs contain an example for using a 
> ListStateDescriptor.
> In the example, however, a default value is passed to the constructor, which 
> is only possible for ValueStateDescriptors.





[GitHub] flink pull request #4389: [FLINK-7255] [docs] Remove default value from List...

2017-07-24 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4389

[FLINK-7255] [docs] Remove default value from ListStateDescriptor con…

This PR corrects the docs regarding the ListStateDescriptor; it no longer 
passes a default value to the constructor.

This applies to 1.3 and master.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 7255

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4389.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4389


commit ec4dd10fb2decb86345c4d646cb9c6eeb222cb57
Author: zentol 
Date:   2017-07-24T13:04:49Z

[FLINK-7255] [docs] Remove default value from ListStateDescriptor 
constructor




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-7255) ListStateDescriptor example uses wrong constructor

2017-07-24 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-7255:
---

 Summary: ListStateDescriptor example uses wrong constructor
 Key: FLINK-7255
 URL: https://issues.apache.org/jira/browse/FLINK-7255
 Project: Flink
  Issue Type: Bug
  Components: Documentation, State Backends, Checkpointing
Affects Versions: 1.3.1, 1.4.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0, 1.3.2


The {{Working with state}} docs contain an example for using a 
ListStateDescriptor.
In the example, however, a default value is passed to the constructor, which 
is only possible for ValueStateDescriptors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4369: [Flink-7218] [JobManager] ExecutionVertex.getPreferredLoc...

2017-07-24 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/4369
  
Hi @summerleafs!

I think we cannot use this approach to fix this. The most important thing 
is that this introduces a blocking operation (`.get()` on the future) in the 
call, which will make the whole `scheduleEager()` call block. Since the 
ExecutionGraph runs in an actor-style context, methods must never block. 
Everything must be implemented with future completion functions. 
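
As a rough illustration of the difference, here is a minimal sketch using plain 
`java.util.concurrent.CompletableFuture` (Flink's internal future types differ, and 
all names below are stand-ins):

{code:java}
import java.util.concurrent.CompletableFuture;

public class NonBlockingSketch {
	public static void main(String[] args) {
		// Stand-in for an asynchronous resource request, e.g. a slot allocation.
		CompletableFuture<String> slotFuture = CompletableFuture.supplyAsync(() -> "slot-1");

		// Blocking style: slotFuture.get() would stall the calling thread,
		// which must never happen in an actor-style context.

		// Non-blocking style: register a completion function instead.
		slotFuture
			.thenAccept(slot -> System.out.println("deploying to " + slot))
			.exceptionally(failure -> { failure.printStackTrace(); return null; });

		slotFuture.join(); // only to keep this demo alive; not for ExecutionGraph code
	}
}
{code}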


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7187) Activate checkstyle flink-java/sca

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098300#comment-16098300
 ] 

ASF GitHub Bot commented on FLINK-7187:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4337
  
+1 to merge.


> Activate checkstyle flink-java/sca
> --
>
> Key: FLINK-7187
> URL: https://issues.apache.org/jira/browse/FLINK-7187
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7181) Activate checkstyle flink-java/operators/*

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098301#comment-16098301
 ] 

ASF GitHub Bot commented on FLINK-7181:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4342
  
+1 to merge.


> Activate checkstyle flink-java/operators/*
> --
>
> Key: FLINK-7181
> URL: https://issues.apache.org/jira/browse/FLINK-7181
> Project: Flink
>  Issue Type: Improvement
>  Components: Java API
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4337: [FLINK-7187] Activate checkstyle flink-java/sca

2017-07-24 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4337
  
+1 to merge.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4342: [FLINK-7181] Activate checkstyle flink-java/operators/*

2017-07-24 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4342
  
+1 to merge.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (FLINK-7177) DataSetAggregateWithNullValuesRule fails creating null literal for non-nullable type

2017-07-24 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-7177.
-
   Resolution: Fixed
 Assignee: Timo Walther
Fix Version/s: 1.3.2
   1.4.0

Fixed as part of FLINK-7137.

> DataSetAggregateWithNullValuesRule fails creating null literal for 
> non-nullable type
> 
>
> Key: FLINK-7177
> URL: https://issues.apache.org/jira/browse/FLINK-7177
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.3.1
>Reporter: Rong Rong
>Assignee: Timo Walther
> Fix For: 1.4.0, 1.3.2
>
>
> For example:
> {code:title=flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/batch/sql/AggregationsITCase.scala|borderStyle=solid}
> @Test
>   def testTableAggregationWithMultipleTableAPI(): Unit = {
> val env = ExecutionEnvironment.getExecutionEnvironment
> val tEnv = TableEnvironment.getTableEnvironment(env, config)
> val inputTable = 
> CollectionDataSets.getSmallNestedTupleDataSet(env).toTable(tEnv, 'a, 'b)
> tEnv.registerDataSet("MyTable", inputTable)
> val result = tEnv.scan("MyTable")
>   .where('a.get("_1") > 0)
>   .select('a.get("_1").avg, 'a.get("_2").sum, 'b.count)
> val expected = "2,6,3"
> val results = result.toDataSet[Row].collect()
> TestBaseUtils.compareResultAsText(results.asJava, expected)
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (FLINK-7137) Flink table API defaults top level fields as nullable and all nested fields within CompositeType as non-nullable

2017-07-24 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-7137.
-
   Resolution: Fixed
Fix Version/s: 1.3.2
   1.4.0

Fixed in 1.4.0: 2d1e08a02d84f8d7cb2734e09741eae72bf63b7d, 
7aa115658b23c19fbcc8e3d1d83113608ebd7ce7, 
c0bad3b80d6fe67e43bc1a5d3bebbd98479e3d76

Fixed in 1.3.2: be8ca8a384604a2fb2bd74886f452e4a61ce9cfb, 
8c87c44692bc27fb8018adf587715a9488947799

> Flink table API defaults top level fields as nullable and all nested fields 
> within CompositeType as non-nullable
> 
>
> Key: FLINK-7137
> URL: https://issues.apache.org/jira/browse/FLINK-7137
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.3.1
>Reporter: Rong Rong
>Assignee: Rong Rong
> Fix For: 1.4.0, 1.3.2
>
>
> Right now FlinkTypeFactory does conversion from Flink TypeInformation to 
> Calcite RelDataType by assuming the following: 
> All top level fields will be set to nullable and all nested fields within 
> CompositeRelDataType and GenericRelDataType will be set to Calcite default 
> (which is non-nullable). 
> This triggers the Calcite SQL optimization engine to drop all `IS NOT NULL` clauses 
> on nested fields, and it would not be able to optimize when top level fields 
> were actually non-nullable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-5789) Make Bucketing Sink independent of Hadoop's FileSysten

2017-07-24 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098292#comment-16098292
 ] 

mingleizhang commented on FLINK-5789:
-

I would suggest we have more detailed documentation for designing this 
kind of API, like the {{truncate}} functionality. When I first saw {{truncate}}, 
I didn't know what a truncate was or where I could learn about it, so I didn't 
know what to think.

FYI, I put a link about how HDFS does it, under HDFS-3107.

[https://issues.apache.org/jira/secure/attachment/12697141/HDFS_truncate.pdf]

Peace
Minglei

> Make Bucketing Sink independent of Hadoop's FileSysten
> --
>
> Key: FLINK-5789
> URL: https://issues.apache.org/jira/browse/FLINK-5789
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming Connectors
>Affects Versions: 1.2.0, 1.1.4
>Reporter: Stephan Ewen
> Fix For: 1.4.0
>
>
> The {{BucketingSink}} is hard wired to Hadoop's FileSystem, bypassing Flink's 
> file system abstraction.
> This causes several issues:
>   - The bucketing sink will behave differently than other file sinks with 
> respect to configuration
>   - Directly supported file systems (not through Hadoop) like the MapR File 
> System do not work in the same way with the BucketingSink as other file 
> systems
>   - The previous point is all the more problematic in the effort to make 
> Hadoop an optional dependency and in other stacks (Mesos, Kubernetes, 
> AWS, GCE, Azure) with ideally no Hadoop dependency.
> We should port the {{BucketingSink}} to use Flink's FileSystem classes.
> To support the *truncate* functionality that is needed for the exactly-once 
> semantics of the Bucketing Sink, we should extend Flink's FileSystem 
> abstraction to have the methods
>   - {{boolean supportsTruncate()}}
>   - {{void truncate(Path, long)}}
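
A sketch of what that extension could look like (the two method signatures come from 
the issue text; the class name and the default behavior shown here are assumptions, 
standing in for Flink's {{FileSystem}}):

{code:java}
import java.io.IOException;

import org.apache.flink.core.fs.Path;

public abstract class FileSystemWithTruncate {

	/** Whether this file system supports truncating files (proposed addition). */
	public boolean supportsTruncate() {
		return false; // assumed default for implementations without truncate
	}

	/** Truncates the file at the given path to the given length (proposed addition). */
	public void truncate(Path path, long length) throws IOException {
		throw new UnsupportedOperationException("truncate is not supported");
	}
}
{code}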



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7213) Introduce state management by OperatorID in TaskManager

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098285#comment-16098285
 ] 

ASF GitHub Bot commented on FLINK-7213:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4353#discussion_r129020863
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/OperatorSubtaskState.java
 ---
@@ -18,20 +18,40 @@
 
 package org.apache.flink.runtime.checkpoint;
 
+import org.apache.flink.annotation.VisibleForTesting;
 import org.apache.flink.runtime.state.CompositeStateHandle;
 import org.apache.flink.runtime.state.KeyedStateHandle;
 import org.apache.flink.runtime.state.OperatorStateHandle;
 import org.apache.flink.runtime.state.SharedStateRegistry;
 import org.apache.flink.runtime.state.StateObject;
 import org.apache.flink.runtime.state.StateUtil;
 import org.apache.flink.runtime.state.StreamStateHandle;
+import org.apache.flink.util.Preconditions;
+
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import java.util.Arrays;
+import javax.annotation.Nonnull;
+import javax.annotation.Nullable;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
 
 /**
- * Container for the state of one parallel subtask of an operator. This is 
part of the {@link OperatorState}.
+ * This class encapsulates the state for one parallel instance of an 
operator. The complete state of a (logical)
+ * operator (e.g. a flatmap operator) consists of the union of all {@link 
OperatorSubtaskState}s from all
+ * parallel tasks that physically execute parallelized, physical instances 
of the operator.
+ * The full state of the logical operator is represented by {@link 
OperatorState} which consists of
+ * {@link OperatorSubtaskState}s.
+ * Typically, we expect all collections in this class to be of size 0 
or 1, because there is up to one state handle
+ * produced per state type (e.g. managed-keyed, raw-operator, ...). In 
particular, this holds when taking a snapshot.
+ * The purpose of having the state handles in collections is that this 
class is also reused in restoring state.
+ * Under normal circumstances, the expected size of each collection is 
still 0 or 1, except for scale-down. In
--- End diff --

How come we don't need this in the current master, where this class is also 
used for restoring state?


> Introduce state management by OperatorID in TaskManager
> ---
>
> Key: FLINK-7213
> URL: https://issues.apache.org/jira/browse/FLINK-7213
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.4.0
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>
> Flink-5892 introduced the job manager / checkpoint coordinator part of 
> managing state on the operator level instead of the task level by introducing 
> explicit operator_id -> state mappings. However, this explicit mapping was 
> not introduced on the task manager side, so the explicit mapping is still 
> converted into a mapping that suits the implicit operator chain order.
> We should also introduce explicit operator ids to state management on the 
> task manager.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7213) Introduce state management by OperatorID in TaskManager

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098286#comment-16098286
 ] 

ASF GitHub Bot commented on FLINK-7213:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4353#discussion_r129019373
  
--- Diff: 
flink-contrib/flink-statebackend-rocksdb/src/test/java/org/apache/flink/contrib/streaming/state/RocksDBAsyncSnapshotTest.java
 ---
@@ -164,8 +168,16 @@ public void acknowledgeCheckpoint(
throw new RuntimeException(e);
}
 
+   boolean hasKeyedManagedKeyedState = false;
--- End diff --

-> `hasManagedKeyedState`?


> Introduce state management by OperatorID in TaskManager
> ---
>
> Key: FLINK-7213
> URL: https://issues.apache.org/jira/browse/FLINK-7213
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.4.0
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>
> Flink-5892 introduced the job manager / checkpoint coordinator part of 
> managing state on the operator level instead of the task level by introducing 
> explicit operator_id -> state mappings. However, this explicit mapping was 
> not introduced on the task manager side, so the explicit mapping is still 
> converted into a mapping that suits the implicit operator chain order.
> We should also introduce explicit operator ids to state management on the 
> task manager.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7213) Introduce state management by OperatorID in TaskManager

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098284#comment-16098284
 ] 

ASF GitHub Bot commented on FLINK-7213:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4353#discussion_r129020085
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/OperatorSubtaskState.java
 ---
@@ -164,6 +269,7 @@ public long getStateSize() {
 
// ------------------------------------------------------------------------

 
+
--- End diff --

remove this empty line


> Introduce state management by OperatorID in TaskManager
> ---
>
> Key: FLINK-7213
> URL: https://issues.apache.org/jira/browse/FLINK-7213
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.4.0
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>
> Flink-5892 introduced the job manager / checkpoint coordinator part of 
> managing state on the operator level instead of the task level by introducing 
> explicit operator_id -> state mappings. However, this explicit mapping was 
> not introduced on the task manager side, so the explicit mapping is still 
> converted into a mapping that suits the implicit operator chain order.
> We should also introduce explicit operator ids to state management on the 
> task manager.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4353: [FLINK-7213] Introduce state management by Operato...

2017-07-24 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4353#discussion_r129019373
  
--- Diff: 
flink-contrib/flink-statebackend-rocksdb/src/test/java/org/apache/flink/contrib/streaming/state/RocksDBAsyncSnapshotTest.java
 ---
@@ -164,8 +168,16 @@ public void acknowledgeCheckpoint(
throw new RuntimeException(e);
}
 
+   boolean hasKeyedManagedKeyedState = false;
--- End diff --

-> `hasManagedKeyedState`?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4353: [FLINK-7213] Introduce state management by Operato...

2017-07-24 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4353#discussion_r129020085
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/OperatorSubtaskState.java
 ---
@@ -164,6 +269,7 @@ public long getStateSize() {
 
// ------------------------------------------------------------------------

 
+
--- End diff --

remove this empty line


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4353: [FLINK-7213] Introduce state management by Operato...

2017-07-24 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4353#discussion_r129020863
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/OperatorSubtaskState.java
 ---
@@ -18,20 +18,40 @@
 
 package org.apache.flink.runtime.checkpoint;
 
+import org.apache.flink.annotation.VisibleForTesting;
 import org.apache.flink.runtime.state.CompositeStateHandle;
 import org.apache.flink.runtime.state.KeyedStateHandle;
 import org.apache.flink.runtime.state.OperatorStateHandle;
 import org.apache.flink.runtime.state.SharedStateRegistry;
 import org.apache.flink.runtime.state.StateObject;
 import org.apache.flink.runtime.state.StateUtil;
 import org.apache.flink.runtime.state.StreamStateHandle;
+import org.apache.flink.util.Preconditions;
+
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import java.util.Arrays;
+import javax.annotation.Nonnull;
+import javax.annotation.Nullable;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
 
 /**
- * Container for the state of one parallel subtask of an operator. This is 
part of the {@link OperatorState}.
+ * This class encapsulates the state for one parallel instance of an 
operator. The complete state of a (logical)
+ * operator (e.g. a flatmap operator) consists of the union of all {@link 
OperatorSubtaskState}s from all
+ * parallel tasks that physically execute parallelized, physical instances 
of the operator.
+ * The full state of the logical operator is represented by {@link 
OperatorState} which consists of
+ * {@link OperatorSubtaskState}s.
+ * Typically, we expect all collections in this class to be of size 0 
or 1, because there is up to one state handle
+ * produced per state type (e.g. managed-keyed, raw-operator, ...). In 
particular, this holds when taking a snapshot.
+ * The purpose of having the state handles in collections is that this 
class is also reused in restoring state.
+ * Under normal circumstances, the expected size of each collection is 
still 0 or 1, except for scale-down. In
--- End diff --

How come we don't need this in the current master, where this class is also 
used for restoring state?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Closed] (FLINK-3704) JobManagerHAProcessFailureBatchRecoveryITCase.testJobManagerProcessFailure unstable

2017-07-24 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-3704.
---
Resolution: Not A Problem

Seems to have been fixed as a byproduct of other fixes.

> JobManagerHAProcessFailureBatchRecoveryITCase.testJobManagerProcessFailure 
> unstable
> ---
>
> Key: FLINK-3704
> URL: https://issues.apache.org/jira/browse/FLINK-3704
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Reporter: Robert Metzger
>  Labels: test-stability
>
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/120882840/log.txt
> {code}
> testJobManagerProcessFailure[1](org.apache.flink.test.recovery.JobManagerHAProcessFailureBatchRecoveryITCase)
>   Time elapsed: 9.302 sec  <<< ERROR!
> java.io.IOException: Actor at 
> akka.tcp://flink@127.0.0.1:55591/user/jobmanager not reachable. Please make 
> sure that the actor is running and its port is reachable.
>   at 
> org.apache.flink.runtime.akka.AkkaUtils$.getActorRef(AkkaUtils.scala:384)
>   at org.apache.flink.runtime.akka.AkkaUtils.getActorRef(AkkaUtils.scala)
>   at 
> org.apache.flink.test.recovery.JobManagerHAProcessFailureBatchRecoveryITCase.testJobManagerProcessFailure(JobManagerHAProcessFailureBatchRecoveryITCase.java:290)
> Caused by: akka.actor.ActorNotFound: Actor not found for: 
> ActorSelection[Anchor(akka.tcp://flink@127.0.0.1:55591/), 
> Path(/user/jobmanager)]
>   at 
> akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:65)
>   at 
> akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:63)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>   at 
> akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
>   at 
> akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:74)
>   at 
> akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:110)
>   at 
> akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:73)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:267)
>   at akka.actor.EmptyLocalActorRef.specialHandle(ActorRef.scala:508)
>   at akka.actor.DeadLetterActorRef.specialHandle(ActorRef.scala:541)
>   at akka.actor.DeadLetterActorRef.$bang(ActorRef.scala:531)
>   at 
> akka.remote.RemoteActorRefProvider$RemoteDeadLetterActorRef.$bang(RemoteActorRefProvider.scala:87)
>   at akka.remote.EndpointWriter.postStop(Endpoint.scala:561)
>   at akka.actor.Actor$class.aroundPostStop(Actor.scala:475)
>   at akka.remote.EndpointActor.aroundPostStop(Endpoint.scala:415)
>   at 
> akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:210)
>   at 
> akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:172)
>   at akka.actor.ActorCell.terminate(ActorCell.scala:369)
>   at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:462)
>   at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
>   at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:279)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7246) Big latency shown on operator.latency

2017-07-24 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-7246:

Component/s: (was: Core)
 Metrics
 DataStream API

> Big latency shown on operator.latency
> -
>
> Key: FLINK-7246
> URL: https://issues.apache.org/jira/browse/FLINK-7246
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Metrics
>Affects Versions: 1.2.1
> Environment: Local
>Reporter: yinhua.dai
>
> I was running flink 1.2.1, and I have set metrics reporter to JMX to check 
> latency of my job.
> But the result is that the latency I observed is over 100ms even though there is 
> no processing in my job.
> And then I ran the example SocketWordCount streaming job, and again I saw that the 
> latency is over 100ms. I am wondering if there is some misconfiguration 
> or problem.
> I was using start-local.bat and flink run to start up the job, all with 
> default configs.
> Thank you in advance.
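
For context, the operator latency metric is based on latency markers that sources 
emit periodically, so the observed values also depend on the configured tracking 
interval. A sketch of tuning it explicitly (the interval value is arbitrary):

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// Emit a latency marker every second instead of the default interval.
env.getConfig().setLatencyTrackingInterval(1000L);
{code}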



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7245) Enhance the operators to support holding back watermarks

2017-07-24 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098273#comment-16098273
 ] 

Aljoscha Krettek commented on FLINK-7245:
-

This is an interesting new feature and I've actually thought for a long time 
that we should add something like this. Thanks for caring about this!

Before diving into work, could you please produce a design document or share 
how you want to go about implementing this?

> Enhance the operators to support holding back watermarks
> 
>
> Key: FLINK-7245
> URL: https://issues.apache.org/jira/browse/FLINK-7245
> Project: Flink
>  Issue Type: New Feature
>  Components: DataStream API
>Reporter: Xingcan Cui
>Assignee: Xingcan Cui
>
> Currently the watermarks are applied and emitted by the 
> {{AbstractStreamOperator}} instantly. 
> {code:java}
> public void processWatermark(Watermark mark) throws Exception {
>   if (timeServiceManager != null) {
>   timeServiceManager.advanceWatermark(mark);
>   }
>   output.emitWatermark(mark);
> }
> {code}
> Some calculation results (with timestamp fields) triggered by these 
> watermarks (e.g., join or aggregate results) may be regarded as delayed by 
> the downstream operators since their timestamps must be less than or equal to 
> the corresponding triggers. 
> This issue aims to add another "working mode", which supports holding back 
> watermarks, to current operators. These watermarks should be blocked and 
> stored by the operators until all the corresponding newly generated results are 
> emitted.
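
One possible shape of such a holding-back mode, purely as an illustration (the flag 
and field names are hypothetical; the actual design is what the requested design 
document should define):

{code:java}
public void processWatermark(Watermark mark) throws Exception {
	if (timeServiceManager != null) {
		timeServiceManager.advanceWatermark(mark); // may trigger new results
	}
	if (holdBackWatermarks) {
		// Hypothetical: park the watermark and release it only after all
		// results it triggered have been emitted downstream.
		pendingWatermark = mark;
	} else {
		output.emitWatermark(mark);
	}
}
{code}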



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7202) Split supressions for flink-core, flink-java, flink-optimizer per package

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098264#comment-16098264
 ] 

ASF GitHub Bot commented on FLINK-7202:
---

Github user dawidwys commented on the issue:

https://github.com/apache/flink/pull/4384
  
Yes, unfortunately in the poms for those modules, the checkstyle was 
disabled for test sources. That is why I forgot about the tests. I enabled them and 
will prepare a fixed version of the suppression files.


> Split supressions for flink-core, flink-java, flink-optimizer per package
> -
>
> Key: FLINK-7202
> URL: https://issues.apache.org/jira/browse/FLINK-7202
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Checkstyle
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4384: [FLINK-7202] Split supressions for flink-core, flink-java...

2017-07-24 Thread dawidwys
Github user dawidwys commented on the issue:

https://github.com/apache/flink/pull/4384
  
Yes, unfortunately in the poms for those modules, the checkstyle was 
disabled for test sources. That is why I forgot about the tests. I enabled them and 
will prepare a fixed version of the suppression files.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4384: [FLINK-7202] Split supressions for flink-core, flink-java...

2017-07-24 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4384
  
hmm...but the build is still passing...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7202) Split supressions for flink-core, flink-java, flink-optimizer per package

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098261#comment-16098261
 ] 

ASF GitHub Bot commented on FLINK-7202:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4384
  
hmm...but the build is still passing...


> Split supressions for flink-core, flink-java, flink-optimizer per package
> -
>
> Key: FLINK-7202
> URL: https://issues.apache.org/jira/browse/FLINK-7202
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Checkstyle
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7137) Flink table API defaults top level fields as nullable and all nested fields within CompositeType as non-nullable

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098260#comment-16098260
 ] 

ASF GitHub Bot commented on FLINK-7137:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4314


> Flink table API defaults top level fields as nullable and all nested fields 
> within CompositeType as non-nullable
> 
>
> Key: FLINK-7137
> URL: https://issues.apache.org/jira/browse/FLINK-7137
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.3.1
>Reporter: Rong Rong
>Assignee: Rong Rong
>
> Right now FlinkTypeFactory does conversion from Flink TypeInformation to 
> Calcite RelDataType by assuming the following: 
> All top level fields will be set to nullable and all nested fields within 
> CompositeRelDataType and GenericRelDataType will be set to Calcite default 
> (which is non-nullable). 
> This triggers the Calcite SQL optimization engine to drop all `IS NOT NULL` clauses 
> on nested fields, and it would not be able to optimize when top level fields 
> were actually non-nullable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4314: [FLINK-7137] [table] Flink TableAPI supports neste...

2017-07-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4314


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-7254) java8 module pom disables checkstyle

2017-07-24 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-7254:
---

 Summary: java8 module pom disables checkstyle
 Key: FLINK-7254
 URL: https://issues.apache.org/jira/browse/FLINK-7254
 Project: Flink
  Issue Type: Bug
  Components: Checkstyle
Affects Versions: 1.4.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.4.0


The java8 pom file contains this:
{code}

<skip>true</skip>

{code}

Thus the checkstyle is not actually enforced.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7202) Split supressions for flink-core, flink-java, flink-optimizer per package

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098243#comment-16098243
 ] 

ASF GitHub Bot commented on FLINK-7202:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4384#discussion_r129015415
  
--- Diff: tools/maven/suppressions-core.xml ---
@@ -24,6 +24,63 @@ under the License.
 
 

+	[<suppress files="..." checks="..."/> entries stripped by the mail archive]
 
--- End diff --

You missed the `testutils` package.


> Split supressions for flink-core, flink-java, flink-optimizer per package
> -
>
> Key: FLINK-7202
> URL: https://issues.apache.org/jira/browse/FLINK-7202
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Checkstyle
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7202) Split supressions for flink-core, flink-java, flink-optimizer per package

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098245#comment-16098245
 ] 

ASF GitHub Bot commented on FLINK-7202:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4384#discussion_r129016101
  
--- Diff: tools/maven/suppressions-optimizer.xml ---
@@ -24,6 +24,36 @@ under the License.
 
 

+	[<suppress files="..." checks="..."/> entries stripped by the mail archive]
 
--- End diff --

missing test packages:
* custompartition
* dataexchange
* java
* programs
* testfunctions


> Split supressions for flink-core, flink-java, flink-optimizer per package
> -
>
> Key: FLINK-7202
> URL: https://issues.apache.org/jira/browse/FLINK-7202
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Checkstyle
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7202) Split supressions for flink-core, flink-java, flink-optimizer per package

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098248#comment-16098248
 ] 

ASF GitHub Bot commented on FLINK-7202:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4384
  
It appears you're missing suppressions for a number of test packages.


> Split supressions for flink-core, flink-java, flink-optimizer per package
> -
>
> Key: FLINK-7202
> URL: https://issues.apache.org/jira/browse/FLINK-7202
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Checkstyle
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7202) Split supressions for flink-core, flink-java, flink-optimizer per package

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098244#comment-16098244
 ] 

ASF GitHub Bot commented on FLINK-7202:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4384#discussion_r129015817
  
--- Diff: tools/maven/suppressions-java.xml ---
@@ -24,6 +24,27 @@ under the License.
 
 

+	[<suppress files="..." checks="..."/> entries stripped by the mail archive]
 
--- End diff --

i think you missed the "java.operator" and "java.tuple" packages.


> Split supressions for flink-core, flink-java, flink-optimizer per package
> -
>
> Key: FLINK-7202
> URL: https://issues.apache.org/jira/browse/FLINK-7202
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Checkstyle
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7202) Split supressions for flink-core, flink-java, flink-optimizer per package

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098246#comment-16098246
 ] 

ASF GitHub Bot commented on FLINK-7202:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4384#discussion_r129015090
  
--- Diff: tools/maven/suppressions-core.xml ---
@@ -24,6 +24,63 @@ under the License.
 
 

+	[<suppress files="..." checks="..."/> entries stripped by the mail archive]

> Split supressions for flink-core, flink-java, flink-optimizer per package
> -
>
> Key: FLINK-7202
> URL: https://issues.apache.org/jira/browse/FLINK-7202
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Checkstyle
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4384: [FLINK-7202] Split supressions for flink-core, fli...

2017-07-24 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4384#discussion_r129015817
  
--- Diff: tools/maven/suppressions-java.xml ---
@@ -24,6 +24,27 @@ under the License.
 
 

+	[<suppress files="..." checks="..."/> entries stripped by the mail archive]
 
--- End diff --

i think you missed the "java.operator" and "java.tuple" packages.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4384: [FLINK-7202] Split supressions for flink-core, flink-java...

2017-07-24 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4384
  
It appears you're missing suppressions for a number of test packages.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4384: [FLINK-7202] Split supressions for flink-core, fli...

2017-07-24 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4384#discussion_r129016101
  
--- Diff: tools/maven/suppressions-optimizer.xml ---
@@ -24,6 +24,36 @@ under the License.
 
 

+	[<suppress files="..." checks="..."/> entries stripped by the mail archive]
 
--- End diff --

missing test packages:
* custompartition
* dataexchange
* java
* programs
* testfunctions


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4384: [FLINK-7202] Split supressions for flink-core, fli...

2017-07-24 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4384#discussion_r129015090
  
--- Diff: tools/maven/suppressions-core.xml ---
@@ -24,6 +24,63 @@ under the License.
 
 

+	[<suppress files="..." checks="..."/> entries stripped by the mail archive]

[GitHub] flink pull request #4384: [FLINK-7202] Split supressions for flink-core, fli...

2017-07-24 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4384#discussion_r129015415
  
--- Diff: tools/maven/suppressions-core.xml ---
@@ -24,6 +24,63 @@ under the License.
 
 

+	[<suppress files="..." checks="..."/> entries stripped by the mail archive]
 
--- End diff --

You missed the `testutils` package.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Assigned] (FLINK-7253) Remove all 'assume Java 8' code in tests

2017-07-24 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-7253:
---

Assignee: Chesnay Schepler

> Remove all 'assume Java 8' code in tests
> 
>
> Key: FLINK-7253
> URL: https://issues.apache.org/jira/browse/FLINK-7253
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Stephan Ewen
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7251) Merge the flink-java8 project into flink-core

2017-07-24 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098225#comment-16098225
 ] 

Chesnay Schepler commented on FLINK-7251:
-

The JIRA title doesn't seem accurate. The {{java8}} module only contains 
examples and tests.

The examples should (naturally) be merged into the flink-example submodules.
The tests should be divided across {{flink-java}} (LambdaExtractionTest), 
{{flink-cep}} (CEPLambdaTest),  {{flink-runtime}} (JarFileCreatorLambdaTest) 
and probably {{flink-tests}} (*ITCase).

> Merge the flink-java8 project into flink-core
> -
>
> Key: FLINK-7251
> URL: https://issues.apache.org/jira/browse/FLINK-7251
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Stephan Ewen
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7249) Bump Java version in build plugin

2017-07-24 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098219#comment-16098219
 ] 

Chesnay Schepler commented on FLINK-7249:
-

[~StephanEwen] Is this about the {{java.version}} property that the 
{{maven-compiler-plugin}} is using?

> Bump Java version in build plugin
> -
>
> Key: FLINK-7249
> URL: https://issues.apache.org/jira/browse/FLINK-7249
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Stephan Ewen
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-7250) Drop the jdk8 build profile

2017-07-24 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-7250:
---

Assignee: Chesnay Schepler

> Drop the jdk8 build profile
> ---
>
> Key: FLINK-7250
> URL: https://issues.apache.org/jira/browse/FLINK-7250
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Stephan Ewen
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7174) Bump dependency of Kafka 0.10.x to the latest one

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098200#comment-16098200
 ] 

ASF GitHub Bot commented on FLINK-7174:
---

Github user tzulitai closed the pull request at:

https://github.com/apache/flink/pull/4386


> Bump dependency of Kafka 0.10.x to the latest one
> -
>
> Key: FLINK-7174
> URL: https://issues.apache.org/jira/browse/FLINK-7174
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector
>Affects Versions: 1.2.1, 1.3.1, 1.4.0
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
> Fix For: 1.4.0, 1.3.2
>
>
> We are using a pretty old Kafka version for 0.10. Besides any bug fixes and 
> improvements that were made between 0.10.0.1 and 0.10.2.1, the 0.10.2.1 
> version is more similar to 0.11.0.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4386: (backport-1.3) [FLINK-7174] [kafka] Bump Kafka 0.1...

2017-07-24 Thread tzulitai
Github user tzulitai closed the pull request at:

https://github.com/apache/flink/pull/4386


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (FLINK-7174) Bump dependency of Kafka 0.10.x to the latest one

2017-07-24 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-7174:
---
Release Note: The default Kafka version for Flink Kafka Consumer 0.10 is 
bumped from 0.10.0.1 to 0.10.2.1.  (was: The default Kafka version for Flink 
Kafka Consumer 0.10 is bumped to 0.10.2.1.)

> Bump dependency of Kafka 0.10.x to the latest one
> -
>
> Key: FLINK-7174
> URL: https://issues.apache.org/jira/browse/FLINK-7174
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector
>Affects Versions: 1.2.1, 1.3.1, 1.4.0
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
> Fix For: 1.4.0, 1.3.2
>
>
> We are using a pretty old Kafka version for 0.10. Besides any bug fixes and 
> improvements that were made between 0.10.0.1 and 0.10.2.1, the 0.10.2.1 
> version is more similar to 0.11.0.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (FLINK-7174) Bump dependency of Kafka 0.10.x to the latest one

2017-07-24 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai resolved FLINK-7174.

  Resolution: Fixed
Release Note: The default Kafka version for Flink Kafka Consumer 0.10 is 
bumped to 0.10.2.1.

> Bump dependency of Kafka 0.10.x to the latest one
> -
>
> Key: FLINK-7174
> URL: https://issues.apache.org/jira/browse/FLINK-7174
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector
>Affects Versions: 1.2.1, 1.3.1, 1.4.0
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
> Fix For: 1.4.0, 1.3.2
>
>
> We are using a pretty old Kafka version for 0.10. Besides any bug fixes and 
> improvements that were made between 0.10.0.1 and 0.10.2.1, the 0.10.2.1 
> version is more similar to 0.11.0.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7174) Bump dependency of Kafka 0.10.x to the latest one

2017-07-24 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098197#comment-16098197
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-7174:


Merged for {{release-1.3}} via 6abd40299040ca646e7e94313dd1e0d25a4c8d82.
Closing this now, thanks a lot for the contribution [~pnowojski]!

> Bump dependency of Kafka 0.10.x to the latest one
> -
>
> Key: FLINK-7174
> URL: https://issues.apache.org/jira/browse/FLINK-7174
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector
>Affects Versions: 1.2.1, 1.3.1, 1.4.0
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
> Fix For: 1.4.0, 1.3.2
>
>
> We are using a pretty old Kafka version for 0.10. Besides any bug fixes and 
> improvements that were made between 0.10.0.1 and 0.10.2.1, the 0.10.2.1 
> version is more similar to 0.11.0.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-6365) Adapt default values of the Kinesis connector

2017-07-24 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-6365:
---
Affects Version/s: (was: 1.2.0)
   1.4.0
   1.2.1
   1.3.1

> Adapt default values of the Kinesis connector
> -
>
> Key: FLINK-6365
> URL: https://issues.apache.org/jira/browse/FLINK-6365
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.2.1, 1.3.1, 1.4.0
>Reporter: Steffen Hausmann
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.4.0, 1.3.2
>
>
> As discussed in 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Kinesis-connector-SHARD-GETRECORDS-MAX-default-value-td12332.html,
>  it seems reasonable to change the default values of the Kinesis connector to 
> follow KCL’s default settings. I suggest to adapt at least the values for 
> SHARD_GETRECORDS_MAX and SHARD_GETRECORDS_INTERVAL_MILLIS. 
> As a Kinesis shard is currently limited to 5 get operations per second, you 
> can observe high ReadProvisionedThroughputExceeded rates with the current 
> default value for SHARD_GETRECORDS_INTERVAL_MILLIS of 0; it seems reasonable 
> to increase it to 200. As it's described in the email thread, it seems 
> furthermore desirable to increase the default value for SHARD_GETRECORDS_MAX 
> to 1.
> The values that are used by the KCL can be found here: 
> https://github.com/awslabs/amazon-kinesis-client/blob/master/src/main/java/com/amazonaws/services/kinesis/clientlibrary/lib/worker/KinesisClientLibConfiguration.java
> Thanks for looking into this!
> Steffen
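
For users who want KCL-aligned values before the defaults change, both settings can 
already be set explicitly on the consumer. A sketch against the Kinesis connector 
config (the 200 ms interval matches the suggestion above; the getrecords maximum of 
10000 is an assumed KCL-style value, since the exact number is garbled in this 
archive):

{code:java}
import java.util.Properties;

import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

Properties config = new Properties();
config.setProperty(ConsumerConfigConstants.AWS_REGION, "us-east-1");
// Throttle getRecords calls to respect the 5-calls-per-second shard limit.
config.setProperty(ConsumerConfigConstants.SHARD_GETRECORDS_INTERVAL_MILLIS, "200");
// Fetch larger batches per call; 10000 is an assumed KCL-style value.
config.setProperty(ConsumerConfigConstants.SHARD_GETRECORDS_MAX, "10000");

FlinkKinesisConsumer<String> consumer =
	new FlinkKinesisConsumer<>("myStream", new SimpleStringSchema(), config);
{code}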



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-6365) Adapt default values of the Kinesis connector

2017-07-24 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-6365.
--
Resolution: Fixed

> Adapt default values of the Kinesis connector
> -
>
> Key: FLINK-6365
> URL: https://issues.apache.org/jira/browse/FLINK-6365
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.2.1, 1.3.1, 1.4.0
>Reporter: Steffen Hausmann
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.4.0, 1.3.2
>
>
> As discussed in 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Kinesis-connector-SHARD-GETRECORDS-MAX-default-value-td12332.html,
>  it seems reasonable to change the default values of the Kinesis connector to 
> follow KCL’s default settings. I suggest to adapt at least the values for 
> SHARD_GETRECORDS_MAX and SHARD_GETRECORDS_INTERVAL_MILLIS. 
> As a Kinesis shard is currently limited to 5 get operations per second, you 
> can observe high ReadProvisionedThroughputExceeded rates with the current 
> default value for SHARD_GETRECORDS_INTERVAL_MILLIS of 0; it seems reasonable 
> to increase it to 200. As it's described in the email thread, it seems 
> furthermore desirable to increase the default value for SHARD_GETRECORDS_MAX 
> to 1.
> The values that are used by the KCL can be found here: 
> https://github.com/awslabs/amazon-kinesis-client/blob/master/src/main/java/com/amazonaws/services/kinesis/clientlibrary/lib/worker/KinesisClientLibConfiguration.java
> Thanks for looking into this!
> Steffen



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (FLINK-6365) Adapt default values of the Kinesis connector

2017-07-24 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reopened FLINK-6365:


> Adapt default values of the Kinesis connector
> -
>
> Key: FLINK-6365
> URL: https://issues.apache.org/jira/browse/FLINK-6365
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.2.0
>Reporter: Steffen Hausmann
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.4.0, 1.3.2
>
>
> As discussed in 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Kinesis-connector-SHARD-GETRECORDS-MAX-default-value-td12332.html,
>  it seems reasonable to change the default values of the Kinesis connector to 
> follow KCL’s default settings. I suggest to adapt at least the values for 
> SHARD_GETRECORDS_MAX and SHARD_GETRECORDS_INTERVAL_MILLIS. 
> As a Kinesis shard is currently limited to 5 get operations per second, you 
> can observe high ReadProvisionedThroughputExceeded rates with the current 
> default value for SHARD_GETRECORDS_INTERVAL_MILLIS of 0; it seems reasonable 
> to increase it to 200. As it's described in the email thread, it seems 
> furthermore desirable to increase the default value for SHARD_GETRECORDS_MAX 
> to 1.
> The values that are used by the KCL can be found here: 
> https://github.com/awslabs/amazon-kinesis-client/blob/master/src/main/java/com/amazonaws/services/kinesis/clientlibrary/lib/worker/KinesisClientLibConfiguration.java
> Thanks for looking into this!
> Steffen



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-6365) Adapt default values of the Kinesis connector

2017-07-24 Thread Tzu-Li (Gordon) Tai (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-6365:
---
Fix Version/s: 1.3.2

> Adapt default values of the Kinesis connector
> -
>
> Key: FLINK-6365
> URL: https://issues.apache.org/jira/browse/FLINK-6365
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Affects Versions: 1.2.0
>Reporter: Steffen Hausmann
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.4.0, 1.3.2
>
>
> As discussed in 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Kinesis-connector-SHARD-GETRECORDS-MAX-default-value-td12332.html,
>  it seems reasonable to change the default values of the Kinesis connector to 
> follow KCL’s default settings. I suggest to adapt at least the values for 
> SHARD_GETRECORDS_MAX and SHARD_GETRECORDS_INTERVAL_MILLIS. 
> As a Kinesis shard is currently limited to 5 get operations per second, you 
> can observe high ReadProvisionedThroughputExceeded rates with the current 
> default value for SHARD_GETRECORDS_INTERVAL_MILLIS of 0; it seems reasonable 
> to increase it to 200. As it's described in the email thread, it seems 
> furthermore desirable to increase the default value for SHARD_GETRECORDS_MAX 
> to 1.
> The values that are used by the KCL can be found here: 
> https://github.com/awslabs/amazon-kinesis-client/blob/master/src/main/java/com/amazonaws/services/kinesis/clientlibrary/lib/worker/KinesisClientLibConfiguration.java
> Thanks for looking into this!
> Steffen



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7247) Replace travis java 7 profiles

2017-07-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16098193#comment-16098193
 ] 

ASF GitHub Bot commented on FLINK-7247:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4388

[FLINK-7247] [travis] Replace java 7 build profiles

This PR bumps the java 7 build profiles on travis to java 8.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 7247

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4388.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4388


commit 0de94696777b31855a94d33723579d06135ddc4b
Author: zentol 
Date:   2017-07-24T10:34:17Z

[FLINK-7247] [travis] Replace java 7 build profiles




> Replace travis java 7 profiles
> --
>
> Key: FLINK-7247
> URL: https://issues.apache.org/jira/browse/FLINK-7247
> Project: Flink
>  Issue Type: Sub-task
>  Components: Travis
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

