[GitHub] flink pull request #3015: [FLINK-5349] Fix typos in Twitter connector code s...

2016-12-15 Thread mushketyk
GitHub user mushketyk opened a pull request:

https://github.com/apache/flink/pull/3015

[FLINK-5349] Fix typos in Twitter connector code sample

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [x] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [x] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mushketyk/flink fix-twitter-docs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3015.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3015


commit 3df00623dce88d5d04f76142daa69e81caceafd7
Author: Ivan Mushketyk 
Date:   2016-12-16T07:56:46Z

[FLINK-5349] Fix typos in Twitter connector code sample




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5349) Fix code sample for Twitter connector

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753754#comment-15753754
 ] 

ASF GitHub Bot commented on FLINK-5349:
---

GitHub user mushketyk opened a pull request:

https://github.com/apache/flink/pull/3015

[FLINK-5349] Fix typos in Twitter connector code sample

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [x] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [x] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mushketyk/flink fix-twitter-docs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3015.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3015


commit 3df00623dce88d5d04f76142daa69e81caceafd7
Author: Ivan Mushketyk 
Date:   2016-12-16T07:56:46Z

[FLINK-5349] Fix typos in Twitter connector code sample




> Fix code sample for Twitter connector
> -
>
> Key: FLINK-5349
> URL: https://issues.apache.org/jira/browse/FLINK-5349
> Project: Flink
>  Issue Type: Bug
>Reporter: Ivan Mushketyk
>Assignee: Ivan Mushketyk
>
> There is a typo in the code sample for the Twitter connector.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5349) Fix code sample for Twitter connector

2016-12-15 Thread Ivan Mushketyk (JIRA)
Ivan Mushketyk created FLINK-5349:
-

 Summary: Fix code sample for Twitter connector
 Key: FLINK-5349
 URL: https://issues.apache.org/jira/browse/FLINK-5349
 Project: Flink
  Issue Type: Bug
Reporter: Ivan Mushketyk
Assignee: Ivan Mushketyk


There is a typo in the code sample for the Twitter connector.





[jira] [Commented] (FLINK-5344) docs don't build in dockerized jekyll; -p option is broken

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753700#comment-15753700
 ] 

ASF GitHub Bot commented on FLINK-5344:
---

Github user alpinegizmo closed the pull request at:

https://github.com/apache/flink/pull/3013


> docs don't build in dockerized jekyll; -p option is broken
> --
>
> Key: FLINK-5344
> URL: https://issues.apache.org/jira/browse/FLINK-5344
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: David Anderson
>Assignee: David Anderson
>Priority: Minor
>
> The recent Gemfile update doesn't work with the ruby in the provided 
> dockerized jekyll environment. 
> Also the changes to the build_docs script broke the -p option.





[GitHub] flink issue #3013: [FLINK-5344] relax spec for requested ruby version so the...

2016-12-15 Thread alpinegizmo
Github user alpinegizmo commented on the issue:

https://github.com/apache/flink/pull/3013
  
I've concluded this was the wrong approach. I'll open another pull request 
to deal with these issues.




[jira] [Commented] (FLINK-5344) docs don't build in dockerized jekyll; -p option is broken

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753699#comment-15753699
 ] 

ASF GitHub Bot commented on FLINK-5344:
---

Github user alpinegizmo commented on the issue:

https://github.com/apache/flink/pull/3013
  
I've concluded this was the wrong approach. I'll open another pull request 
to deal with these issues.


> docs don't build in dockerized jekyll; -p option is broken
> --
>
> Key: FLINK-5344
> URL: https://issues.apache.org/jira/browse/FLINK-5344
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: David Anderson
>Assignee: David Anderson
>Priority: Minor
>
> The recent Gemfile update doesn't work with the ruby in the provided 
> dockerized jekyll environment. 
> Also the changes to the build_docs script broke the -p option.





[GitHub] flink pull request #3013: [FLINK-5344] relax spec for requested ruby version...

2016-12-15 Thread alpinegizmo
Github user alpinegizmo closed the pull request at:

https://github.com/apache/flink/pull/3013




[jira] [Assigned] (FLINK-4255) Unstable test WebRuntimeMonitorITCase.testNoEscape

2016-12-15 Thread Boris Osipov (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boris Osipov reassigned FLINK-4255:
---

Assignee: Boris Osipov

> Unstable test WebRuntimeMonitorITCase.testNoEscape
> --
>
> Key: FLINK-4255
> URL: https://issues.apache.org/jira/browse/FLINK-4255
> Project: Flink
>  Issue Type: Bug
>Reporter: Kostas Kloudas
>Assignee: Boris Osipov
>  Labels: test-stability
>
> An instance of the problem can be found here:
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/146615994/log.txt





[jira] [Assigned] (FLINK-3746) WebRuntimeMonitorITCase.testNoCopyFromJar failing intermittently

2016-12-15 Thread Boris Osipov (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boris Osipov reassigned FLINK-3746:
---

Assignee: Boris Osipov

> WebRuntimeMonitorITCase.testNoCopyFromJar failing intermittently
> 
>
> Key: FLINK-3746
> URL: https://issues.apache.org/jira/browse/FLINK-3746
> Project: Flink
>  Issue Type: Bug
>Reporter: Todd Lisonbee
>Assignee: Boris Osipov
>Priority: Minor
>  Labels: flaky-test
>
> Test failed randomly in Travis,
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/122624299/log.txt
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 13.127 sec 
> <<< FAILURE! - in org.apache.flink.runtime.webmonitor.WebRuntimeMonitorITCase
> testNoCopyFromJar(org.apache.flink.runtime.webmonitor.WebRuntimeMonitorITCase)
>   Time elapsed: 0.124 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<200 OK> but was:<503 Service Unavailable>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.flink.runtime.webmonitor.WebRuntimeMonitorITCase.testNoCopyFromJar(WebRuntimeMonitorITCase.java:456)
> Results :
> Failed tests: 
>   WebRuntimeMonitorITCase.testNoCopyFromJar:456 expected:<200 OK> but 
> was:<503 Service Unavailable>





[jira] [Commented] (FLINK-5280) Extend TableSource to support nested data

2016-12-15 Thread Jark Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753315#comment-15753315
 ] 

Jark Wu commented on FLINK-5280:


Row and RowTypeInfo have been moved to flink-core, so I would suggest doing 
this in a separate issue. I created FLINK-5348 to fix it.

> Extend TableSource to support nested data
> -
>
> Key: FLINK-5280
> URL: https://issues.apache.org/jira/browse/FLINK-5280
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Assignee: Ivan Mushketyk
>
> The {{TableSource}} interface currently only supports the definition of 
> flat rows. 
> However, there are several storage formats for nested data that should be 
> supported, such as Avro, JSON, Parquet, and ORC. The Table API and SQL can 
> also natively handle nested rows.
> The {{TableSource}} interface and the code to register table sources in 
> Calcite's schema need to be extended to support nested data.





[jira] [Commented] (FLINK-5342) Setting the parallelism automatically for operators based on a cost model

2016-12-15 Thread godfrey he (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753260#comment-15753260
 ] 

godfrey he commented on FLINK-5342:
---

I take your point; there is a lot of work to do if we want to achieve that goal.

> Setting the parallelism automatically for operators based on a cost model
> --
>
> Key: FLINK-5342
> URL: https://issues.apache.org/jira/browse/FLINK-5342
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: godfrey he
>
> On the Flink Table API, a query is translated into operators without 
> parallelism, and users neither know nor care which operators a query is 
> translated into. So it would be better to set the parallelism automatically 
> for each operator based on a cost model.





[jira] [Commented] (FLINK-5324) JVM Options will work for both YarnApplicationMasterRunner and YarnTaskManager with yarn mode

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753235#comment-15753235
 ] 

ASF GitHub Bot commented on FLINK-5324:
---

GitHub user hzyuemeng1 reopened a pull request:

https://github.com/apache/flink/pull/2994

[FLINK-5324] [yarn] JVM Options will work for both YarnApplicationMas…

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed

…terRunner and YarnTaskManager with yarn mode

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hzyuemeng1/flink FLINK-5324

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2994.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2994


commit 0a11b26adfac91663023aa1c5e26e2ff60d44e15
Author: hzyuemeng1 
Date:   2016-12-13T08:13:20Z

[FLINK-5324] [yarn] JVM Options will work for both 
YarnApplicationMasterRunner and YarnTaskManager with yarn mode




> JVM Options will work for both YarnApplicationMasterRunner and 
> YarnTaskManager with yarn mode
> 
>
> Key: FLINK-5324
> URL: https://issues.apache.org/jira/browse/FLINK-5324
> Project: Flink
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 1.1.3
>Reporter: yuemeng
>Priority: Critical
> Attachments: 
> 0001-FLINK-5324-yarn-JVM-Opitons-will-work-for-both-YarnA.patch
>
>
> YarnApplicationMasterRunner and YarnTaskManager both use the following code 
> to get the JVM options
> {code}
> final String javaOpts = 
> flinkConfiguration.getString(ConfigConstants.FLINK_JVM_OPTIONS, "")
> {code}
> so when we add JVM options for one of them, they are applied to both.
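A minimal sketch of one possible fix, assuming hypothetical role-specific configuration keys that fall back to the shared key (the two role-specific key names below are assumptions for illustration, not Flink's API at the time):

{code}
import org.apache.flink.configuration.ConfigConstants;
import org.apache.flink.configuration.Configuration;

class JvmOptionsSketch {
    // Hypothetical: read role-specific JVM options, falling back to the
    // shared "env.java.opts" key. The two role-specific key names are
    // assumptions for illustration only.
    static String jvmOptionsFor(Configuration conf, boolean isJobManager) {
        String shared = conf.getString(ConfigConstants.FLINK_JVM_OPTIONS, "");
        String key = isJobManager
            ? "env.java.opts.jobmanager"
            : "env.java.opts.taskmanager";
        return conf.getString(key, shared);
    }
}
{code}

With separate keys, options added for the JobManager would no longer leak into the TaskManager's JVM, and vice versa.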





[GitHub] flink pull request #2994: [FLINK-5324] [yarn] JVM Options will work for both...

2016-12-15 Thread hzyuemeng1
GitHub user hzyuemeng1 reopened a pull request:

https://github.com/apache/flink/pull/2994

[FLINK-5324] [yarn] JVM Options will work for both YarnApplicationMas…

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed

…terRunner and YarnTaskManager with yarn mode

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hzyuemeng1/flink FLINK-5324

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2994.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2994


commit 0a11b26adfac91663023aa1c5e26e2ff60d44e15
Author: hzyuemeng1 
Date:   2016-12-13T08:13:20Z

[FLINK-5324] [yarn] JVM Options will work for both 
YarnApplicationMasterRunner and YarnTaskManager with yarn mode






[jira] [Created] (FLINK-5348) Support custom field names for RowTypeInfo

2016-12-15 Thread Jark Wu (JIRA)
Jark Wu created FLINK-5348:
--

 Summary: Support custom field names for RowTypeInfo
 Key: FLINK-5348
 URL: https://issues.apache.org/jira/browse/FLINK-5348
 Project: Flink
  Issue Type: Improvement
  Components: Core
Reporter: Jark Wu
Assignee: Jark Wu


Currently, the RowTypeInfo doesn't support optional custom field names; it is 
forced to generate {{f0}} ~ {{fn}} as field names. It would be better to 
support custom names, which would benefit some cases (e.g. FLINK-5280).
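A minimal sketch of how the proposed API might look, assuming a new RowTypeInfo constructor that accepts field names alongside the field types (the constructor shown is the proposal, not the API that existed at the time):

{code}
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.RowTypeInfo;

class RowTypeInfoSketch {
    static RowTypeInfo withCustomNames() {
        // Proposed: pass custom names instead of the generated f0, f1, ...
        return new RowTypeInfo(
            new TypeInformation<?>[] {
                BasicTypeInfo.STRING_TYPE_INFO,
                BasicTypeInfo.INT_TYPE_INFO },
            new String[] { "name", "age" });
    }
}
{code}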





[jira] [Commented] (FLINK-4821) Implement rescalable non-partitioned state for Kinesis Connector

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15753091#comment-15753091
 ] 

ASF GitHub Bot commented on FLINK-4821:
---

Github user tony810430 commented on the issue:

https://github.com/apache/flink/pull/3001
  
OK. Thanks for pointing out this problem.


> Implement rescalable non-partitioned state for Kinesis Connector
> 
>
> Key: FLINK-4821
> URL: https://issues.apache.org/jira/browse/FLINK-4821
> Project: Flink
>  Issue Type: New Feature
>  Components: Kinesis Connector
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Wei-Che Wei
> Fix For: 1.2.0
>
>
> FLINK-4379 added the rescalable non-partitioned state feature, along with the 
> implementation for the Kafka connector.
> The AWS Kinesis connector will benefit from the feature and should implement 
> it too. This ticket tracks progress for this.





[GitHub] flink issue #3001: [FLINK-4821] [kinesis] Implement rescalable non-partition...

2016-12-15 Thread tony810430
Github user tony810430 commented on the issue:

https://github.com/apache/flink/pull/3001
  
OK. Thanks for pointing out this problem.




[jira] [Commented] (FLINK-5255) Improve single row check in DataSetSingleRowJoinRule

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752834#comment-15752834
 ] 

ASF GitHub Bot commented on FLINK-5255:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3009
  
Thanks for the PR @AlexanderShoshin!
It's good to merge.


> Improve single row check in DataSetSingleRowJoinRule
> 
>
> Key: FLINK-5255
> URL: https://issues.apache.org/jira/browse/FLINK-5255
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Assignee: Alexander Shoshin
>
> {{DataSetSingleRowJoinRule}} converts an arbitrary inner join (cross, 
> theta, equi) where one input has exactly one row into a broadcast-map join.
> Currently, the condition to check for the single row is that the input of the 
> join must be a global aggregation. The check fails if the input is a 
> {{LogicalCalc}} followed by a {{LogicalAggregate}}.
> Hence, the following query cannot be executed:
> {code}
> SELECT absum, x.a
> FROM x, (SELECT a.sum + b.sum AS absum FROM y)
> {code}
> The single row check should be extended to accept a {{LogicalCalc}} that has 
> no condition {{(RexProgram.getCondition() == null)}}.
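A minimal sketch of the extended check, assuming Calcite's LogicalAggregate and LogicalCalc node types; the helper name isSingleRow is illustrative, not the rule's actual code:

{code}
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.logical.LogicalAggregate;
import org.apache.calcite.rel.logical.LogicalCalc;

class SingleRowCheckSketch {
    // True if the input is guaranteed to produce exactly one row: either a
    // global aggregation (no grouping keys), or a condition-free LogicalCalc
    // on top of an input that itself produces a single row.
    static boolean isSingleRow(RelNode node) {
        if (node instanceof LogicalAggregate) {
            return ((LogicalAggregate) node).getGroupSet().isEmpty();
        }
        if (node instanceof LogicalCalc) {
            LogicalCalc calc = (LogicalCalc) node;
            // a Calc without a filter condition preserves the row count
            return calc.getProgram().getCondition() == null
                && isSingleRow(calc.getInput());
        }
        return false;
    }
}
{code}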





[GitHub] flink issue #3009: [FLINK-5255] Improve single row check in DataSetSingleRow...

2016-12-15 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3009
  
Thanks for the PR @AlexanderShoshin!
It's good to merge.




[jira] [Resolved] (FLINK-5347) Unclosed stream in OperatorBackendSerializationProxy#write()

2016-12-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved FLINK-5347.
---
Resolution: Not A Problem

> Unclosed stream in OperatorBackendSerializationProxy#write()
> 
>
> Key: FLINK-5347
> URL: https://issues.apache.org/jira/browse/FLINK-5347
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> public void write(DataOutputView out) throws IOException {
>   out.writeUTF(getName());
>   DataOutputViewStream dos = new DataOutputViewStream(out);
>   InstantiationUtil.serializeObject(dos, getStateSerializer());
> }
> {code}
> dos should be closed upon return from the method.





[jira] [Commented] (FLINK-5347) Unclosed stream in OperatorBackendSerializationProxy#write()

2016-12-15 Thread Stefan Richter (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752765#comment-15752765
 ] 

Stefan Richter commented on FLINK-5347:
---

{{dos}} is just a wrapper around {{out}}. {{out}} does not expose a close 
method, and therefore closing the wrapper has no effect. There is no difference 
between closing and not closing it.
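A minimal sketch of why closing the wrapper is a no-op, assuming DataOutputViewStream simply forwards writes and inherits OutputStream's empty close() (the class below is a simplified stand-in, not Flink's actual implementation):

{code}
import java.io.IOException;
import java.io.OutputStream;

// Simplified stand-in for a wrapper like DataOutputViewStream: it adapts
// a target to the OutputStream interface but owns no resource of its own.
class ForwardingStream extends OutputStream {
    private final OutputStream target; // stands in for the DataOutputView

    ForwardingStream(OutputStream target) {
        this.target = target;
    }

    @Override
    public void write(int b) throws IOException {
        target.write(b); // every write is forwarded directly
    }

    // close() is inherited from OutputStream, where it does nothing, so
    // closing the wrapper neither flushes nor closes the underlying target.
}
{code}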

> Unclosed stream in OperatorBackendSerializationProxy#write()
> 
>
> Key: FLINK-5347
> URL: https://issues.apache.org/jira/browse/FLINK-5347
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> public void write(DataOutputView out) throws IOException {
>   out.writeUTF(getName());
>   DataOutputViewStream dos = new DataOutputViewStream(out);
>   InstantiationUtil.serializeObject(dos, getStateSerializer());
> }
> {code}
> dos should be closed upon return from the method.





[jira] [Commented] (FLINK-5347) Unclosed stream in OperatorBackendSerializationProxy#write()

2016-12-15 Thread Stefan Richter (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752754#comment-15752754
 ] 

Stefan Richter commented on FLINK-5347:
---

The stream is not opened in the method and therefore should also not be closed 
here. Consecutive writes to the stream should be possible and the caller is 
responsible for closing.

> Unclosed stream in OperatorBackendSerializationProxy#write()
> 
>
> Key: FLINK-5347
> URL: https://issues.apache.org/jira/browse/FLINK-5347
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> public void write(DataOutputView out) throws IOException {
>   out.writeUTF(getName());
>   DataOutputViewStream dos = new DataOutputViewStream(out);
>   InstantiationUtil.serializeObject(dos, getStateSerializer());
> }
> {code}
> dos should be closed upon return from the method.





[jira] [Issue Comment Deleted] (FLINK-5347) Unclosed stream in OperatorBackendSerializationProxy#write()

2016-12-15 Thread Stefan Richter (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Richter updated FLINK-5347:
--
Comment: was deleted

(was: The stream is not opened in the method and therefore should also not be 
closed here. Consecutive writes to the stream should be possible and the caller 
is responsible for closing.)

> Unclosed stream in OperatorBackendSerializationProxy#write()
> 
>
> Key: FLINK-5347
> URL: https://issues.apache.org/jira/browse/FLINK-5347
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
> public void write(DataOutputView out) throws IOException {
>   out.writeUTF(getName());
>   DataOutputViewStream dos = new DataOutputViewStream(out);
>   InstantiationUtil.serializeObject(dos, getStateSerializer());
> }
> {code}
> dos should be closed upon return from the method.





[jira] [Commented] (FLINK-5343) Add more option to CsvTableSink

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752714#comment-15752714
 ] 

ASF GitHub Bot commented on FLINK-5343:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/3011
  
Thanks @fhueske. Looks good to merge.


> Add more option to CsvTableSink
> ---
>
> Key: FLINK-5343
> URL: https://issues.apache.org/jira/browse/FLINK-5343
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.2.0, 1.1.3
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
> Fix For: 1.2.0
>
>
> The {{CsvTableSink}} currently offers only very few configuration options.
> We should add optional parameters to configure:
> - the overwrite behavior
> - the number of files to write.
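A minimal sketch of how the extended sink might be configured, assuming hypothetical numFiles and writeMode constructor parameters (the constructor shape is illustrative, not the final API):

{code}
import org.apache.flink.core.fs.FileSystem.WriteMode;
import org.apache.flink.table.sinks.CsvTableSink;

class CsvTableSinkSketch {
    static CsvTableSink create() {
        // Hypothetical extended constructor: path, field delimiter,
        // number of output files, and overwrite behavior.
        return new CsvTableSink(
            "/tmp/output.csv",     // output path
            ",",                   // field delimiter
            1,                     // numFiles: write a single file
            WriteMode.OVERWRITE);  // overwrite existing files
    }
}
{code}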





[GitHub] flink issue #3011: [FLINK-5343] [table] Add support to overwrite files with ...

2016-12-15 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/3011
  
Thanks @fhueske. Looks good to merge.




[jira] [Commented] (FLINK-4861) Package optional project artifacts

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752708#comment-15752708
 ] 

ASF GitHub Bot commented on FLINK-4861:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3000
  
New PR to only include metrics, cep, ml, and gelly: #3014


> Package optional project artifacts
> --
>
> Key: FLINK-4861
> URL: https://issues.apache.org/jira/browse/FLINK-4861
> Project: Flink
>  Issue Type: New Feature
>  Components: Build System
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.2.0
>
>
> Per the mailing list 
> [discussion|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Additional-project-downloads-td13223.html],
>  package the Flink libraries and connectors into subdirectories of a new 
> {{opt}} directory in the release/snapshot tarballs.





[GitHub] flink issue #3000: [FLINK-4861] [build] Package optional project artifacts

2016-12-15 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3000
  
New PR to only include metrics, cep, ml, and gelly: #3014




[jira] [Commented] (FLINK-4861) Package optional project artifacts

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752706#comment-15752706
 ] 

ASF GitHub Bot commented on FLINK-4861:
---

GitHub user greghogan opened a pull request:

https://github.com/apache/flink/pull/3014

[FLINK-4861] [build] Package optional project artifacts

Package the Flink connectors, metrics, and libraries into subdirectories of 
a new opt directory in the release/snapshot tarballs.

The following artifacts are packaged by this build:

$ ls build-target/opt/
flink-cep_2.10-1.2-SNAPSHOT.jar
flink-cep-scala_2.10-1.2-SNAPSHOT.jar
flink-gelly_2.10-1.2-SNAPSHOT.jar
flink-gelly-examples_2.10-1.2-SNAPSHOT.jar
flink-gelly-scala_2.10-1.2-SNAPSHOT.jar
flink-metrics-dropwizard-1.2-SNAPSHOT.jar
flink-metrics-ganglia-1.2-SNAPSHOT.jar
flink-metrics-graphite-1.2-SNAPSHOT.jar
flink-metrics-statsd-1.2-SNAPSHOT.jar
flink-ml_2.10-1.2-SNAPSHOT.jar

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greghogan/flink 4861c_package_optional_project_artifacts

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3014.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3014


commit 980635fb6d8d15660f97d85f6709c68999416b99
Author: Greg Hogan 
Date:   2016-12-15T20:49:07Z

[FLINK-4861] [build] Package optional project artifacts

Package the Flink connectors, metrics, and libraries into subdirectories
of a new opt directory in the release/snapshot tarballs.




> Package optional project artifacts
> --
>
> Key: FLINK-4861
> URL: https://issues.apache.org/jira/browse/FLINK-4861
> Project: Flink
>  Issue Type: New Feature
>  Components: Build System
>Affects Versions: 1.2.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 1.2.0
>
>
> Per the mailing list 
> [discussion|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Additional-project-downloads-td13223.html],
>  package the Flink libraries and connectors into subdirectories of a new 
> {{opt}} directory in the release/snapshot tarballs.





[GitHub] flink pull request #3014: [FLINK-4861] [build] Package optional project arti...

2016-12-15 Thread greghogan
GitHub user greghogan opened a pull request:

https://github.com/apache/flink/pull/3014

[FLINK-4861] [build] Package optional project artifacts

Package the Flink connectors, metrics, and libraries into subdirectories of 
a new opt directory in the release/snapshot tarballs.

The following artifacts are packaged by this build:

$ ls build-target/opt/
flink-cep_2.10-1.2-SNAPSHOT.jar
flink-cep-scala_2.10-1.2-SNAPSHOT.jar
flink-gelly_2.10-1.2-SNAPSHOT.jar
flink-gelly-examples_2.10-1.2-SNAPSHOT.jar
flink-gelly-scala_2.10-1.2-SNAPSHOT.jar
flink-metrics-dropwizard-1.2-SNAPSHOT.jar
flink-metrics-ganglia-1.2-SNAPSHOT.jar
flink-metrics-graphite-1.2-SNAPSHOT.jar
flink-metrics-statsd-1.2-SNAPSHOT.jar
flink-ml_2.10-1.2-SNAPSHOT.jar

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greghogan/flink 4861c_package_optional_project_artifacts

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3014.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3014


commit 980635fb6d8d15660f97d85f6709c68999416b99
Author: Greg Hogan 
Date:   2016-12-15T20:49:07Z

[FLINK-4861] [build] Package optional project artifacts

Package the Flink connectors, metrics, and libraries into subdirectories
of a new opt directory in the release/snapshot tarballs.






[GitHub] flink issue #2984: [FLINK-5311] Add user documentation for bipartite graph

2016-12-15 Thread mushketyk
Github user mushketyk commented on the issue:

https://github.com/apache/flink/pull/2984
  
Hi @vasia, thank you for your review!

I've added the warning to the Bipartite Graph page.
I am a bit reluctant to rename "Graph transformations", though, since I 
want to extend this section in future commits once we have more 
BipartiteGraph transformation operations, e.g. map.

What do you think about this?




[jira] [Commented] (FLINK-5311) Write user documentation for BipartiteGraph

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752618#comment-15752618
 ] 

ASF GitHub Bot commented on FLINK-5311:
---

Github user mushketyk commented on the issue:

https://github.com/apache/flink/pull/2984
  
Hi @vasia, thank you for your review!

I've added the warning to the Bipartite Graph page.
I am a bit reluctant to rename "Graph transformations", though, since I 
want to extend this section in future commits once we have more 
BipartiteGraph transformation operations, e.g. map.

What do you think about this?


> Write user documentation for BipartiteGraph
> ---
>
> Key: FLINK-5311
> URL: https://issues.apache.org/jira/browse/FLINK-5311
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Reporter: Ivan Mushketyk
>Assignee: Ivan Mushketyk
>
> We need to add user documentation. The progress on BipartiteGraph can be 
> tracked in the following JIRA:
> https://issues.apache.org/jira/browse/FLINK-2254





[jira] [Commented] (FLINK-5344) docs don't build in dockerized jekyll; -p option is broken

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752384#comment-15752384
 ] 

ASF GitHub Bot commented on FLINK-5344:
---

Github user alpinegizmo commented on the issue:

https://github.com/apache/flink/pull/3013
  
Max, this latest commit allows one to take advantage of ruby 2.0 if it is 
available, but is backwards compatible with 1.9. Hopefully this will satisfy 
everyone. The only issue I can see is that if one runs the build_docs script 
with ruby 2.x, Gemfile.lock will be updated with newer versions, which 
probably should not be checked in.


> docs don't build in dockerized jekyll; -p option is broken
> --
>
> Key: FLINK-5344
> URL: https://issues.apache.org/jira/browse/FLINK-5344
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: David Anderson
>Assignee: David Anderson
>Priority: Minor
>
> The recent Gemfile update doesn't work with the ruby in the provided 
> dockerized jekyll environment. 
> Also the changes to the build_docs script broke the -p option.





[GitHub] flink issue #3013: [FLINK-5344] relax spec for requested ruby version so the...

2016-12-15 Thread alpinegizmo
Github user alpinegizmo commented on the issue:

https://github.com/apache/flink/pull/3013
  
Max, this latest commit allows one to take advantage of ruby 2.0 if it is 
available, but is backwards compatible with 1.9. Hopefully this will satisfy 
everyone. The only issue I can see is that if one runs the build_docs script 
with ruby 2.x, Gemfile.lock will be updated with newer versions, which 
probably should not be checked in.




[jira] [Commented] (FLINK-5344) docs don't build in dockerized jekyll; -p option is broken

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752377#comment-15752377
 ] 

ASF GitHub Bot commented on FLINK-5344:
---

Github user alpinegizmo commented on a diff in the pull request:

https://github.com/apache/flink/pull/3013#discussion_r92689908
  
--- Diff: docs/build_docs.sh ---
@@ -46,8 +46,8 @@ DOCS_DST=${DOCS_SRC}/content
 JEKYLL_CMD="build"
 
 # if -p flag is provided, serve site on localhost
-# -i is like -p, but incremental (which has some issues, but is very fast)
-while getopts ":p:i" opt; do
+# -i is like -p, but incremental (only rebuilds the modified file)
+while getopts "pi" opt; do
--- End diff --

Yes, this fixes the -p argument.


> docs don't build in dockerized jekyll; -p option is broken
> --
>
> Key: FLINK-5344
> URL: https://issues.apache.org/jira/browse/FLINK-5344
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: David Anderson
>Assignee: David Anderson
>Priority: Minor
>
> The recent Gemfile update doesn't work with the ruby in the provided 
> dockerized jekyll environment. 
> Also the changes to the build_docs script broke the -p option.





[GitHub] flink pull request #3013: [FLINK-5344] relax spec for requested ruby version...

2016-12-15 Thread alpinegizmo
Github user alpinegizmo commented on a diff in the pull request:

https://github.com/apache/flink/pull/3013#discussion_r92689908
  
--- Diff: docs/build_docs.sh ---
@@ -46,8 +46,8 @@ DOCS_DST=${DOCS_SRC}/content
 JEKYLL_CMD="build"
 
 # if -p flag is provided, serve site on localhost
-# -i is like -p, but incremental (which has some issues, but is very fast)
-while getopts ":p:i" opt; do
+# -i is like -p, but incremental (only rebuilds the modified file)
+while getopts "pi" opt; do
--- End diff --

Yes, this fixes the -p argument.




[jira] [Created] (FLINK-5347) Unclosed stream in OperatorBackendSerializationProxy#write()

2016-12-15 Thread Ted Yu (JIRA)
Ted Yu created FLINK-5347:
-

 Summary: Unclosed stream in 
OperatorBackendSerializationProxy#write()
 Key: FLINK-5347
 URL: https://issues.apache.org/jira/browse/FLINK-5347
 Project: Flink
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


{code}
public void write(DataOutputView out) throws IOException {
  out.writeUTF(getName());
  DataOutputViewStream dos = new DataOutputViewStream(out);
  InstantiationUtil.serializeObject(dos, getStateSerializer());
}
{code}
dos should be closed upon return from the method.





[GitHub] flink issue #2917: [FLINK-2821] use custom Akka build to listen on all inter...

2016-12-15 Thread mxm
Github user mxm commented on the issue:

https://github.com/apache/flink/pull/2917
  
I've added the new staging repository to test the PR changes. Also, the 
repository is currently being deployed to Maven Central.




[jira] [Commented] (FLINK-2821) Change Akka configuration to allow accessing actors from different URLs

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752145#comment-15752145
 ] 

ASF GitHub Bot commented on FLINK-2821:
---

Github user mxm commented on the issue:

https://github.com/apache/flink/pull/2917
  
I've added the new staging repository to test the PR changes. Also, the 
repository is currently being deployed to Maven Central.


> Change Akka configuration to allow accessing actors from different URLs
> ---
>
> Key: FLINK-2821
> URL: https://issues.apache.org/jira/browse/FLINK-2821
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Reporter: Robert Metzger
>Assignee: Maximilian Michels
>
> Akka expects the actor's URL to be exactly matching.
> As pointed out here, cases where users were complaining about this: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Error-trying-to-access-JM-through-proxy-td3018.html
>   - Proxy routing (as described here, send to the proxy URL, receiver 
> recognizes only original URL)
>   - Using hostname / IP interchangeably does not work (we solved this by 
> always putting IP addresses into URLs, never hostnames)
>   - Binding to multiple interfaces (any local 0.0.0.0) does not work. Still 
> no solution to that (but seems not too much of a restriction)
> I am aware that this is not possible due to Akka, so it is actually not a 
> Flink bug. But I think we should track the resolution of the issue here 
> anyway because it's affecting our users' satisfaction.





[jira] [Commented] (FLINK-5097) The TypeExtractor is missing input type information in some Graph methods

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15752120#comment-15752120
 ] 

ASF GitHub Bot commented on FLINK-5097:
---

Github user vasia commented on the issue:

https://github.com/apache/flink/pull/2842
  
Done. I'll wait for travis, then merge.


> The TypeExtractor is missing input type information in some Graph methods
> -
>
> Key: FLINK-5097
> URL: https://issues.apache.org/jira/browse/FLINK-5097
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Reporter: Vasia Kalavri
>Assignee: Vasia Kalavri
>
> The TypeExtractor is called without information about the input type in 
> {{mapVertices}} and {{mapEdges}} although this information can be easily 
> retrieved.
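A minimal sketch of the kind of fix described, assuming the TypeExtractor overload that accepts the input type (the helper around it is illustrative):

{code}
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.TypeExtractor;

class MapReturnTypeSketch {
    // Passing the input type lets the extractor resolve generic parameters
    // that would otherwise be unavailable, e.g. for generic mappers.
    static <IN, OUT> TypeInformation<OUT> returnType(
            MapFunction<IN, OUT> mapper, TypeInformation<IN> inputType) {
        return TypeExtractor.getMapReturnTypes(mapper, inputType);
    }
}
{code}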





[GitHub] flink issue #2842: [FLINK-5097][gelly] Add missing input type information to...

2016-12-15 Thread vasia
Github user vasia commented on the issue:

https://github.com/apache/flink/pull/2842
  
Done. I'll wait for travis, then merge.




[GitHub] flink issue #2917: [FLINK-2821] use custom Akka build to listen on all inter...

2016-12-15 Thread mxm
Github user mxm commented on the issue:

https://github.com/apache/flink/pull/2917
  
Thank you @tillrohrmann and @StephanEwen. I've addressed your comments. 
I'll have to redeploy Flakka because the staging repository this PR used 
has been dropped in the meantime. I will update the PR tomorrow to use the 
Maven Central servers.




[jira] [Commented] (FLINK-2821) Change Akka configuration to allow accessing actors from different URLs

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751990#comment-15751990
 ] 

ASF GitHub Bot commented on FLINK-2821:
---

Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2917#discussion_r92661183
  
--- Diff: flink-runtime/pom.xml ---
@@ -193,8 +193,8 @@ under the License.



-   <groupId>com.typesafe.akka</groupId>
-   <artifactId>akka-testkit_${scala.binary.version}</artifactId>
+   <groupId>com.data-artisans</groupId>
+   <artifactId>flakka-testkit_${scala.binary.version}</artifactId>
--- End diff --

This probably needs to be changed independently of this PR.


> Change Akka configuration to allow accessing actors from different URLs
> ---
>
> Key: FLINK-2821
> URL: https://issues.apache.org/jira/browse/FLINK-2821
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Reporter: Robert Metzger
>Assignee: Maximilian Michels
>
> Akka expects the actor's URL to be exactly matching.
> As pointed out here, cases where users were complaining about this: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Error-trying-to-access-JM-through-proxy-td3018.html
>   - Proxy routing (as described here, send to the proxy URL, receiver 
> recognizes only original URL)
>   - Using hostname / IP interchangeably does not work (we solved this by 
> always putting IP addresses into URLs, never hostnames)
>   - Binding to multiple interfaces (any local 0.0.0.0) does not work. Still 
> no solution to that (but seems not too much of a restriction)
> I am aware that this is not possible due to Akka, so it is actually not a 
> Flink bug. But I think we should track the resolution of the issue here 
> anyway because it's affecting our users' satisfaction.





[GitHub] flink pull request #2917: [FLINK-2821] use custom Akka build to listen on al...

2016-12-15 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2917#discussion_r92661183
  
--- Diff: flink-runtime/pom.xml ---
@@ -193,8 +193,8 @@ under the License.



-   <groupId>com.typesafe.akka</groupId>
-   <artifactId>akka-testkit_${scala.binary.version}</artifactId>
+   <groupId>com.data-artisans</groupId>
+   <artifactId>flakka-testkit_${scala.binary.version}</artifactId>
--- End diff --

This probably needs to be changed independently of this PR.




[jira] [Commented] (FLINK-2821) Change Akka configuration to allow accessing actors from different URLs

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751991#comment-15751991
 ] 

ASF GitHub Bot commented on FLINK-2821:
---

Github user mxm commented on the issue:

https://github.com/apache/flink/pull/2917
  
Thank you @tillrohrmann and @StephanEwen. I've addressed your comments. 
I'll have to redeploy Flakka because the staging repository this PR used 
has been dropped in the meantime. I will update the PR tomorrow to use the 
Maven Central servers.


> Change Akka configuration to allow accessing actors from different URLs
> ---
>
> Key: FLINK-2821
> URL: https://issues.apache.org/jira/browse/FLINK-2821
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Reporter: Robert Metzger
>Assignee: Maximilian Michels
>
> Akka expects the actor's URL to be exactly matching.
> As pointed out here, cases where users were complaining about this: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Error-trying-to-access-JM-through-proxy-td3018.html
>   - Proxy routing (as described here, send to the proxy URL, receiver 
> recognizes only original URL)
>   - Using hostname / IP interchangeably does not work (we solved this by 
> always putting IP addresses into URLs, never hostnames)
>   - Binding to multiple interfaces (any local 0.0.0.0) does not work. Still 
> no solution to that (but seems not too much of a restriction)
> I am aware that this is not possible due to Akka, so it is actually not a 
> Flink bug. But I think we should track the resolution of the issue here 
> anyway because it's affecting our users' satisfaction.





[jira] [Commented] (FLINK-2821) Change Akka configuration to allow accessing actors from different URLs

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751968#comment-15751968
 ] 

ASF GitHub Bot commented on FLINK-2821:
---

Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2917#discussion_r92659443
  
--- Diff: flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala ---
@@ -216,12 +219,19 @@ object AkkaUtils {
    * identified by hostname.
    *
    * @param configuration instance containing the user provided configuration values
-   * @param hostname of the network interface to listen on
+   * @param hostname of the network interface to bind on
    * @param port to bind to or if 0 then Akka picks a free port automatically
+   * @param externalHostname The host name to expect for Akka messages
+   * @param externalPort The port to expect for Akka messages
    * @return Flink's Akka configuration for remote actor systems
    */
   private def getRemoteAkkaConfig(configuration: Configuration,
-      hostname: String, port: Int): Config = {
+      hostname: String, port: Int,
+      externalHostname: String, externalPort: Int): Config = {
+
+    LOG.info(s"Using binding address $hostname:$port" +
--- End diff --

That's right. Removing this statement because this is also logged at 
JobManager startup.


> Change Akka configuration to allow accessing actors from different URLs
> ---
>
> Key: FLINK-2821
> URL: https://issues.apache.org/jira/browse/FLINK-2821
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Reporter: Robert Metzger
>Assignee: Maximilian Michels
>
> Akka expects the actor's URL to be exactly matching.
> As pointed out here, cases where users were complaining about this: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Error-trying-to-access-JM-through-proxy-td3018.html
>   - Proxy routing (as described here, send to the proxy URL, receiver 
> recognizes only original URL)
>   - Using hostname / IP interchangeably does not work (we solved this by 
> always putting IP addresses into URLs, never hostnames)
>   - Binding to multiple interfaces (any local 0.0.0.0) does not work. Still 
> no solution to that (but seems not too much of a restriction)
> I am aware that this is not possible due to Akka, so it is actually not a 
> Flink bug. But I think we should track the resolution of the issue here 
> anyway because it's affecting our users' satisfaction.





[GitHub] flink pull request #2917: [FLINK-2821] use custom Akka build to listen on al...

2016-12-15 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2917#discussion_r92659443
  
--- Diff: flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala ---
@@ -216,12 +219,19 @@ object AkkaUtils {
    * identified by hostname.
    *
    * @param configuration instance containing the user provided configuration values
-   * @param hostname of the network interface to listen on
+   * @param hostname of the network interface to bind on
    * @param port to bind to or if 0 then Akka picks a free port automatically
+   * @param externalHostname The host name to expect for Akka messages
+   * @param externalPort The port to expect for Akka messages
    * @return Flink's Akka configuration for remote actor systems
    */
   private def getRemoteAkkaConfig(configuration: Configuration,
-      hostname: String, port: Int): Config = {
+      hostname: String, port: Int,
+      externalHostname: String, externalPort: Int): Config = {
+
+    LOG.info(s"Using binding address $hostname:$port" +
--- End diff --

That's right. Removing this statement because this is also logged at 
JobManager startup.




[jira] [Commented] (FLINK-2821) Change Akka configuration to allow accessing actors from different URLs

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751957#comment-15751957
 ] 

ASF GitHub Bot commented on FLINK-2821:
---

Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2917#discussion_r92658704
  
--- Diff: flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala ---
@@ -216,12 +219,19 @@ object AkkaUtils {
    * identified by hostname.
    *
    * @param configuration instance containing the user provided configuration values
-   * @param hostname of the network interface to listen on
+   * @param hostname of the network interface to bind on
    * @param port to bind to or if 0 then Akka picks a free port automatically
+   * @param externalHostname The host name to expect for Akka messages
+   * @param externalPort The port to expect for Akka messages
    * @return Flink's Akka configuration for remote actor systems
    */
   private def getRemoteAkkaConfig(configuration: Configuration,
-      hostname: String, port: Int): Config = {
+      hostname: String, port: Int,
--- End diff --

+1


> Change Akka configuration to allow accessing actors from different URLs
> ---
>
> Key: FLINK-2821
> URL: https://issues.apache.org/jira/browse/FLINK-2821
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Reporter: Robert Metzger
>Assignee: Maximilian Michels
>
> Akka expects the actor's URL to be exactly matching.
> As pointed out here, cases where users were complaining about this: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Error-trying-to-access-JM-through-proxy-td3018.html
>   - Proxy routing (as described here, send to the proxy URL, receiver 
> recognizes only original URL)
>   - Using hostname / IP interchangeably does not work (we solved this by 
> always putting IP addresses into URLs, never hostnames)
>   - Binding to multiple interfaces (any local 0.0.0.0) does not work. Still 
> no solution to that (but seems not too much of a restriction)
> I am aware that this is not possible due to Akka, so it is actually not a 
> Flink bug. But I think we should track the resolution of the issue here 
> anyway because it's affecting our users' satisfaction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2917: [FLINK-2821] use custom Akka build to listen on al...

2016-12-15 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2917#discussion_r92658704
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala ---
@@ -216,12 +219,19 @@ object AkkaUtils {
* identified by hostname.
*
* @param configuration instance containing the user provided 
configuration values
-   * @param hostname of the network interface to listen on
+   * @param hostname of the network interface to bind on
* @param port to bind to or if 0 then Akka picks a free port 
automatically
+   * @param externalHostname The host name to expect for Akka messages
+   * @param externalPort The port to expect for Akka messages
* @return Flink's Akka configuration for remote actor systems
*/
   private def getRemoteAkkaConfig(configuration: Configuration,
-  hostname: String, port: Int): Config = {
+  hostname: String, port: Int,
--- End diff --

+1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2821) Change Akka configuration to allow accessing actors from different URLs

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751950#comment-15751950
 ] 

ASF GitHub Bot commented on FLINK-2821:
---

Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2917#discussion_r92658254
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala ---
@@ -102,21 +102,24 @@ object AkkaUtils {
* specified, then the actor system will listen on the respective 
address.
*
* @param configuration instance containing the user provided 
configuration values
-   * @param listeningAddress optional tuple of hostname and port to listen 
on. If None is given,
-   * then an Akka config for local actor system 
will be returned
+   * @param externalAddress optional tuple of hostname and port to be 
reachable at.
+   *If None is given, then an Akka config for 
local actor system
+   *will be returned
* @return Akka config
*/
   @throws(classOf[UnknownHostException])
   def getAkkaConfig(configuration: Configuration,
-listeningAddress: Option[(String, Int)]): Config = {
+externalAddress: Option[(String, Int)]): Config = {
 val defaultConfig = getBasicAkkaConfig(configuration)
 
-listeningAddress match {
+externalAddress match {
 
   case Some((hostname, port)) =>
-val ipAddress = InetAddress.getByName(hostname)
-val hostString = "\"" + NetUtils.ipAddressToUrlString(ipAddress) + 
"\""
-val remoteConfig = getRemoteAkkaConfig(configuration, hostString, 
port)
+
+val remoteConfig = getRemoteAkkaConfig(configuration,
+  NetUtils.getWildcardIPAddress, port,
--- End diff --

+1


> Change Akka configuration to allow accessing actors from different URLs
> ---
>
> Key: FLINK-2821
> URL: https://issues.apache.org/jira/browse/FLINK-2821
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Reporter: Robert Metzger
>Assignee: Maximilian Michels
>
> Akka expects the actor's URL to be exactly matching.
> As pointed out here, cases where users were complaining about this: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Error-trying-to-access-JM-through-proxy-td3018.html
>   - Proxy routing (as described here, send to the proxy URL, receiver 
> recognizes only original URL)
>   - Using hostname / IP interchangeably does not work (we solved this by 
> always putting IP addresses into URLs, never hostnames)
>   - Binding to multiple interfaces (any local 0.0.0.0) does not work. Still 
> no solution to that (but seems not too much of a restriction)
> I am aware that this is not possible due to Akka, so it is actually not a 
> Flink bug. But I think we should track the resolution of the issue here 
> anyway because it's affecting our users' satisfaction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2917: [FLINK-2821] use custom Akka build to listen on al...

2016-12-15 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/2917#discussion_r92658254
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala ---
@@ -102,21 +102,24 @@ object AkkaUtils {
* specified, then the actor system will listen on the respective 
address.
*
* @param configuration instance containing the user provided 
configuration values
-   * @param listeningAddress optional tuple of hostname and port to listen 
on. If None is given,
-   * then an Akka config for local actor system 
will be returned
+   * @param externalAddress optional tuple of hostname and port to be 
reachable at.
+   *If None is given, then an Akka config for 
local actor system
+   *will be returned
* @return Akka config
*/
   @throws(classOf[UnknownHostException])
   def getAkkaConfig(configuration: Configuration,
-listeningAddress: Option[(String, Int)]): Config = {
+externalAddress: Option[(String, Int)]): Config = {
 val defaultConfig = getBasicAkkaConfig(configuration)
 
-listeningAddress match {
+externalAddress match {
 
   case Some((hostname, port)) =>
-val ipAddress = InetAddress.getByName(hostname)
-val hostString = "\"" + NetUtils.ipAddressToUrlString(ipAddress) + 
"\""
-val remoteConfig = getRemoteAkkaConfig(configuration, hostString, 
port)
+
+val remoteConfig = getRemoteAkkaConfig(configuration,
+  NetUtils.getWildcardIPAddress, port,
--- End diff --

+1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5344) docs don't build in dockerized jekyll; -p option is broken

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751908#comment-15751908
 ] 

ASF GitHub Bot commented on FLINK-5344:
---

Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/3013#discussion_r92654203
  
--- Diff: docs/Gemfile ---
@@ -17,13 +17,14 @@
 

 
 source 'https://rubygems.org'
-ruby '~> 2.3.0'
+ruby '~> 2'
--- End diff --

When we upgraded this from `~> 1.9`, the nightly Buildbot stopped working: 
https://ci.apache.org/builders/flink-docs-master/builds/557/steps/Flink%20docs/logs/stdio
 


> docs don't build in dockerized jekyll; -p option is broken
> --
>
> Key: FLINK-5344
> URL: https://issues.apache.org/jira/browse/FLINK-5344
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: David Anderson
>Assignee: David Anderson
>Priority: Minor
>
> The recent Gemfile update doesn't work with the ruby in the provided 
> dockerized jekyll environment. 
> Also the changes to the build_docs script broke the -p option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5344) docs don't build in dockerized jekyll; -p option is broken

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751909#comment-15751909
 ] 

ASF GitHub Bot commented on FLINK-5344:
---

Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/3013#discussion_r92653981
  
--- Diff: docs/build_docs.sh ---
@@ -46,8 +46,8 @@ DOCS_DST=${DOCS_SRC}/content
 JEKYLL_CMD="build"
 
 # if -p flag is provided, serve site on localhost
-# -i is like -p, but incremental (which has some issues, but is very fast)
-while getopts ":p:i" opt; do
+# -i is like -p, but incremental (only rebuilds the modified file)
+while getopts "pi" opt; do
--- End diff --

Does this fix the `-p` argument? On master, only the `-i` argument is working.


> docs don't build in dockerized jekyll; -p option is broken
> --
>
> Key: FLINK-5344
> URL: https://issues.apache.org/jira/browse/FLINK-5344
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: David Anderson
>Assignee: David Anderson
>Priority: Minor
>
> The recent Gemfile update doesn't work with the ruby in the provided 
> dockerized jekyll environment. 
> Also the changes to the build_docs script broke the -p option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #3013: [FLINK-5344] relax spec for requested ruby version...

2016-12-15 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/3013#discussion_r92653981
  
--- Diff: docs/build_docs.sh ---
@@ -46,8 +46,8 @@ DOCS_DST=${DOCS_SRC}/content
 JEKYLL_CMD="build"
 
 # if -p flag is provided, serve site on localhost
-# -i is like -p, but incremental (which has some issues, but is very fast)
-while getopts ":p:i" opt; do
+# -i is like -p, but incremental (only rebuilds the modified file)
+while getopts "pi" opt; do
--- End diff --

Does this fix the `-p` argument? On master, only the `-i` argument is working.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3013: [FLINK-5344] relax spec for requested ruby version...

2016-12-15 Thread mxm
Github user mxm commented on a diff in the pull request:

https://github.com/apache/flink/pull/3013#discussion_r92654203
  
--- Diff: docs/Gemfile ---
@@ -17,13 +17,14 @@
 

 
 source 'https://rubygems.org'
-ruby '~> 2.3.0'
+ruby '~> 2'
--- End diff --

When we upgraded this from `~> 1.9`, the nightly Buildbot stopped working: 
https://ci.apache.org/builders/flink-docs-master/builds/557/steps/Flink%20docs/logs/stdio
 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3001: [FLINK-4821] [kinesis] Implement rescalable non-partition...

2016-12-15 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/3001
  
Thanks for the info Stefan! @tony810430 we'll probably need to block this 
PR for now, and refresh it once the unioned state interface comes around.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4821) Implement rescalable non-partitioned state for Kinesis Connector

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751901#comment-15751901
 ] 

ASF GitHub Bot commented on FLINK-4821:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/3001
  
Thanks for the info Stefan! @tony810430 we'll probably need to block this 
PR for now, and refresh it once the unioned state interface comes around.


> Implement rescalable non-partitioned state for Kinesis Connector
> 
>
> Key: FLINK-4821
> URL: https://issues.apache.org/jira/browse/FLINK-4821
> Project: Flink
>  Issue Type: New Feature
>  Components: Kinesis Connector
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Wei-Che Wei
> Fix For: 1.2.0
>
>
> FLINK-4379 added the rescalable non-partitioned state feature, along with the 
> implementation for the Kafka connector.
> The AWS Kinesis connector will benefit from the feature and should implement 
> it too. This ticket tracks progress for this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5127) Reduce the amount of intermediate data in vertex-centric iterations

2016-12-15 Thread Vasia Kalavri (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751890#comment-15751890
 ] 

Vasia Kalavri commented on FLINK-5127:
--

It'd be nice to have for 1.2, but I don't know when I'll have time to work on 
it. I'm hoping this weekend.

> Reduce the amount of intermediate data in vertex-centric iterations
> ---
>
> Key: FLINK-5127
> URL: https://issues.apache.org/jira/browse/FLINK-5127
> Project: Flink
>  Issue Type: Improvement
>  Components: Gelly
>Reporter: Vasia Kalavri
>Assignee: Vasia Kalavri
>
> The vertex-centric plan contains a join between the workset (messages) and 
> the solution set (vertices) that outputs  tuples. This 
> intermediate dataset is then co-grouped with the edges to provide the Pregel 
> interface directly.
> This issue proposes an improvement to reduce the size of this intermediate 
> dataset. In particular, the vertex state does not have to be attached to all 
> the output tuples of the join. If we replace the join with a coGroup and use 
> an `Either` type, we can attach the vertex state to the first tuple only. The 
> subsequent coGroup can retrieve the vertex state from the first tuple and 
> correctly expose the Pregel interface.
> In my preliminary experiments, I find that this change reduces intermediate 
> data by 2x for small vertex state and 4-5x for large vertex states. 
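
To illustrate the proposed change, a schematic, self-contained sketch of the idea in plain Java (the Either wrapper is modeled ad hoc here; this is not the Gelly implementation): the vertex state is attached to the first element of each co-group only, and every element is tagged so the downstream function can tell state from messages.

{code}
import java.util.ArrayList;
import java.util.List;

public final class EitherSketch {

    // Simplified stand-in for an Either type: holds a vertex state or a message.
    static final class Either<L, R> {
        final L left; final R right;
        private Either(L l, R r) { left = l; right = r; }
        static <L, R> Either<L, R> state(L l) { return new Either<>(l, null); }
        static <L, R> Either<L, R> message(R r) { return new Either<>(null, r); }
    }

    // Emit the state once per group instead of attaching it to every tuple.
    static List<Either<String, Integer>> emitGroup(String vertexState, List<Integer> messages) {
        List<Either<String, Integer>> out = new ArrayList<>();
        out.add(Either.state(vertexState));   // first tuple carries the state
        for (Integer msg : messages) {
            out.add(Either.message(msg));     // remaining tuples carry no state
        }
        return out;
    }
}
{code}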



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5097) The TypeExtractor is missing input type information in some Graph methods

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751854#comment-15751854
 ] 

ASF GitHub Bot commented on FLINK-5097:
---

Github user vasia commented on the issue:

https://github.com/apache/flink/pull/2842
  
Sure, I can revert to using `getMapReturnTypes`, rebase, and merge.
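
For reference, a sketch of the call being reverted to (the generic types here are illustrative; this is not the Gelly code itself): passing the known input type means the extractor does not have to guess it.

{code}
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.TypeExtractor;

final class ReturnTypeSketch {
    // Resolve the output type of a vertex-value mapper, supplying the
    // input type information instead of leaving it to be inferred.
    static <VV, NV> TypeInformation<NV> mapReturnType(
            MapFunction<VV, NV> mapper, TypeInformation<VV> vertexValueType) {
        return TypeExtractor.getMapReturnTypes(mapper, vertexValueType);
    }
}
{code}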


> The TypeExtractor is missing input type information in some Graph methods
> -
>
> Key: FLINK-5097
> URL: https://issues.apache.org/jira/browse/FLINK-5097
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Reporter: Vasia Kalavri
>Assignee: Vasia Kalavri
>
> The TypeExtractor is called without information about the input type in 
> {{mapVertices}} and {{mapEdges}} although this information can be easily 
> retrieved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2842: [FLINK-5097][gelly] Add missing input type information to...

2016-12-15 Thread vasia
Github user vasia commented on the issue:

https://github.com/apache/flink/pull/2842
  
Sure, I can revert to using `getMapReturnTypes`, rebase, and merge.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4821) Implement rescalable non-partitioned state for Kinesis Connector

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751849#comment-15751849
 ] 

ASF GitHub Bot commented on FLINK-4821:
---

Github user StefanRRichter commented on the issue:

https://github.com/apache/flink/pull/3001
  
I have an open PR #2948 that would introduce all the facilities for global and union state. It is just a matter of also exposing it to the user. I think at least for the union state, this can trivially be done once that is in.


> Implement rescalable non-partitioned state for Kinesis Connector
> 
>
> Key: FLINK-4821
> URL: https://issues.apache.org/jira/browse/FLINK-4821
> Project: Flink
>  Issue Type: New Feature
>  Components: Kinesis Connector
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Wei-Che Wei
> Fix For: 1.2.0
>
>
> FLINK-4379 added the rescalable non-partitioned state feature, along with the 
> implementation for the Kafka connector.
> The AWS Kinesis connector will benefit from the feature and should implement 
> it too. This ticket tracks progress for this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5346) Remove all ad-hoc config loading via GlobalConfiguration

2016-12-15 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-5346:
---

 Summary: Remove all ad-hoc config loading via GlobalConfiguration
 Key: FLINK-5346
 URL: https://issues.apache.org/jira/browse/FLINK-5346
 Project: Flink
  Issue Type: Sub-task
  Components: Core
Reporter: Stephan Ewen
 Fix For: 2.0.0


I think we should get rid of the static calls to {{GlobalConfiguration}} that load configuration ad hoc. It will not work properly anyway because different setups (standalone / Yarn / Mesos / etc.) store and access the configuration in different places.

The only point where the configuration should be loaded is in the entry points of the processes (TaskManager, JobManager, ApplicationMaster, etc.).
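
A minimal sketch of the proposed pattern, assuming only that {{GlobalConfiguration.loadConfiguration()}} is invoked at the process entry point ({{SomeService}} is a hypothetical placeholder, not a Flink class):

{code}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.GlobalConfiguration;

public final class EntryPointSketch {
    public static void main(String[] args) {
        // the only place where the configuration is loaded from disk
        Configuration config = GlobalConfiguration.loadConfiguration();
        new SomeService(config).start();
    }
}

// Hypothetical service: receives the Configuration explicitly and never
// calls back into GlobalConfiguration.
class SomeService {
    private final Configuration config;
    SomeService(Configuration config) { this.config = config; }
    void start() { /* ... read settings from config ... */ }
}
{code}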



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #3001: [FLINK-4821] [kinesis] Implement rescalable non-partition...

2016-12-15 Thread StefanRRichter
Github user StefanRRichter commented on the issue:

https://github.com/apache/flink/pull/3001
  
I have an open PR #2948 that would introduce all the facilities for global and union state. It is just a matter of also exposing it to the user. I think at least for the union state, this can trivially be done once that is in.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5311) Write user documentation for BipartiteGraph

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751844#comment-15751844
 ] 

ASF GitHub Bot commented on FLINK-5311:
---

Github user vasia commented on the issue:

https://github.com/apache/flink/pull/2984
  
Hi @mushketyk, thank you for the update!
Just a couple of small things and we can merge:
- Can you add a note at the beginning of the docs that bipartite graphs are currently only supported in the Gelly Java API?
- I would rename the "Graph transformations" section to "Projection".


> Write user documentation for BipartiteGraph
> ---
>
> Key: FLINK-5311
> URL: https://issues.apache.org/jira/browse/FLINK-5311
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Reporter: Ivan Mushketyk
>Assignee: Ivan Mushketyk
>
> We need to add user documentation. The progress on BipartiteGraph can be 
> tracked in the following JIRA:
> https://issues.apache.org/jira/browse/FLINK-2254



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2984: [FLINK-5311] Add user documentation for bipartite graph

2016-12-15 Thread vasia
Github user vasia commented on the issue:

https://github.com/apache/flink/pull/2984
  
Hi @mushketyk, thank you for the update!
Just a couple of small things and we can merge:
- Can you add a note at the beginning of the docs that bipartite graphs are currently only supported in the Gelly Java API?
- I would rename the "Graph transformations" section to "Projection".


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4821) Implement rescalable non-partitioned state for Kinesis Connector

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751837#comment-15751837
 ] 

ASF GitHub Bot commented on FLINK-4821:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/3001
  
@tony810430 (cc @StephanEwen, f.y.i.)
On a second, closer look, I'm afraid this PR can't be merged as is. The problem is that the state redistribution of `ListCheckpointed` doesn't work with the Kinesis consumer's current shard discovery mechanism.

On restore, each subtask uses the restored states it gets to appropriately 
set the "last seen shard ID" of the subtask. With this value set, the subtask 
is able to discover only shards after the "last seen shard ID". Then, the 
subtask determines which of the newly discovered shards it should be responsible for consuming, using a simple modulo operation on the shards' hash values.

This worked before, when restored state could not be redistributed, because each subtask would always be restored the shards that belong to it (i.e. via the modulo-on-hash operation).

The state redistribution on restore for `ListCheckpointed` breaks this. For 
example:
Job starts with only 1 subtask for FlinkKinesisConsumer, and the Kinesis 
stream has 2 shards:
subtask #1 --> shard1, shard2.

After a restore with increased parallelism to 2, let's say the list state 
gets redistributed as:
subtask #1 --> shard1
subtask #2 --> shard2

Subtask #1's _last seen shard ID_ will be set to shard1, and it will therefore discover shard2 as a new shard afterwards. If shard2 gets hashed to subtask #1, we'll have both subtasks consuming shard2.

Changing the hashing / subtask-to-shard assignment for shard discovery probably can't solve the problem, because no matter how we change it, the result will still depend on what the list state redistribution looks like.

The only way I can see to solve this would probably be to have merged state on restore, so that each subtask can set the "last seen shard ID" to the largest ID across all subtasks, not just its own.

In FLIP-8 I see the community has also discussed an interface for merged state (a unioned list state on restore). I think that will be really useful in this particular case. It'll also be relevant for the Kafka connector; right now it seems irrelevant only because the Kafka consumer doesn't have partition discovery yet.
@StefanRRichter could you perhaps provide some insight on the merged state aspect? I'm not that familiar yet with the recent work and progress on repartitionable states.
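
To make the failure mode concrete, a schematic sketch of the modulo-on-hash assignment described above (plain Java; the shard IDs, hash function, and parallelism are made up for illustration and do not mirror the connector's exact code):

{code}
import java.util.Arrays;
import java.util.List;

public final class ShardAssignmentDemo {

    // Each subtask claims the shards whose hash maps to its own index.
    static boolean ownedBy(String shardId, int subtaskIndex, int parallelism) {
        return Math.abs(shardId.hashCode() % parallelism) == subtaskIndex;
    }

    public static void main(String[] args) {
        List<String> shards = Arrays.asList("shard1", "shard2");
        // Restored list state may hand shard2 to subtask #2, while the rule
        // above may *also* route shard2 to subtask #1 once subtask #1
        // rediscovers it -> two subtasks consuming the same shard.
        for (String shard : shards) {
            for (int subtask = 0; subtask < 2; subtask++) {
                if (ownedBy(shard, subtask, 2)) {
                    System.out.println(shard + " -> subtask #" + subtask);
                }
            }
        }
    }
}
{code}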


> Implement rescalable non-partitioned state for Kinesis Connector
> 
>
> Key: FLINK-4821
> URL: https://issues.apache.org/jira/browse/FLINK-4821
> Project: Flink
>  Issue Type: New Feature
>  Components: Kinesis Connector
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Wei-Che Wei
> Fix For: 1.2.0
>
>
> FLINK-4379 added the rescalable non-partitioned state feature, along with the 
> implementation for the Kafka connector.
> The AWS Kinesis connector will benefit from the feature and should implement 
> it too. This ticket tracks progress for this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #3001: [FLINK-4821] [kinesis] Implement rescalable non-partition...

2016-12-15 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/3001
  
@tony810430 (cc @StephanEwen, f.y.i.)
On a second, closer look, I'm afraid this PR can't be merged as is. The problem is that the state redistribution of `ListCheckpointed` doesn't work with the Kinesis consumer's current shard discovery mechanism.

On restore, each subtask uses the restored states it gets to appropriately 
set the "last seen shard ID" of the subtask. With this value set, the subtask 
is able to discover only shards after the "last seen shard ID". Then, the 
subtask determines which of the newly discovered shards it should be responsible for consuming, using a simple modulo operation on the shards' hash values.

This worked before, when restored state could not be redistributed, because each subtask would always be restored the shards that belong to it (i.e. via the modulo-on-hash operation).

The state redistribution on restore for `ListCheckpointed` breaks this. For 
example:
Job starts with only 1 subtask for FlinkKinesisConsumer, and the Kinesis 
stream has 2 shards:
subtask #1 --> shard1, shard2.

After a restore with increased parallelism to 2, let's say the list state 
gets redistributed as:
subtask #1 --> shard1
subtask #2 --> shard2

Subtask #1's _last seen shard ID_ will be set to shard1, and it will therefore discover shard2 as a new shard afterwards. If shard2 gets hashed to subtask #1, we'll have both subtasks consuming shard2.

Changing the hashing / subtask-to-shard assignment for shard discovery probably can't solve the problem, because no matter how we change it, the result will still depend on what the list state redistribution looks like.

The only way I can see to solve this would probably be to have merged state on restore, so that each subtask can set the "last seen shard ID" to the largest ID across all subtasks, not just its own.

In FLIP-8 I see the community has also discussed an interface for merged state (a unioned list state on restore). I think that will be really useful in this particular case. It'll also be relevant for the Kafka connector; right now it seems irrelevant only because the Kafka consumer doesn't have partition discovery yet.
@StefanRRichter could you perhaps provide some insight on the merged state aspect? I'm not that familiar yet with the recent work and progress on repartitionable states.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5345) IOManager failed to properly clean up temp file directory

2016-12-15 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751722#comment-15751722
 ] 

Stephan Ewen commented on FLINK-5345:
-

I think that is a problem with {{org.apache.commons.io.FileUtils}}: when something concurrently modifies the directory, the delete fails.

We should have our own utility method for recursive directory deletion that retries listing and deleting contained files, to be safe against concurrent deletes by other services.
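
A minimal sketch of what such a utility could look like, using only {{java.nio.file}} and treating a concurrently vanished file as success rather than an error (this is not Flink's actual implementation; a production version might add bounded retries):

{code}
import java.io.IOException;
import java.nio.file.DirectoryIteratorException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public final class SafeDelete {

    // Recursively delete a directory, tolerating concurrent deletions.
    public static void deleteDirectoryQuietly(Path dir) {
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir)) {
            for (Path entry : entries) {
                if (Files.isDirectory(entry, LinkOption.NOFOLLOW_LINKS)) {
                    deleteDirectoryQuietly(entry);
                } else {
                    Files.deleteIfExists(entry); // no-op if already gone
                }
            }
        } catch (NoSuchFileException ignored) {
            // the directory was removed concurrently; nothing left to do
        } catch (DirectoryIteratorException | IOException e) {
            // best effort: another shutdown hook may own this subtree
        }
        try {
            Files.deleteIfExists(dir);
        } catch (IOException ignored) {
            // may be repopulated or already deleted; acceptable on shutdown
        }
    }
}
{code}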

> IOManager failed to properly clean up temp file directory
> -
>
> Key: FLINK-5345
> URL: https://issues.apache.org/jira/browse/FLINK-5345
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.1.3
>Reporter: Robert Metzger
>  Labels: simplex, starter
>
> While testing 1.1.3 RC3, I have the following message in my log:
> {code}
> 2016-12-15 14:46:05,450 INFO  
> org.apache.flink.streaming.runtime.tasks.StreamTask   - Timer service 
> is shutting down.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Source: control events generator (29/40) 
> (73915a232ba09e642f9dff92f8c8773a) switched from CANCELING to CANCELED.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Freeing task resources for Source: control events generator 
> (29/40) (73915a232ba09e642f9dff92f8c8773a).
> 2016-12-15 14:46:05,454 INFO  org.apache.flink.yarn.YarnTaskManager   
>   - Un-registering task and sending final execution state 
> CANCELED to JobManager for task Source: control events genera
> tor (73915a232ba09e642f9dff92f8c8773a)
> 2016-12-15 14:46:40,609 INFO  org.apache.flink.yarn.YarnTaskManagerRunner 
>   - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested.
> 2016-12-15 14:46:40,611 INFO  org.apache.flink.runtime.blob.BlobCache 
>   - Shutting down BlobCache
> 2016-12-15 14:46:40,724 WARN  akka.remote.ReliableDeliverySupervisor  
>   - Association with remote system 
> [akka.tcp://flink@10.240.0.34:33635] has failed, address is now gated for 
> [5000] ms.
>  Reason is: [Disassociated].
> 2016-12-15 14:46:40,808 ERROR 
> org.apache.flink.runtime.io.disk.iomanager.IOManager  - IOManager 
> failed to properly clean up temp file directory: 
> /yarn/nm/usercache/robert/appcache/application_148129128
> 9979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5
> java.lang.IllegalArgumentException: 
> /yarn/nm/usercache/robert/appcache/application_1481291289979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5/62e14e1891fe1e334c921dfd19a32a84/StreamMap_11_24/dummy_state
>  does not exist
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1637)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManager.shutdown(IOManager.java:109)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync.shutdown(IOManagerAsync.java:185)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync$1.run(IOManagerAsync.java:105)
> {code}
> This was the last message logged from that machine. I suspect two threads are 
> trying to clean up the directories during shutdown?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5345) IOManager failed to properly clean up temp file directory

2016-12-15 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen updated FLINK-5345:

Labels: simplex starter  (was: simplex)

> IOManager failed to properly clean up temp file directory
> -
>
> Key: FLINK-5345
> URL: https://issues.apache.org/jira/browse/FLINK-5345
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.1.3
>Reporter: Robert Metzger
>  Labels: simplex, starter
>
> While testing 1.1.3 RC3, I have the following message in my log:
> {code}
> 2016-12-15 14:46:05,450 INFO  
> org.apache.flink.streaming.runtime.tasks.StreamTask   - Timer service 
> is shutting down.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Source: control events generator (29/40) 
> (73915a232ba09e642f9dff92f8c8773a) switched from CANCELING to CANCELED.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Freeing task resources for Source: control events generator 
> (29/40) (73915a232ba09e642f9dff92f8c8773a).
> 2016-12-15 14:46:05,454 INFO  org.apache.flink.yarn.YarnTaskManager   
>   - Un-registering task and sending final execution state 
> CANCELED to JobManager for task Source: control events genera
> tor (73915a232ba09e642f9dff92f8c8773a)
> 2016-12-15 14:46:40,609 INFO  org.apache.flink.yarn.YarnTaskManagerRunner 
>   - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested.
> 2016-12-15 14:46:40,611 INFO  org.apache.flink.runtime.blob.BlobCache 
>   - Shutting down BlobCache
> 2016-12-15 14:46:40,724 WARN  akka.remote.ReliableDeliverySupervisor  
>   - Association with remote system 
> [akka.tcp://flink@10.240.0.34:33635] has failed, address is now gated for 
> [5000] ms.
>  Reason is: [Disassociated].
> 2016-12-15 14:46:40,808 ERROR 
> org.apache.flink.runtime.io.disk.iomanager.IOManager  - IOManager 
> failed to properly clean up temp file directory: 
> /yarn/nm/usercache/robert/appcache/application_148129128
> 9979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5
> java.lang.IllegalArgumentException: 
> /yarn/nm/usercache/robert/appcache/application_1481291289979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5/62e14e1891fe1e334c921dfd19a32a84/StreamMap_11_24/dummy_state
>  does not exist
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1637)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManager.shutdown(IOManager.java:109)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync.shutdown(IOManagerAsync.java:185)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync$1.run(IOManagerAsync.java:105)
> {code}
> This was the last message logged from that machine. I suspect two threads are 
> trying to clean up the directories during shutdown?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5345) IOManager failed to properly clean up temp file directory

2016-12-15 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen updated FLINK-5345:

Labels: simplex  (was: )

> IOManager failed to properly clean up temp file directory
> -
>
> Key: FLINK-5345
> URL: https://issues.apache.org/jira/browse/FLINK-5345
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.1.3
>Reporter: Robert Metzger
>  Labels: simplex, starter
>
> While testing 1.1.3 RC3, I have the following message in my log:
> {code}
> 2016-12-15 14:46:05,450 INFO  
> org.apache.flink.streaming.runtime.tasks.StreamTask   - Timer service 
> is shutting down.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Source: control events generator (29/40) 
> (73915a232ba09e642f9dff92f8c8773a) switched from CANCELING to CANCELED.
> 2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Freeing task resources for Source: control events generator 
> (29/40) (73915a232ba09e642f9dff92f8c8773a).
> 2016-12-15 14:46:05,454 INFO  org.apache.flink.yarn.YarnTaskManager   
>   - Un-registering task and sending final execution state 
> CANCELED to JobManager for task Source: control events genera
> tor (73915a232ba09e642f9dff92f8c8773a)
> 2016-12-15 14:46:40,609 INFO  org.apache.flink.yarn.YarnTaskManagerRunner 
>   - RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested.
> 2016-12-15 14:46:40,611 INFO  org.apache.flink.runtime.blob.BlobCache 
>   - Shutting down BlobCache
> 2016-12-15 14:46:40,724 WARN  akka.remote.ReliableDeliverySupervisor  
>   - Association with remote system 
> [akka.tcp://flink@10.240.0.34:33635] has failed, address is now gated for 
> [5000] ms.
>  Reason is: [Disassociated].
> 2016-12-15 14:46:40,808 ERROR 
> org.apache.flink.runtime.io.disk.iomanager.IOManager  - IOManager 
> failed to properly clean up temp file directory: 
> /yarn/nm/usercache/robert/appcache/application_148129128
> 9979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5
> java.lang.IllegalArgumentException: 
> /yarn/nm/usercache/robert/appcache/application_1481291289979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5/62e14e1891fe1e334c921dfd19a32a84/StreamMap_11_24/dummy_state
>  does not exist
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1637)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
> at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
> at 
> org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManager.shutdown(IOManager.java:109)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync.shutdown(IOManagerAsync.java:185)
> at 
> org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync$1.run(IOManagerAsync.java:105)
> {code}
> This was the last message logged from that machine. I suspect two threads are 
> trying to clean up the directories during shutdown?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5344) docs don't build in dockerized jekyll; -p option is broken

2016-12-15 Thread David Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Anderson updated FLINK-5344:
--
Description: 
The recent Gemfile update doesn't work with the ruby in the provided dockerized 
jekyll environment. 

Also the changes to the build_docs script broke the -p option.

  was:The recent Gemfile update doesn't work with the ruby in the provided 
dockerized jekyll environment. 

Summary: docs don't build in dockerized jekyll; -p option is broken  
(was: docs don't build in dockerized jekyll)

> docs don't build in dockerized jekyll; -p option is broken
> --
>
> Key: FLINK-5344
> URL: https://issues.apache.org/jira/browse/FLINK-5344
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: David Anderson
>Assignee: David Anderson
>Priority: Minor
>
> The recent Gemfile update doesn't work with the ruby in the provided 
> dockerized jekyll environment. 
> Also the changes to the build_docs script broke the -p option.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5280) Extend TableSource to support nested data

2016-12-15 Thread Ivan Mushketyk (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751685#comment-15751685
 ] 

Ivan Mushketyk commented on FLINK-5280:
---

Hi [~jark]

Your suggestion makes sense to me.

>> But currently, the RowTypeInfo doesn't support custom field names, so we 
>> should fix that first.
Can we do it as part of this issue?

> Extend TableSource to support nested data
> -
>
> Key: FLINK-5280
> URL: https://issues.apache.org/jira/browse/FLINK-5280
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Fabian Hueske
>Assignee: Ivan Mushketyk
>
> The {{TableSource}} interface does currently only support the definition of 
> flat rows. 
> However, there are several storage formats for nested data that should be 
> supported such as Avro, Json, Parquet, and Orc. The Table API and SQL can 
> also natively handle nested rows.
> The {{TableSource}} interface and the code to register table sources in 
> Calcite's schema need to be extended to support nested data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4631) NullPointerException during stream task cleanup

2016-12-15 Thread Robert Metzger (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751655#comment-15751655
 ] 

Robert Metzger commented on FLINK-4631:
---

The issue persists for Flink 1.1.4 RC3

> NullPointerException during stream task cleanup
> ---
>
> Key: FLINK-4631
> URL: https://issues.apache.org/jira/browse/FLINK-4631
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.1.2
> Environment: Ubuntu server 12.04.5 64 bit
> java version "1.8.0_40"
> Java(TM) SE Runtime Environment (build 1.8.0_40-b26)
> Java HotSpot(TM) 64-Bit Server VM (build 25.40-b25, mixed mode)
>Reporter: Avihai Berkovitz
> Fix For: 1.2.0
>
>
> If a streaming job failed during startup (in my case, due to lack of network 
> buffers), all the tasks are being cancelled before they started. This causes 
> many instances of the following exception:
> {noformat}
> 2016-09-18 14:17:12,177 ERROR 
> org.apache.flink.streaming.runtime.tasks.StreamTask   - Error during 
> cleanup of stream task
> java.lang.NullPointerException
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.cleanup(OneInputStreamTask.java:73)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:323)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2985: [FLINK-5104] Bipartite graph validation

2016-12-15 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/2985
  
@mushketyk I had meant just with a simple comment noting that with an 
anti-join we could also remove the FlatJoinFunction.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5104) Implement BipartiteGraph validator

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751646#comment-15751646
 ] 

ASF GitHub Bot commented on FLINK-5104:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/2985
  
@mushketyk I had meant just with a simple comment noting that with an 
anti-join we could also remove the FlatJoinFunction.


> Implement BipartiteGraph validator
> --
>
> Key: FLINK-5104
> URL: https://issues.apache.org/jira/browse/FLINK-5104
> Project: Flink
>  Issue Type: Sub-task
>  Components: Gelly
>Reporter: Ivan Mushketyk
>Assignee: Ivan Mushketyk
>
> BipartiteGraph should have a validator similar to GraphValidator for Graph 
> class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5345) IOManager failed to properly clean up temp file directory

2016-12-15 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-5345:
-

 Summary: IOManager failed to properly clean up temp file directory
 Key: FLINK-5345
 URL: https://issues.apache.org/jira/browse/FLINK-5345
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.1.3
Reporter: Robert Metzger


While testing 1.1.3 RC3, I have the following message in my log:

{code}
2016-12-15 14:46:05,450 INFO  
org.apache.flink.streaming.runtime.tasks.StreamTask   - Timer service 
is shutting down.
2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task 
- Source: control events generator (29/40) 
(73915a232ba09e642f9dff92f8c8773a) switched from CANCELING to CANCELED.
2016-12-15 14:46:05,452 INFO  org.apache.flink.runtime.taskmanager.Task 
- Freeing task resources for Source: control events generator 
(29/40) (73915a232ba09e642f9dff92f8c8773a).
2016-12-15 14:46:05,454 INFO  org.apache.flink.yarn.YarnTaskManager 
- Un-registering task and sending final execution state CANCELED to 
JobManager for task Source: control events genera
tor (73915a232ba09e642f9dff92f8c8773a)
2016-12-15 14:46:40,609 INFO  org.apache.flink.yarn.YarnTaskManagerRunner   
- RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested.
2016-12-15 14:46:40,611 INFO  org.apache.flink.runtime.blob.BlobCache   
- Shutting down BlobCache
2016-12-15 14:46:40,724 WARN  akka.remote.ReliableDeliverySupervisor
- Association with remote system 
[akka.tcp://flink@10.240.0.34:33635] has failed, address is now gated for 
[5000] ms.
 Reason is: [Disassociated].
2016-12-15 14:46:40,808 ERROR 
org.apache.flink.runtime.io.disk.iomanager.IOManager  - IOManager 
failed to properly clean up temp file directory: 
/yarn/nm/usercache/robert/appcache/application_148129128
9979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5
java.lang.IllegalArgumentException: 
/yarn/nm/usercache/robert/appcache/application_1481291289979_0024/flink-io-f0ff3f66-b9e2-4560-881f-2ab43bc448b5/62e14e1891fe1e334c921dfd19a32a84/StreamMap_11_24/dummy_state
 does not exist
at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1637)
at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
at org.apache.commons.io.FileUtils.forceDelete(FileUtils.java:2270)
at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1653)
at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
at 
org.apache.flink.runtime.io.disk.iomanager.IOManager.shutdown(IOManager.java:109)
at 
org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync.shutdown(IOManagerAsync.java:185)
at 
org.apache.flink.runtime.io.disk.iomanager.IOManagerAsync$1.run(IOManagerAsync.java:105)

{code}

This was the last message logged from that machine. I suspect two threads are 
trying to clean up the directories during shutdown?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5097) The TypeExtractor is missing input type information in some Graph methods

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751615#comment-15751615
 ] 

ASF GitHub Bot commented on FLINK-5097:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/2842
  
@vasia, as this PR fixes the reported issue, should we rebase and commit 
for 1.2 and leave the remaining updates for a separate ticket?


> The TypeExtractor is missing input type information in some Graph methods
> -
>
> Key: FLINK-5097
> URL: https://issues.apache.org/jira/browse/FLINK-5097
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Reporter: Vasia Kalavri
>Assignee: Vasia Kalavri
>
> The TypeExtractor is called without information about the input type in 
> {{mapVertices}} and {{mapEdges}} although this information can be easily 
> retrieved.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink issue #2842: [FLINK-5097][gelly] Add missing input type information to...

2016-12-15 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/2842
  
@vasia, as this PR fixes the reported issue, should we rebase and commit 
for 1.2 and leave the remaining updates for a separate ticket?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5127) Reduce the amount of intermediate data in vertex-centric iterations

2016-12-15 Thread Greg Hogan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751600#comment-15751600
 ] 

Greg Hogan commented on FLINK-5127:
---

[~vkalavri], is this improvement a candidate for the 1.2 release?

> Reduce the amount of intermediate data in vertex-centric iterations
> ---
>
> Key: FLINK-5127
> URL: https://issues.apache.org/jira/browse/FLINK-5127
> Project: Flink
>  Issue Type: Improvement
>  Components: Gelly
>Reporter: Vasia Kalavri
>Assignee: Vasia Kalavri
>
> The vertex-centric plan contains a join between the workset (messages) and 
> the solution set (vertices) that outputs  tuples. This 
> intermediate dataset is then co-grouped with the edges to provide the Pregel 
> interface directly.
> This issue proposes an improvement to reduce the size of this intermediate 
> dataset. In particular, the vertex state does not have to be attached to all 
> the output tuples of the join. If we replace the join with a coGroup and use 
> an `Either` type, we can attach the vertex state to the first tuple only. The 
> subsequent coGroup can retrieve the vertex state from the first tuple and 
> correctly expose the Pregel interface.
> In my preliminary experiments, I find that this change reduces intermediate 
> data by 2x for small vertex state and 4-5x for large vertex states. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2821) Change Akka configuration to allow accessing actors from different URLs

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751598#comment-15751598
 ] 

ASF GitHub Bot commented on FLINK-2821:
---

Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/2917#discussion_r92626387
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala ---
@@ -216,12 +219,19 @@ object AkkaUtils {
* identified by hostname.
*
* @param configuration instance containing the user provided 
configuration values
-   * @param hostname of the network interface to listen on
+   * @param hostname of the network interface to bind on
* @param port to bind to or if 0 then Akka picks a free port 
automatically
+   * @param externalHostname The host name to expect for Akka messages
+   * @param externalPort The port to expect for Akka messages
* @return Flink's Akka configuration for remote actor systems
*/
   private def getRemoteAkkaConfig(configuration: Configuration,
-  hostname: String, port: Int): Config = {
+  hostname: String, port: Int,
+  externalHostname: String, externalPort: 
Int): Config = {
+
+LOG.info(s"Using binding address $hostname:$port" +
--- End diff --

Not too bad here, but in general, I think having loggers in util classes is not a very good idea. It typically leads to the log lines showing up very often and in strange places. The information should usually be logged by the callers of the functions.


> Change Akka configuration to allow accessing actors from different URLs
> ---
>
> Key: FLINK-2821
> URL: https://issues.apache.org/jira/browse/FLINK-2821
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Reporter: Robert Metzger
>Assignee: Maximilian Michels
>
> Akka expects the actor's URL to be exactly matching.
> As pointed out here, cases where users were complaining about this: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Error-trying-to-access-JM-through-proxy-td3018.html
>   - Proxy routing (as described here, send to the proxy URL, receiver 
> recognizes only original URL)
>   - Using hostname / IP interchangeably does not work (we solved this by 
> always putting IP addresses into URLs, never hostnames)
>   - Binding to multiple interfaces (any local 0.0.0.0) does not work. Still 
> no solution to that (but seems not too much of a restriction)
> I am aware that this is not possible due to Akka, so it is actually not a 
> Flink bug. But I think we should track the resolution of the issue here 
> anyway because it's affecting our users' satisfaction.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request #2917: [FLINK-2821] use custom Akka build to listen on al...

2016-12-15 Thread StephanEwen
Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/2917#discussion_r92626387
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala ---
@@ -216,12 +219,19 @@ object AkkaUtils {
* identified by hostname.
*
* @param configuration instance containing the user provided 
configuration values
-   * @param hostname of the network interface to listen on
+   * @param hostname of the network interface to bind on
* @param port to bind to or if 0 then Akka picks a free port 
automatically
+   * @param externalHostname The host name to expect for Akka messages
+   * @param externalPort The port to expect for Akka messages
* @return Flink's Akka configuration for remote actor systems
*/
   private def getRemoteAkkaConfig(configuration: Configuration,
-  hostname: String, port: Int): Config = {
+  hostname: String, port: Int,
+  externalHostname: String, externalPort: 
Int): Config = {
+
+LOG.info(s"Using binding address $hostname:$port" +
--- End diff --

Not too bad here, but in general, I think having loggers in util classes is not a very good idea. It typically leads to the log lines showing up very often and in strange places. The information should usually be logged by the callers of the functions.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2821) Change Akka configuration to allow accessing actors from different URLs

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751596#comment-15751596
 ] 

ASF GitHub Bot commented on FLINK-2821:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/2917#discussion_r92626161
  
--- Diff: flink-core/src/main/java/org/apache/flink/util/NetUtils.java ---
@@ -111,7 +115,51 @@ public static int getAvailablePort() {
// 

//  Encoding of IP addresses for URLs
// 

-   
+
+	/**
+	 * Returns an address in a normalized format for Akka.
+	 * When an IPv6 address is specified, it normalizes the IPv6 address to avoid
+	 * complications with the exact URL match policy of Akka.
+	 * @param host The hostname, IPv4 or IPv6 address
+	 * @return host which will be normalized if it is an IPv6 address
+	 */
+	public static String unresolvedHostToNormalizedString(String host) {
+		// Return loopback interface address if host is null
+		// This represents the behavior of {@code InetAddress.getByName} and RFC 3330
+		if (host == null) {
+			host = InetAddress.getLoopbackAddress().getHostAddress();
+		} else {
+			host = host.trim().toLowerCase();
+		}
+
+		// normalize and validate the address
+		if (IPAddressUtil.isIPv6LiteralAddress(host)) {
+			byte[] ipV6Address = IPAddressUtil.textToNumericFormatV6(host);
+			host = getIPv6UrlRepresentation(ipV6Address);
+		} else if (!IPAddressUtil.isIPv4LiteralAddress(host)) {
+			// We don't allow these in hostnames
+			Preconditions.checkArgument(!host.startsWith("."));
+			Preconditions.checkArgument(!host.endsWith("."));
+			Preconditions.checkArgument(!host.contains(":"));
--- End diff --

Maybe we could add a clarifying error message here for the user.
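
For example, something along these lines (message texts invented, and sketched in Scala here although NetUtils itself is Java; `Preconditions.checkArgument` also accepts an error message as its second argument):

```scala
Preconditions.checkArgument(!host.startsWith("."),
  s"The hostname '$host' must not start with a '.'")
Preconditions.checkArgument(!host.endsWith("."),
  s"The hostname '$host' must not end with a '.'")
Preconditions.checkArgument(!host.contains(":"),
  s"The hostname '$host' must not contain a ':'")
```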


> Change Akka configuration to allow accessing actors from different URLs
> ---
>
> Key: FLINK-2821
> URL: https://issues.apache.org/jira/browse/FLINK-2821
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Reporter: Robert Metzger
>Assignee: Maximilian Michels
>
> Akka expects the actor's URL to be exactly matching.
> As pointed out here, cases where users were complaining about this: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Error-trying-to-access-JM-through-proxy-td3018.html
>   - Proxy routing (as described here, send to the proxy URL, receiver 
> recognizes only original URL)
>   - Using hostname / IP interchangeably does not work (we solved this by 
> always putting IP addresses into URLs, never hostnames)
>   - Binding to multiple interfaces (any local 0.0.0.0) does not work. Still 
> no solution to that (but seems not too much of a restriction)
> I am aware that this is not possible due to Akka, so it is actually not a 
> Flink bug. But I think we should track the resolution of the issue here 
> anyway because it's affecting our users' satisfaction.





[GitHub] flink pull request #2917: [FLINK-2821] use custom Akka build to listen on al...

2016-12-15 Thread tillrohrmann
Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/2917#discussion_r92626161
  
--- Diff: flink-core/src/main/java/org/apache/flink/util/NetUtils.java ---
@@ -111,7 +115,51 @@ public static int getAvailablePort() {
// 

//  Encoding of IP addresses for URLs
// 

-   
+
+	/**
+	 * Returns an address in a normalized format for Akka.
+	 * When an IPv6 address is specified, it normalizes the IPv6 address to avoid
+	 * complications with the exact URL match policy of Akka.
+	 * @param host The hostname, IPv4 or IPv6 address
+	 * @return host which will be normalized if it is an IPv6 address
+	 */
+	public static String unresolvedHostToNormalizedString(String host) {
+		// Return loopback interface address if host is null
+		// This represents the behavior of {@code InetAddress.getByName} and RFC 3330
+		if (host == null) {
+			host = InetAddress.getLoopbackAddress().getHostAddress();
+		} else {
+			host = host.trim().toLowerCase();
+		}
+
+		// normalize and validate the address
+		if (IPAddressUtil.isIPv6LiteralAddress(host)) {
+			byte[] ipV6Address = IPAddressUtil.textToNumericFormatV6(host);
+			host = getIPv6UrlRepresentation(ipV6Address);
+		} else if (!IPAddressUtil.isIPv4LiteralAddress(host)) {
+			// We don't allow these in hostnames
+			Preconditions.checkArgument(!host.startsWith("."));
+			Preconditions.checkArgument(!host.endsWith("."));
+			Preconditions.checkArgument(!host.contains(":"));
--- End diff --

Maybe we could add a clarifying error message here for the user.




[jira] [Commented] (FLINK-2821) Change Akka configuration to allow accessing actors from different URLs

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751592#comment-15751592
 ] 

ASF GitHub Bot commented on FLINK-2821:
---

Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/2917#discussion_r92625848
  
--- Diff: flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala ---
@@ -216,12 +219,19 @@ object AkkaUtils {
   * identified by hostname.
   *
   * @param configuration instance containing the user provided configuration values
-  * @param hostname of the network interface to listen on
+  * @param hostname of the network interface to bind on
   * @param port to bind to or if 0 then Akka picks a free port automatically
+  * @param externalHostname The host name to expect for Akka messages
+  * @param externalPort The port to expect for Akka messages
   * @return Flink's Akka configuration for remote actor systems
   */
  private def getRemoteAkkaConfig(configuration: Configuration,
-                                 hostname: String, port: Int): Config = {
+                                 hostname: String, port: Int,
--- End diff --

It would be good to call `hostname` here `bindAddress`.
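
For instance, a signature-only sketch reconstructed from the quoted diff (not the final patch; renaming `port` to `bindPort` as well is an extrapolation):

```scala
private def getRemoteAkkaConfig(
    configuration: Configuration,
    bindAddress: String, bindPort: Int,
    externalHostname: String, externalPort: Int): Config = ???
```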


> Change Akka configuration to allow accessing actors from different URLs
> ---
>
> Key: FLINK-2821
> URL: https://issues.apache.org/jira/browse/FLINK-2821
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Reporter: Robert Metzger
>Assignee: Maximilian Michels
>
> Akka expects the actor's URL to be exactly matching.
> As pointed out here, cases where users were complaining about this: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Error-trying-to-access-JM-through-proxy-td3018.html
>   - Proxy routing (as described here, send to the proxy URL, receiver 
> recognizes only original URL)
>   - Using hostname / IP interchangeably does not work (we solved this by 
> always putting IP addresses into URLs, never hostnames)
>   - Binding to multiple interfaces (any local 0.0.0.0) does not work. Still 
> no solution to that (but seems not too much of a restriction)
> I am aware that this is not possible due to Akka, so it is actually not a 
> Flink bug. But I think we should track the resolution of the issue here 
> anyway because it's affecting our users' satisfaction.





[GitHub] flink pull request #2917: [FLINK-2821] use custom Akka build to listen on al...

2016-12-15 Thread StephanEwen
Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/2917#discussion_r92625848
  
--- Diff: flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala ---
@@ -216,12 +219,19 @@ object AkkaUtils {
   * identified by hostname.
   *
   * @param configuration instance containing the user provided configuration values
-  * @param hostname of the network interface to listen on
+  * @param hostname of the network interface to bind on
   * @param port to bind to or if 0 then Akka picks a free port automatically
+  * @param externalHostname The host name to expect for Akka messages
+  * @param externalPort The port to expect for Akka messages
   * @return Flink's Akka configuration for remote actor systems
   */
  private def getRemoteAkkaConfig(configuration: Configuration,
-                                 hostname: String, port: Int): Config = {
+                                 hostname: String, port: Int,
--- End diff --

It would be good to call `hostname` here `bindAddress`.




[GitHub] flink pull request #2917: [FLINK-2821] use custom Akka build to listen on al...

2016-12-15 Thread StephanEwen
Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/2917#discussion_r92625229
  
--- Diff: flink-runtime/pom.xml ---
@@ -193,8 +193,8 @@ under the License.

 

-			<groupId>com.typesafe.akka</groupId>
-			<artifactId>akka-testkit_${scala.binary.version}</artifactId>
+			<groupId>com.data-artisans</groupId>
+			<artifactId>flakka-testkit_${scala.binary.version}</artifactId>
--- End diff --

I think this dependency should be in `test` scope.




[jira] [Commented] (FLINK-2821) Change Akka configuration to allow accessing actors from different URLs

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751583#comment-15751583
 ] 

ASF GitHub Bot commented on FLINK-2821:
---

Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/2917#discussion_r92625229
  
--- Diff: flink-runtime/pom.xml ---
@@ -193,8 +193,8 @@ under the License.

 

-			<groupId>com.typesafe.akka</groupId>
-			<artifactId>akka-testkit_${scala.binary.version}</artifactId>
+			<groupId>com.data-artisans</groupId>
+			<artifactId>flakka-testkit_${scala.binary.version}</artifactId>
--- End diff --

I think this dependency should be in `test` scope.


> Change Akka configuration to allow accessing actors from different URLs
> ---
>
> Key: FLINK-2821
> URL: https://issues.apache.org/jira/browse/FLINK-2821
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Reporter: Robert Metzger
>Assignee: Maximilian Michels
>
> Akka expects the actor's URL to be exactly matching.
> As pointed out here, cases where users were complaining about this: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Error-trying-to-access-JM-through-proxy-td3018.html
>   - Proxy routing (as described here, send to the proxy URL, receiver 
> recognizes only original URL)
>   - Using hostname / IP interchangeably does not work (we solved this by 
> always putting IP addresses into URLs, never hostnames)
>   - Binding to multiple interfaces (any local 0.0.0.0) does not work. Still 
> no solution to that (but seems not too much of a restriction)
> I am aware that this is not possible due to Akka, so it is actually not a 
> Flink bug. But I think we should track the resolution of the issue here 
> anyway because it's affecting our users' satisfaction.





[jira] [Commented] (FLINK-5344) docs don't build in dockerized jekyll

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751553#comment-15751553
 ] 

ASF GitHub Bot commented on FLINK-5344:
---

GitHub user alpinegizmo opened a pull request:

https://github.com/apache/flink/pull/3013

[FLINK-5344] relax spec for requested ruby version so the docs can be…

With this update, the docs can be built with any version of ruby from 2.0 
onward -- I've tested with ruby 2.0 in our dockerized jekyll, and with ruby 2.3 
on ubuntu 16.04. Previously, the run.sh script in docs/docker ran into problems.

The json gem needed to be pulled in explicitly because in early versions of 
ruby 2.0 it wasn't yet part of the ruby stdlib.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alpinegizmo/flink 
5344-update-gemfile-for-dockerized-jekyll

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3013.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3013


commit 2ff29149b713354acc338a923aa004ba9a55108b
Author: David Anderson 
Date:   2016-12-15T14:42:45Z

[FLINK-5344] relax spec for requested ruby version so the docs can be built 
in a wider variety of systems, including our dockerized jekyll environment




> docs don't build in dockerized jekyll
> -
>
> Key: FLINK-5344
> URL: https://issues.apache.org/jira/browse/FLINK-5344
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: David Anderson
>Priority: Minor
>
> The recent Gemfile update doesn't work with the ruby in the provided 
> dockerized jekyll environment. 





[GitHub] flink issue #2952: [FLINK-5011] [types] TraversableSerializer does not perfo...

2016-12-15 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/2952
  
@StephanEwen I moved the ignored tests to the correct serializer test class.




[jira] [Commented] (FLINK-5011) TraversableSerializer does not perform a deep copy of the elements it is traversing

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751560#comment-15751560
 ] 

ASF GitHub Bot commented on FLINK-5011:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/2952
  
@StephanEwen I moved the ignored tests to the correct serializer test class.


> TraversableSerializer does not perform a deep copy of the elements it is 
> traversing
> ---
>
> Key: FLINK-5011
> URL: https://issues.apache.org/jira/browse/FLINK-5011
> Project: Flink
>  Issue Type: Bug
>  Components: Core, Scala API
>Affects Versions: 1.1.3
>Reporter: Dan Bress
>Assignee: Timo Walther
>Priority: Blocker
>  Labels: serialization
> Fix For: 1.2.0
>
>
> I had an issue where the state in my rolling window was incorrectly being 
> maintained from window to window.  
> *The initial state of my window looked like this:*
> {code}
> Map[Key, MutableValue] = {("A", Value(0)}, ("B", Value(0)}
> {code}
> *Then in Window 0 I update the state so it looks like this at the close of 
> the window:*
> {code}
> Map[Key, MutableValue] = {("A", Value(1)}, ("B", Value(3)}
> {code}
> *Then at the start of Window 1 the state looks like it did at the end of 
> Window 0:*
> {code}
> Map[Key, MutableValue] = {("A", Value(1)}, ("B", Value(3)}
> {code}
> *when I expected it to look like the initial state:*
> {code}
> Map[Key, MutableValue] = {("A", Value(0)}, ("B", Value(0)}
> {code}
> It looks like 
> [TraversableSerializer|https://github.com/apache/flink/blob/master/flink-scala/src/main/scala/org/apache/flink/api/scala/typeutils/TraversableSerializer.scala#L65-L69]
>  is doing a shallow copy of the elements in the traversable instead of a deep 
> copy
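
A minimal Scala sketch of the difference (illustrative only, not the actual fix): a shallow copy reuses the mutable element references, so mutations leak across windows, whereas a deep copy clones each element.

{code}
case class Value(var count: Int)

// shallow: the "copy" shares the same mutable Value instances
def shallowCopy(state: Map[String, Value]): Map[String, Value] = state

// deep: clone every element, as a serializer's copy() should
def deepCopy(state: Map[String, Value]): Map[String, Value] =
  state.map { case (k, v) => k -> v.copy() }

val initial = Map("A" -> Value(0), "B" -> Value(0))

shallowCopy(initial)("A").count = 1 // also mutates initial("A") -- the reported bug
deepCopy(initial)("B").count = 3    // initial("B") stays at 0
{code}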





[jira] [Assigned] (FLINK-5344) docs don't build in dockerized jekyll

2016-12-15 Thread David Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Anderson reassigned FLINK-5344:
-

Assignee: David Anderson

> docs don't build in dockerized jekyll
> -
>
> Key: FLINK-5344
> URL: https://issues.apache.org/jira/browse/FLINK-5344
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: David Anderson
>Assignee: David Anderson
>Priority: Minor
>
> The recent Gemfile update doesn't work with the ruby in the provided 
> dockerized jekyll environment. 





[GitHub] flink pull request #3013: [FLINK-5344] relax spec for requested ruby version...

2016-12-15 Thread alpinegizmo
GitHub user alpinegizmo opened a pull request:

https://github.com/apache/flink/pull/3013

[FLINK-5344] relax spec for requested ruby version so the docs can be…

With this update, the docs can be built with any version of ruby from 2.0 
onward -- I've tested with ruby 2.0 in our dockerized jekyll, and with ruby 2.3 
on ubuntu 16.04. Previously, the run.sh script in docs/docker ran into problems.

The json gem needed to be pulled in explicitly because in early versions of 
ruby 2.0 it wasn't yet part of the ruby stdlib.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alpinegizmo/flink 
5344-update-gemfile-for-dockerized-jekyll

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3013.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3013


commit 2ff29149b713354acc338a923aa004ba9a55108b
Author: David Anderson 
Date:   2016-12-15T14:42:45Z

[FLINK-5344] relax spec for requested ruby version so the docs can be built 
in a wider variety of systems, including our dockerized jekyll environment






[GitHub] flink pull request #:

2016-12-15 Thread NicoK
Github user NicoK commented on the pull request:


https://github.com/apache/flink/commit/79d7e3017efe7c96e449e6f339fd7184ef3d1ba2#commitcomment-20200919
  
In docs/Gemfile on line 23:
It seems that `./build_docs -p` is broken, i.e. it neither enables 
auto-regeneration nor serves the docs locally.




[jira] [Commented] (FLINK-2821) Change Akka configuration to allow accessing actors from different URLs

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751532#comment-15751532
 ] 

ASF GitHub Bot commented on FLINK-2821:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/2917#discussion_r92620428
  
--- Diff: flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala ---
@@ -102,21 +102,24 @@ object AkkaUtils {
   * specified, then the actor system will listen on the respective address.
   *
   * @param configuration instance containing the user provided configuration values
-  * @param listeningAddress optional tuple of hostname and port to listen on. If None is given,
-  *                         then an Akka config for local actor system will be returned
+  * @param externalAddress optional tuple of hostname and port to be reachable at.
+  *                        If None is given, then an Akka config for local actor system
+  *                        will be returned
   * @return Akka config
   */
  @throws(classOf[UnknownHostException])
  def getAkkaConfig(configuration: Configuration,
-                   listeningAddress: Option[(String, Int)]): Config = {
+                   externalAddress: Option[(String, Int)]): Config = {
    val defaultConfig = getBasicAkkaConfig(configuration)

-    listeningAddress match {
+    externalAddress match {

      case Some((hostname, port)) =>
-        val ipAddress = InetAddress.getByName(hostname)
-        val hostString = "\"" + NetUtils.ipAddressToUrlString(ipAddress) + "\""
-        val remoteConfig = getRemoteAkkaConfig(configuration, hostString, port)
+
+        val remoteConfig = getRemoteAkkaConfig(configuration,
+          NetUtils.getWildcardIPAddress, port,
--- End diff --

Maybe we could add a comment here that we choose the wildcard IP to bind to 
all network interfaces.
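
For illustration, the suggested comment might read like this (wording invented; the trailing arguments are inferred from the `getRemoteAkkaConfig` signature quoted earlier in this thread, not copied from the patch):

```scala
val remoteConfig = getRemoteAkkaConfig(configuration,
  // bind to the wildcard address so the actor system listens on all
  // network interfaces; the externally reachable hostname/port below is
  // what gets advertised to remote peers
  NetUtils.getWildcardIPAddress, port,
  hostname, port)
```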


> Change Akka configuration to allow accessing actors from different URLs
> ---
>
> Key: FLINK-2821
> URL: https://issues.apache.org/jira/browse/FLINK-2821
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Reporter: Robert Metzger
>Assignee: Maximilian Michels
>
> Akka expects the actor's URL to be exactly matching.
> As pointed out here, cases where users were complaining about this: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Error-trying-to-access-JM-through-proxy-td3018.html
>   - Proxy routing (as described here, send to the proxy URL, receiver 
> recognizes only original URL)
>   - Using hostname / IP interchangeably does not work (we solved this by 
> always putting IP addresses into URLs, never hostnames)
>   - Binding to multiple interfaces (any local 0.0.0.0) does not work. Still 
> no solution to that (but seems not too much of a restriction)
> I am aware that this is not possible due to Akka, so it is actually not a 
> Flink bug. But I think we should track the resolution of the issue here 
> anyway because it's affecting our users' satisfaction.





[GitHub] flink pull request #:

2016-12-15 Thread NicoK
Github user NicoK commented on the pull request:


https://github.com/apache/flink/commit/79d7e3017efe7c96e449e6f339fd7184ef3d1ba2#commitcomment-20200802
  
In docs/Gemfile on line 20:
Was it necessary to increase this dependency's version?




[GitHub] flink pull request #2917: [FLINK-2821] use custom Akka build to listen on al...

2016-12-15 Thread tillrohrmann
Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/2917#discussion_r92620428
  
--- Diff: flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala ---
@@ -102,21 +102,24 @@ object AkkaUtils {
   * specified, then the actor system will listen on the respective address.
   *
   * @param configuration instance containing the user provided configuration values
-  * @param listeningAddress optional tuple of hostname and port to listen on. If None is given,
-  *                         then an Akka config for local actor system will be returned
+  * @param externalAddress optional tuple of hostname and port to be reachable at.
+  *                        If None is given, then an Akka config for local actor system
+  *                        will be returned
   * @return Akka config
   */
  @throws(classOf[UnknownHostException])
  def getAkkaConfig(configuration: Configuration,
-                   listeningAddress: Option[(String, Int)]): Config = {
+                   externalAddress: Option[(String, Int)]): Config = {
    val defaultConfig = getBasicAkkaConfig(configuration)

-    listeningAddress match {
+    externalAddress match {

      case Some((hostname, port)) =>
-        val ipAddress = InetAddress.getByName(hostname)
-        val hostString = "\"" + NetUtils.ipAddressToUrlString(ipAddress) + "\""
-        val remoteConfig = getRemoteAkkaConfig(configuration, hostString, port)
+
+        val remoteConfig = getRemoteAkkaConfig(configuration,
+          NetUtils.getWildcardIPAddress, port,
--- End diff --

Maybe we could add a comment here that we choose the wildcard IP to bind to 
all network interfaces.




[jira] [Created] (FLINK-5344) docs don't build in dockerized jekyll

2016-12-15 Thread David Anderson (JIRA)
David Anderson created FLINK-5344:
-

 Summary: docs don't build in dockerized jekyll
 Key: FLINK-5344
 URL: https://issues.apache.org/jira/browse/FLINK-5344
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Reporter: David Anderson
Priority: Minor


The recent Gemfile update doesn't work with the ruby in the provided dockerized 
jekyll environment. 





[jira] [Commented] (FLINK-5008) Update quickstart documentation

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751521#comment-15751521
 ] 

ASF GitHub Bot commented on FLINK-5008:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/2764#discussion_r92619301
  
--- Diff: README.md ---
@@ -104,25 +104,11 @@ Check out our [Setting up 
IntelliJ](https://github.com/apache/flink/blob/master/
 
 ### Eclipse Scala IDE
 
-For Eclipse users, we recommend using Scala IDE 3.0.3, based on Eclipse 
Kepler. While this is a slightly older version,
-we found it to be the version that works most robustly for a complex 
project like Flink.
-
-Further details, and a guide to newer Scala IDE versions can be found in 
the
-[How to setup 
Eclipse](https://github.com/apache/flink/blob/master/docs/internals/ide_setup.md#eclipse)
 docs.
-
-**Note:** Before following this setup, make sure to run the build from the 
command line once
-(`mvn clean install -DskipTests`, see above)
-
-1. Download the Scala IDE (preferred) or install the plugin to Eclipse 
Kepler. See 
-   [How to setup 
Eclipse](https://github.com/apache/flink/blob/master/docs/internals/ide_setup.md#eclipse)
 for download links and instructions.
-2. Add the "macroparadise" compiler plugin to the Scala compiler.
-   Open "Window" -> "Preferences" -> "Scala" -> "Compiler" -> "Advanced" 
and put into the "Xplugin" field the path to
-   the *macroparadise* jar file (typically 
"/home/*-your-user-*/.m2/repository/org/scalamacros/paradise_2.10.4/2.0.1/paradise_2.10.4-2.0.1.jar").
-   Note: If you do not have the jar file, you probably did not run the 
command line build.
-3. Import the Flink Maven projects ("File" -> "Import" -> "Maven" -> 
"Existing Maven Projects") 
-4. During the import, Eclipse will ask to automatically install additional 
Maven build helper plugins.
-5. Close the "flink-java8" project. Since Eclipse Kepler does not support 
Java 8, you cannot develop this project.
+**NOTE:** From our experience, this setup does not work with Flink
--- End diff --

Nice catch - I've changed that now.


> Update quickstart documentation
> ---
>
> Key: FLINK-5008
> URL: https://issues.apache.org/jira/browse/FLINK-5008
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Minor
>
> * The IDE setup documentation of Flink is outdated in both parts: IntelliJ 
> IDEA was based on an old version and Eclipse/Scala IDE does not work at all 
> anymore.
> * The example in the "Quickstart: Setup" is outdated and requires "." to be 
> in the path.





[GitHub] flink pull request #2764: [FLINK-5008] Update quickstart documentation

2016-12-15 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/2764#discussion_r92619301
  
--- Diff: README.md ---
@@ -104,25 +104,11 @@ Check out our [Setting up 
IntelliJ](https://github.com/apache/flink/blob/master/
 
 ### Eclipse Scala IDE
 
-For Eclipse users, we recommend using Scala IDE 3.0.3, based on Eclipse 
Kepler. While this is a slightly older version,
-we found it to be the version that works most robustly for a complex 
project like Flink.
-
-Further details, and a guide to newer Scala IDE versions can be found in 
the
-[How to setup 
Eclipse](https://github.com/apache/flink/blob/master/docs/internals/ide_setup.md#eclipse)
 docs.
-
-**Note:** Before following this setup, make sure to run the build from the 
command line once
-(`mvn clean install -DskipTests`, see above)
-
-1. Download the Scala IDE (preferred) or install the plugin to Eclipse 
Kepler. See 
-   [How to setup 
Eclipse](https://github.com/apache/flink/blob/master/docs/internals/ide_setup.md#eclipse)
 for download links and instructions.
-2. Add the "macroparadise" compiler plugin to the Scala compiler.
-   Open "Window" -> "Preferences" -> "Scala" -> "Compiler" -> "Advanced" 
and put into the "Xplugin" field the path to
-   the *macroparadise* jar file (typically 
"/home/*-your-user-*/.m2/repository/org/scalamacros/paradise_2.10.4/2.0.1/paradise_2.10.4-2.0.1.jar").
-   Note: If you do not have the jar file, you probably did not run the 
command line build.
-3. Import the Flink Maven projects ("File" -> "Import" -> "Maven" -> 
"Existing Maven Projects") 
-4. During the import, Eclipse will ask to automatically install additional 
Maven build helper plugins.
-5. Close the "flink-java8" project. Since Eclipse Kepler does not support 
Java 8, you cannot develop this project.
+**NOTE:** From our experience, this setup does not work with Flink
--- End diff --

Nice catch - I've changed that now.




[jira] [Commented] (FLINK-4611) Make "AUTO" credential provider as default for Kinesis Connector

2016-12-15 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751466#comment-15751466
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-4611:


Resolved in {{master}} via 
http://git-wip-us.apache.org/repos/asf/flink/commit/4666e65.

Thank you for the contribution [~tonywei]!

> Make "AUTO" credential provider as default for Kinesis Connector
> 
>
> Key: FLINK-4611
> URL: https://issues.apache.org/jira/browse/FLINK-4611
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Wei-Che Wei
> Fix For: 1.2.0
>
>
> Right now, the Kinesis Consumer / Producer by default expects the access key 
> id and secret access key to be given directly in the config properties.
> This isn't good practice for accessing AWS services; Kinesis users are most 
> likely running their Flink application on AWS instances with embedded 
> credentials that can be accessed via the default credential provider chain. 
> Therefore, it makes sense to change the default {{AWS_CREDENTIALS_PROVIDER}} 
> to {{AUTO}} instead of {{BASIC}}.
> To avoid breaking user code, we only use directly supplied AWS credentials if 
> both the access key and secret key are given through {{AWS_ACCESS_KEY}} and 
> {{AWS_SECRET_KEY}}. Otherwise, the default credential provider chain is used.
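
A hedged sketch of the described fallback logic; the property keys and helper name below are illustrative, not the connector's actual constants:

{code}
import java.util.Properties
import com.amazonaws.auth.{AWSCredentialsProvider, AWSStaticCredentialsProvider,
  BasicAWSCredentials, DefaultAWSCredentialsProviderChain}

// hypothetical helper: use BASIC only when both keys are present, otherwise AUTO
def resolveCredentials(config: Properties): AWSCredentialsProvider = {
  val accessKey = config.getProperty("aws.access.key")
  val secretKey = config.getProperty("aws.secret.key")
  if (accessKey != null && secretKey != null) {
    // both keys supplied: keep the old BASIC behavior
    new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey, secretKey))
  } else {
    // AUTO: env vars, system properties, profile file, instance metadata, ...
    new DefaultAWSCredentialsProviderChain()
  }
}
{code}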





[GitHub] flink issue #2723: [FLINK-3617] Fixed NPE from CaseClassSerializer

2016-12-15 Thread chermenin
Github user chermenin commented on the issue:

https://github.com/apache/flink/pull/2723
  
No, I'm afraid I have to disagree with you... IMHO, `null` is a value of a 
concrete type, and it must be processed by the individual serializer for that type.




[jira] [Commented] (FLINK-4611) Make "AUTO" credential provider as default for Kinesis Connector

2016-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751463#comment-15751463
 ] 

ASF GitHub Bot commented on FLINK-4611:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2914


> Make "AUTO" credential provider as default for Kinesis Connector
> 
>
> Key: FLINK-4611
> URL: https://issues.apache.org/jira/browse/FLINK-4611
> Project: Flink
>  Issue Type: Improvement
>  Components: Kinesis Connector
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Wei-Che Wei
> Fix For: 1.2.0
>
>
> Right now, the Kinesis Consumer / Producer by default expects the access key 
> id and secret access key to be given directly in the config properties.
> This isn't good practice for accessing AWS services; Kinesis users are most 
> likely running their Flink application on AWS instances with embedded 
> credentials that can be accessed via the default credential provider chain. 
> Therefore, it makes sense to change the default {{AWS_CREDENTIALS_PROVIDER}} 
> to {{AUTO}} instead of {{BASIC}}.
> To avoid breaking user code, we only use directly supplied AWS credentials if 
> both the access key and secret key are given through {{AWS_ACCESS_KEY}} and 
> {{AWS_SECRET_KEY}}. Otherwise, the default credential provider chain is used.




