[jira] [Commented] (FLINK-2168) Add fromHBase() to TableEnvironment

2015-09-29 Thread Fabian Hueske (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934763#comment-14934763
 ] 

Fabian Hueske commented on FLINK-2168:
--

Hi, glad to hear that you would like to contribute to Flink!
I assigned the issue to you. 

Please let us know, if you have any questions. 
Looking forward to your contribution, Fabian

> Add fromHBase() to TableEnvironment
> ---
>
> Key: FLINK-2168
> URL: https://issues.apache.org/jira/browse/FLINK-2168
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API
>Affects Versions: 0.9
>Reporter: Fabian Hueske
>Assignee: Wilmer DAZA
>Priority: Minor
>  Labels: starter
>
> Add a {{fromHBase()}} method to the {{TableEnvironment}} to read a {{Table}} 
> from an HBase table.
> The implementation could reuse Flink's HBaseInputFormat.
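
For illustration, a minimal sketch of what such a method could look like, assuming the
0.9-era Java Table API ({{TableEnvironment#fromDataSet()}}) and the HBase addon's
{{TableInputFormat}} (the "HBaseInputFormat" referenced above); the {{fromHBase()}}
helper itself is hypothetical, since it is exactly what this issue proposes to add:

{code:java}
import org.apache.flink.addons.hbase.TableInputFormat;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.table.TableEnvironment;
import org.apache.flink.api.java.tuple.Tuple;
import org.apache.flink.api.table.Table;

public class FromHBaseSketch {

    /** Hypothetical helper: read an HBase table via a user-supplied input format
     *  and expose it as a Table. */
    public static <T extends Tuple> Table fromHBase(
            ExecutionEnvironment env,
            TableEnvironment tableEnv,
            TableInputFormat<T> hbaseFormat,
            TypeInformation<T> rowType) {

        // Run the HBase input format as a regular DataSet source ...
        DataSet<T> rows = env.createInput(hbaseFormat, rowType);
        // ... and wrap the DataSet as a Table (fields f0, f1, ...).
        return tableEnv.fromDataSet(rows);
    }
}
{code}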



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2169) Add fromParquet() to TableEnvironment

2015-09-29 Thread rerngvit yanggratoke (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934814#comment-14934814
 ] 

rerngvit yanggratoke commented on FLINK-2169:
-

[~aljoscha] Thanks for letting me know. I will look for something else then.

> Add fromParquet() to TableEnvironment
> -
>
> Key: FLINK-2169
> URL: https://issues.apache.org/jira/browse/FLINK-2169
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API
>Affects Versions: 0.9
>Reporter: Fabian Hueske
>Priority: Minor
>  Labels: starter
>
> Add a {{fromParquet()}} method to the {{TableEnvironment}} to read a 
> {{Table}} from a Parquet file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2168) Add fromHBase() to TableEnvironment

2015-09-29 Thread Wilmer DAZA (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934827#comment-14934827
 ] 

Wilmer DAZA commented on FLINK-2168:


[~aljoscha] thanks so much for the info ... I'd better do that.

> Add fromHBase() to TableEnvironment
> ---
>
> Key: FLINK-2168
> URL: https://issues.apache.org/jira/browse/FLINK-2168
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API
>Affects Versions: 0.9
>Reporter: Fabian Hueske
>Assignee: Wilmer DAZA
>Priority: Minor
>  Labels: starter
>
> Add a {{fromHBase()}} method to the {{TableEnvironment}} to read a {{Table}} 
> from an HBase table.
> The implementation could reuse Flink's HBaseInputFormat.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (FLINK-2168) Add fromHBase() to TableEnvironment

2015-09-29 Thread Wilmer DAZA (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14933174#comment-14933174
 ] 

Wilmer DAZA edited comment on FLINK-2168 at 9/29/15 6:53 AM:
-

Hello [~fhueske], I saw this is a starter issue, and I want to start 
contributing to Apache Flink. I would love to take this issue and try to solve 
it. Thank you


was (Author: wilmer):
I saw this is a starter issue, and I want to start contributing to Apache 
Flink. I would love to take this issue and try to solve it. Thank you

> Add fromHBase() to TableEnvironment
> ---
>
> Key: FLINK-2168
> URL: https://issues.apache.org/jira/browse/FLINK-2168
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API
>Affects Versions: 0.9
>Reporter: Fabian Hueske
>Priority: Minor
>  Labels: starter
>
> Add a {{fromHBase()}} method to the {{TableEnvironment}} to read a {{Table}} 
> from an HBase table.
> The implementation could reuse Flink's HBaseInputFormat.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2775) [CLEANUP] Cleanup code as part of theme to be more consistent on Utils classes

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934846#comment-14934846
 ] 

ASF GitHub Bot commented on FLINK-2775:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/1189#discussion_r40646086
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/util/ClassLoaderUtil.java 
---
@@ -129,4 +129,11 @@ public static boolean 
validateClassLoadable(ClassNotFoundException cnfe, ClassLo
return false;
}
}
+
+   /**
+* No instantiation.
--- End diff --

different javadoc compared to other private constructors


> [CLEANUP] Cleanup code as part of theme to be more consistent on Utils classes
> --
>
> Key: FLINK-2775
> URL: https://issues.apache.org/jira/browse/FLINK-2775
> Project: Flink
>  Issue Type: Improvement
>Reporter: Henry Saputra
>Assignee: Henry Saputra
>Priority: Minor
>
> As part of the theme to help make the code more consistent, add cleanup to 
> Utils classes:
> -) Add final class modifier to the XXXUtils and XXXUtil classes to make sure 
> they cannot be extended.
> -) Add missing Javadoc class headers to some public classes.
> -) Add private constructor to Utils classes to avoid instantiation.
> -) Remove unused 
> test/java/org/apache/flink/test/recordJobs/util/ConfigUtils.java class
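
For reference, a small sketch of the convention this cleanup (and the review comments in
this thread) aims for: a Javadoc class header, a final class, and a private constructor
that documents and enforces non-instantiation. The class name is made up for illustration:

{code:java}
/**
 * Utility methods for some concern (Javadoc class header).
 */
public final class SomeUtils {

    /**
     * No instantiation.
     */
    private SomeUtils() {
        throw new RuntimeException();
    }

    // static helper methods go here
}
{code}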



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2775) [CLEANUP] Cleanup code as part of theme to be more consistent on Utils classes

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934847#comment-14934847
 ] 

ASF GitHub Bot commented on FLINK-2775:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/1189#discussion_r40646129
  
--- Diff: 
flink-staging/flink-streaming/flink-streaming-core/src/main/java/org/apache/flink/streaming/util/ClusterUtil.java
 ---
@@ -26,11 +26,17 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-
-public class ClusterUtil {
+/**
+ * Utility class to manage mini cluster for Apache Flink.
+ */
+public final class ClusterUtil {

private static final Logger LOG = 
LoggerFactory.getLogger(ClusterUtil.class);
 
+   private ClusterUtil() {
+   // Prevent instantiation.
--- End diff --

no javadoc; doesn't throw a runtime exception like the other constructors.


> [CLEANUP] Cleanup code as part of theme to be more consistent on Utils classes
> --
>
> Key: FLINK-2775
> URL: https://issues.apache.org/jira/browse/FLINK-2775
> Project: Flink
>  Issue Type: Improvement
>Reporter: Henry Saputra
>Assignee: Henry Saputra
>Priority: Minor
>
> As part of the theme to help make the code more consistent, add cleanup to 
> Utils classes:
> -) Add final class modifier to the XXXUtils and XXXUtil classes to make sure 
> they cannot be extended.
> -) Add missing Javadoc class headers to some public classes.
> -) Add private constructor to Utils classes to avoid instantiation.
> -) Remove unused 
> test/java/org/apache/flink/test/recordJobs/util/ConfigUtils.java class



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2169) Add fromParquet() to TableEnvironment

2015-09-29 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934804#comment-14934804
 ] 

Aljoscha Krettek commented on FLINK-2169:
-

There is currently an open PR (https://github.com/apache/flink/pull/1127) by 
[~twalthr] that adds {{fromHCat}} to TableEnvironment along with all the 
required plumbing. So maybe you should wait for him, or at least coordinate 
with him.

> Add fromParquet() to TableEnvironment
> -
>
> Key: FLINK-2169
> URL: https://issues.apache.org/jira/browse/FLINK-2169
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API
>Affects Versions: 0.9
>Reporter: Fabian Hueske
>Priority: Minor
>  Labels: starter
>
> Add a {{fromParquet()}} method to the {{TableEnvironment}} to read a 
> {{Table}} from a Parquet file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (FLINK-2168) Add fromHBase() to TableEnvironment

2015-09-29 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934803#comment-14934803
 ] 

Aljoscha Krettek edited comment on FLINK-2168 at 9/29/15 7:50 AM:
--

There is currently an open PR (https://github.com/apache/flink/pull/1127) by 
[~twalthr] that adds {{fromHCat}} to TableEnvironment along with all the 
required plumbing. So maybe you should wait for him, or at least coordinate 
with him.


was (Author: aljoscha):
There is currently an open pull request by [~twalthr] that adds {{fromHCat}} to 
TableEnvironment along with all the required plumbing. So maybe you should wait 
for him, or at least coordinate with him.

> Add fromHBase() to TableEnvironment
> ---
>
> Key: FLINK-2168
> URL: https://issues.apache.org/jira/browse/FLINK-2168
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API
>Affects Versions: 0.9
>Reporter: Fabian Hueske
>Assignee: Wilmer DAZA
>Priority: Minor
>  Labels: starter
>
> Add a {{fromHBase()}} method to the {{TableEnvironment}} to read a {{Table}} 
> from an HBase table.
> The implementation could reuse Flink's HBaseInputFormat.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2168) Add fromHBase() to TableEnvironment

2015-09-29 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934803#comment-14934803
 ] 

Aljoscha Krettek commented on FLINK-2168:
-

There is currently an open pull request by [~twalthr] that adds {{fromHCat}} to 
TableEnvironment along with all the required plumbing. So maybe you should wait 
for him, or at least coordinate with him.

> Add fromHBase() to TableEnvironment
> ---
>
> Key: FLINK-2168
> URL: https://issues.apache.org/jira/browse/FLINK-2168
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API
>Affects Versions: 0.9
>Reporter: Fabian Hueske
>Assignee: Wilmer DAZA
>Priority: Minor
>  Labels: starter
>
> Add a {{fromHBase()}} method to the {{TableEnvironment}} to read a {{Table}} 
> from an HBase table.
> The implementation could reuse Flink's HBaseInputFormat.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-2777) The description of programming_guide.html has some issues

2015-09-29 Thread chenliang613 (JIRA)
chenliang613 created FLINK-2777:
---

 Summary: The description of programming_guide.html has some issues
 Key: FLINK-2777
 URL: https://issues.apache.org/jira/browse/FLINK-2777
 Project: Flink
  Issue Type: Bug
Reporter: chenliang613
Assignee: chenliang613
Priority: Minor


def createLocalEnvironment(parallelism: Int = 
Runtime.getRuntime.availableProcessors())) 
//issue1: In the end, there is an extra ")"

def createRemoteEnvironment(host: String, port: String, jarFiles: String*)
def createRemoteEnvironment(host: String, port: String, parallelism: Int, 
jarFiles: String*)
//issue2: the parameter of port should be "Int", not "String"





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2775] [CLEANUP] Cleanup code as part of...

2015-09-29 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/1189#discussion_r40645927
  
--- Diff: 
flink-staging/flink-gelly/src/main/java/org/apache/flink/graph/example/utils/ExampleUtils.java
 ---
@@ -117,7 +117,7 @@ public void configure(Configuration parameters) {
new FlatMapFunction>() {
@Override
public void flatMap(Long key,
-   Collector> out)
+   
Collector> out)
--- End diff --

formatting


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2775) [CLEANUP] Cleanup code as part of theme to be more consistent on Utils classes

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934844#comment-14934844
 ] 

ASF GitHub Bot commented on FLINK-2775:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/1189#discussion_r40645927
  
--- Diff: 
flink-staging/flink-gelly/src/main/java/org/apache/flink/graph/example/utils/ExampleUtils.java
 ---
@@ -117,7 +117,7 @@ public void configure(Configuration parameters) {
new FlatMapFunction>() {
@Override
public void flatMap(Long key,
-   Collector> out)
+   
Collector> out)
--- End diff --

formatting


> [CLEANUP] Cleanup code as part of theme to be more consistent on Utils classes
> --
>
> Key: FLINK-2775
> URL: https://issues.apache.org/jira/browse/FLINK-2775
> Project: Flink
>  Issue Type: Improvement
>Reporter: Henry Saputra
>Assignee: Henry Saputra
>Priority: Minor
>
> As part of the theme to help make the code more consistent, add cleanup to 
> Utils classes:
> -) Add final class modifier to the XXXUtils and XXXUtil classes to make sure 
> they cannot be extended.
> -) Add missing Javadoc class headers to some public classes.
> -) Add private constructor to Utils classes to avoid instantiation.
> -) Remove unused 
> test/java/org/apache/flink/test/recordJobs/util/ConfigUtils.java class



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-2168) Add fromHBase() to TableEnvironment

2015-09-29 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-2168:
-
Assignee: Wilmer DAZA

> Add fromHBase() to TableEnvironment
> ---
>
> Key: FLINK-2168
> URL: https://issues.apache.org/jira/browse/FLINK-2168
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API
>Affects Versions: 0.9
>Reporter: Fabian Hueske
>Assignee: Wilmer DAZA
>Priority: Minor
>  Labels: starter
>
> Add a {{fromHBase()}} method to the {{TableEnvironment}} to read a {{Table}} 
> from an HBase table.
> The implementation could reuse Flink's HBaseInputFormat.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2776) Print job id to standard out on CLI job submission

2015-09-29 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934809#comment-14934809
 ] 

Aljoscha Krettek commented on FLINK-2776:
-

+2

> Print job id to standard out on CLI job submission
> --
>
> Key: FLINK-2776
> URL: https://issues.apache.org/jira/browse/FLINK-2776
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 0.9, 0.10
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
> Fix For: 0.10, 0.9.2
>
>
> When executing Flink jobs, the job id is printed as part of the JobClient and 
> JobManager log messages. This information is available in the client log file 
> but not printed to standard out by default when using the CLI client.
> Some users have requested that the job identifier should be printed to 
> standard out such that it is available in detached execution mode. In 
> detached execution mode, the job id is important for querying the status of a 
> submitted job.
> We could ask users to change the log4j-cli.properties settings. IMHO a better 
> solution would be to always print the job id to standard out for submitted 
> jobs.
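
As a rough sketch of the proposed behavior (not the actual CliFrontend code), the CLI
could unconditionally print the id once a {{JobSubmissionResult}} is available; the
method and message below are made up:

{code:java}
import org.apache.flink.api.common.JobSubmissionResult;

public class PrintJobIdSketch {

    /** Hypothetical hook, called by the CLI right after a job was submitted. */
    static void printSubmittedJobId(JobSubmissionResult result) {
        // Always goes to standard out, independent of the log4j CLI settings.
        System.out.println("Job has been submitted with JobID " + result.getJobID());
    }
}
{code}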



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2764) WebClient cannot display multiple Jobs

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934807#comment-14934807
 ] 

ASF GitHub Bot commented on FLINK-2764:
---

Github user aljoscha commented on the pull request:

https://github.com/apache/flink/pull/1181#issuecomment-143976787
  
Is this just a fix for the milestone-1 or the master?


> WebClient cannot display multiple Jobs
> --
>
> Key: FLINK-2764
> URL: https://issues.apache.org/jira/browse/FLINK-2764
> Project: Flink
>  Issue Type: Bug
>  Components: Web Client
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Blocker
>
> If multiple jars are uploaded to the WebClient, only a single job is listed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2764] [WebClient] [0.10.0-milestone-1] ...

2015-09-29 Thread aljoscha
Github user aljoscha commented on the pull request:

https://github.com/apache/flink/pull/1181#issuecomment-143976787
  
Is this just a fix for the milestone-1 or the master?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: [FLINK-2775] [CLEANUP] Cleanup code as part of...

2015-09-29 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/1189#discussion_r40646086
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/util/ClassLoaderUtil.java 
---
@@ -129,4 +129,11 @@ public static boolean 
validateClassLoadable(ClassNotFoundException cnfe, ClassLo
return false;
}
}
+
+   /**
+* No instantiation.
--- End diff --

different javadoc compared to other private constructors


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: [FLINK-2775] [CLEANUP] Cleanup code as part of...

2015-09-29 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/1189#discussion_r40646129
  
--- Diff: 
flink-staging/flink-streaming/flink-streaming-core/src/main/java/org/apache/flink/streaming/util/ClusterUtil.java
 ---
@@ -26,11 +26,17 @@
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-
-public class ClusterUtil {
+/**
+ * Utility class to manage mini cluster for Apache Flink.
+ */
+public final class ClusterUtil {

private static final Logger LOG = 
LoggerFactory.getLogger(ClusterUtil.class);
 
+   private ClusterUtil() {
+   // Prevent instantiation.
--- End diff --

no javadoc; doesn't throw a runtime exception like the other constructors.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (FLINK-2777) The description of programming_guide.html has some issues

2015-09-29 Thread chenliang613 (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenliang613 updated FLINK-2777:

Labels:   (was: documentation)

> The description of programming_guide.html has some issues
> -
>
> Key: FLINK-2777
> URL: https://issues.apache.org/jira/browse/FLINK-2777
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: chenliang613
>Assignee: chenliang613
>Priority: Minor
> Fix For: 0.10
>
>
> def createLocalEnvironment(parallelism: Int = 
> Runtime.getRuntime.availableProcessors())) 
> //issue1: In the end, there is an extra ")"
> def createRemoteEnvironment(host: String, port: String, jarFiles: String*)
> def createRemoteEnvironment(host: String, port: String, parallelism: Int, 
> jarFiles: String*)
> //issue2: the parameter of port should be "Int", not "String"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-2537) Add scala examples.jar to build-target/examples

2015-09-29 Thread chenliang613 (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenliang613 closed FLINK-2537.
---
Resolution: Not A Problem

> Add scala examples.jar to build-target/examples
> ---
>
> Key: FLINK-2537
> URL: https://issues.apache.org/jira/browse/FLINK-2537
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Affects Versions: 0.10
>Reporter: chenliang613
>Assignee: chenliang613
>Priority: Minor
>  Labels: maven
> Fix For: 0.10
>
>
> Scala as a functional programming language has been adopted by more and more 
> developers, and some newcomers may want to modify the Scala examples' code to 
> better understand Flink's mechanics. After changing the Scala code, they may 
> check the result like this: 
> 1. go to "build-target/bin" and start the server
> 2. use the web UI to upload the Scala examples' jar
> 3. at this point they would be confused about why their changes are not picked up.
> Because build-target/examples only copies the Java examples, I suggest adding the 
> Scala examples as well.
> The new directory layout would look like this:
> build-target/examples/java
> build-target/examples/scala



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-2777) The description of programming_guide.html has some issues

2015-09-29 Thread chenliang613 (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenliang613 updated FLINK-2777:

Component/s: Documentation

> The description of programming_guide.html has some issues
> -
>
> Key: FLINK-2777
> URL: https://issues.apache.org/jira/browse/FLINK-2777
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: chenliang613
>Assignee: chenliang613
>Priority: Minor
>  Labels: documentation
> Fix For: 0.10
>
>
> def createLocalEnvironment(parallelism: Int = 
> Runtime.getRuntime.availableProcessors())) 
> //issue1: In the end, there is an extra ")"
> def createRemoteEnvironment(host: String, port: String, jarFiles: String*)
> def createRemoteEnvironment(host: String, port: String, parallelism: Int, 
> jarFiles: String*)
> //issue2: the parameter of port should be "Int", not "String"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-2777) The description of programming_guide.html has some issues

2015-09-29 Thread chenliang613 (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenliang613 updated FLINK-2777:

Fix Version/s: 0.10

> The description of programming_guide.html has some issues
> -
>
> Key: FLINK-2777
> URL: https://issues.apache.org/jira/browse/FLINK-2777
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: chenliang613
>Assignee: chenliang613
>Priority: Minor
>  Labels: documentation
> Fix For: 0.10
>
>
> def createLocalEnvironment(parallelism: Int = 
> Runtime.getRuntime.availableProcessors())) 
> //issue1: In the end, there is an extra ")"
> def createRemoteEnvironment(host: String, port: String, jarFiles: String*)
> def createRemoteEnvironment(host: String, port: String, parallelism: Int, 
> jarFiles: String*)
> //issue2: the parameter of port should be "Int", not "String"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-2779) Update documentation to reflect new Stream/Window API

2015-09-29 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-2779:

Assignee: Kostas Tzoumas  (was: Aljoscha Krettek)

> Update documentation to reflect new Stream/Window API
> -
>
> Key: FLINK-2779
> URL: https://issues.apache.org/jira/browse/FLINK-2779
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Assignee: Kostas Tzoumas
> Fix For: 0.10
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-2779) Update documentation to reflect new Stream/Window API

2015-09-29 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-2779:
---

Assignee: Aljoscha Krettek

> Update documentation to reflect new Stream/Window API
> -
>
> Key: FLINK-2779
> URL: https://issues.apache.org/jira/browse/FLINK-2779
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
> Fix For: 0.10
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2167) Add fromHCat() to TableEnvironment

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14935204#comment-14935204
 ] 

ASF GitHub Bot commented on FLINK-2167:
---

Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/1127#discussion_r40674158
  
--- Diff: flink-staging/flink-table/pom.xml ---
@@ -37,7 +37,7 @@ under the License.

com.google.guava
guava
-   ${guava.version}
+   14.0.1
--- End diff --

What exactly was the problem with the hcat input format and guava?
If their guava version is really incompatible with ours, we can do the 
following:
- Use a newer hcat version
- create a special shaded-hcat maven module which shades away hcat's guava.


> Add fromHCat() to TableEnvironment
> --
>
> Key: FLINK-2167
> URL: https://issues.apache.org/jira/browse/FLINK-2167
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API
>Affects Versions: 0.9
>Reporter: Fabian Hueske
>Assignee: Timo Walther
>Priority: Minor
>  Labels: starter
>
> Add a {{fromHCat()}} method to the {{TableEnvironment}} to read a {{Table}} 
> from an HCatalog table.
> The implementation could reuse Flink's HCatInputFormat.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2167] [table] Add fromHCat() to TableEn...

2015-09-29 Thread rmetzger
Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/flink/pull/1127#discussion_r40674158
  
--- Diff: flink-staging/flink-table/pom.xml ---
@@ -37,7 +37,7 @@ under the License.

com.google.guava
guava
-   ${guava.version}
+   14.0.1
--- End diff --

What exactly was the problem with the hcat input format and guava?
If their guava version is really incompatible with ours, we can do the 
following:
- Use a newer hcat version
- create a special shaded-hcat maven module which shades away hcat's guava.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2167) Add fromHCat() to TableEnvironment

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14935205#comment-14935205
 ] 

ASF GitHub Bot commented on FLINK-2167:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/1127#discussion_r40674516
  
--- Diff: flink-staging/flink-table/pom.xml ---
@@ -37,7 +37,7 @@ under the License.

com.google.guava
guava
-   ${guava.version}
+   14.0.1
--- End diff --

Thanks so far, I will look into this issue again and will report.


> Add fromHCat() to TableEnvironment
> --
>
> Key: FLINK-2167
> URL: https://issues.apache.org/jira/browse/FLINK-2167
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API
>Affects Versions: 0.9
>Reporter: Fabian Hueske
>Assignee: Timo Walther
>Priority: Minor
>  Labels: starter
>
> Add a {{fromHCat()}} method to the {{TableEnvironment}} to read a {{Table}} 
> from an HCatalog table.
> The implementation could reuse Flink's HCatInputFormat.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2167] [table] Add fromHCat() to TableEn...

2015-09-29 Thread twalthr
Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/1127#discussion_r40674516
  
--- Diff: flink-staging/flink-table/pom.xml ---
@@ -37,7 +37,7 @@ under the License.

com.google.guava
guava
-   ${guava.version}
+   14.0.1
--- End diff --

Thanks so far, I will look into this issue again and will report.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-1753) Add more tests for Kafka Connectors

2015-09-29 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14935210#comment-14935210
 ] 

Stephan Ewen commented on FLINK-1753:
-

[~rmetzger] This one is done, correct?

> Add more tests for Kafka Connectors
> ---
>
> Key: FLINK-1753
> URL: https://issues.apache.org/jira/browse/FLINK-1753
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Affects Versions: 0.9
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>
> The current {{KafkaITCase}} is only doing a single test.
> We need to refactor that test so that it brings up a Kafka/Zookeeper server 
> and then performs various tests:
> Tests to include:
> - A topology with non-string types MERGED IN 359b39c3
> - A topology with a custom Kafka Partitioning class MERGED IN 359b39c3
> - A topology testing the regular {{KafkaSource}}. MERGED IN 359b39c3
> - Kafka broker failure MERGED IN cb34e976
> - Test with large records (up to 30 MB) MERGED IN 354922be
> - Flink TaskManager failure



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-2777) The description of programming_guide.html has some issues

2015-09-29 Thread chenliang613 (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenliang613 updated FLINK-2777:

Labels: documentation  (was: )

> The description of programming_guide.html has some issues
> -
>
> Key: FLINK-2777
> URL: https://issues.apache.org/jira/browse/FLINK-2777
> Project: Flink
>  Issue Type: Bug
>Reporter: chenliang613
>Assignee: chenliang613
>Priority: Minor
>  Labels: documentation
>
> def createLocalEnvironment(parallelism: Int = 
> Runtime.getRuntime.availableProcessors())) 
> //issue1: In the end, there is an extra ")"
> def createRemoteEnvironment(host: String, port: String, jarFiles: String*)
> def createRemoteEnvironment(host: String, port: String, parallelism: Int, 
> jarFiles: String*)
> //issue2: the parameter of port should be "Int", not "String"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2763) Bug in Hybrid Hash Join: Request to spill a partition with less than two buffers.

2015-09-29 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14935149#comment-14935149
 ] 

Stephan Ewen commented on FLINK-2763:
-

[~greghogan] Can you check whether the latest master solves your problem and 
close the issue if it does (reopen if it does not) ?

> Bug in Hybrid Hash Join: Request to spill a partition with less than two 
> buffers.
> -
>
> Key: FLINK-2763
> URL: https://issues.apache.org/jira/browse/FLINK-2763
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Runtime
>Affects Versions: master
>Reporter: Greg Hogan
>Assignee: Stephan Ewen
> Fix For: 0.10
>
>
> The following exception is thrown when running the example triangle listing 
> with an unmodified master build (4cadc3d6).
> {noformat}
> ./bin/flink run 
> ~/flink-examples/flink-java-examples/target/flink-java-examples-0.10-SNAPSHOT-EnumTrianglesOpt.jar
>  ~/rmat/undirected/s19_e8.ssv output
> {noformat}
> The only changes to {{flink-conf.yaml}} are {{taskmanager.numberOfTaskSlots: 
> 8}} and {{parallelism.default: 8}}.
> I have confirmed with input files 
> [s19_e8.ssv|https://drive.google.com/file/d/0B6TrSsnHj2HxR2lnMHR4amdyTnM/view?usp=sharing]
>  (40 MB) and 
> [s20_e8.ssv|https://drive.google.com/file/d/0B6TrSsnHj2HxNi1HbmptU29MTm8/view?usp=sharing]
>  (83 MB). On a second machine only the larger file caused the exception.
> {noformat}
> org.apache.flink.client.program.ProgramInvocationException: The program 
> execution failed: Job execution failed.
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:407)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:386)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:353)
>   at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:64)
>   at 
> org.apache.flink.examples.java.graph.EnumTrianglesOpt.main(EnumTrianglesOpt.java:125)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:434)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:350)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:290)
>   at 
> org.apache.flink.client.CliFrontend.executeProgramBlocking(CliFrontend.java:675)
>   at org.apache.flink.client.CliFrontend.run(CliFrontend.java:324)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:977)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1027)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:425)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:107)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:221)
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
>   at 

[jira] [Resolved] (FLINK-2763) Bug in Hybrid Hash Join: Request to spill a partition with less than two buffers.

2015-09-29 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-2763.
-
   Resolution: Fixed
Fix Version/s: 0.10

Fixed via af477563eb1acaab74da1a508c7e5fa37339c206

> Bug in Hybrid Hash Join: Request to spill a partition with less than two 
> buffers.
> -
>
> Key: FLINK-2763
> URL: https://issues.apache.org/jira/browse/FLINK-2763
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Runtime
>Affects Versions: master
>Reporter: Greg Hogan
>Assignee: Stephan Ewen
> Fix For: 0.10
>
>
> The following exception is thrown when running the example triangle listing 
> with an unmodified master build (4cadc3d6).
> {noformat}
> ./bin/flink run 
> ~/flink-examples/flink-java-examples/target/flink-java-examples-0.10-SNAPSHOT-EnumTrianglesOpt.jar
>  ~/rmat/undirected/s19_e8.ssv output
> {noformat}
> The only changes to {{flink-conf.yaml}} are {{taskmanager.numberOfTaskSlots: 
> 8}} and {{parallelism.default: 8}}.
> I have confirmed with input files 
> [s19_e8.ssv|https://drive.google.com/file/d/0B6TrSsnHj2HxR2lnMHR4amdyTnM/view?usp=sharing]
>  (40 MB) and 
> [s20_e8.ssv|https://drive.google.com/file/d/0B6TrSsnHj2HxNi1HbmptU29MTm8/view?usp=sharing]
>  (83 MB). On a second machine only the larger file caused the exception.
> {noformat}
> org.apache.flink.client.program.ProgramInvocationException: The program 
> execution failed: Job execution failed.
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:407)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:386)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:353)
>   at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:64)
>   at 
> org.apache.flink.examples.java.graph.EnumTrianglesOpt.main(EnumTrianglesOpt.java:125)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:434)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:350)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:290)
>   at 
> org.apache.flink.client.CliFrontend.executeProgramBlocking(CliFrontend.java:675)
>   at org.apache.flink.client.CliFrontend.run(CliFrontend.java:324)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:977)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1027)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:425)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:107)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:221)
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> 

[jira] [Resolved] (FLINK-1753) Add more tests for Kafka Connectors

2015-09-29 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger resolved FLINK-1753.
---
Resolution: Fixed

Yes, I'm closing the issue.

> Add more tests for Kafka Connectors
> ---
>
> Key: FLINK-1753
> URL: https://issues.apache.org/jira/browse/FLINK-1753
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Affects Versions: 0.9
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>
> The current {{KafkaITCase}} is only doing a single test.
> We need to refactor that test so that it brings up a Kafka/Zookeeper server 
> and then performs various tests:
> Tests to include:
> - A topology with non-string types MERGED IN 359b39c3
> - A topology with a custom Kafka Partitioning class MERGED IN 359b39c3
> - A topology testing the regular {{KafkaSource}}. MERGED IN 359b39c3
> - Kafka broker failure MERGED IN cb34e976
> - Test with large records (up to 30 MB) MERGED IN 354922be
> - Flink TaskManager failure



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2769] [dashboard] Remove hard coded job...

2015-09-29 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1185#issuecomment-144012726
  
Merging this...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2756) start/stop scripts fail in directories with spaces

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934957#comment-14934957
 ] 

ASF GitHub Bot commented on FLINK-2756:
---

Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1182#issuecomment-144014613
  
+1

Merging this...


> start/stop scripts fail in directories with spaces
> --
>
> Key: FLINK-2756
> URL: https://issues.apache.org/jira/browse/FLINK-2756
> Project: Flink
>  Issue Type: Bug
>  Components: Start-Stop Scripts
>Affects Versions: 0.10
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>Priority: Critical
> Fix For: 0.10
>
>
> The scripts to start and stop Flink (start/stop-local.sh, 
> start/stop-cluster.sh, start/stop*streaming.sh) fail if the {{bin}} folder 
> path contains spaces.
> This is working properly in Flink 0.9.1
> I assume that the problem is not with {{bin/config.sh}}, because 
> {{bin/start-webclient.sh}} is working fine. 
> Maybe something was broken by recent changes in {{bin/jobmanager.sh}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2653) Enable object reuse in MergeIterator

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934966#comment-14934966
 ] 

ASF GitHub Bot commented on FLINK-2653:
---

Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1115#issuecomment-144015985
  
Merging this...


> Enable object reuse in MergeIterator
> 
>
> Key: FLINK-2653
> URL: https://issues.apache.org/jira/browse/FLINK-2653
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Runtime
>Affects Versions: master
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>
> MergeIterator currently discards given reusable objects and simply returns a 
> new object from the JVM heap. This inefficiency has a noticeable impact on 
> garbage collection and runtime overhead (~5% overall performance by my 
> measure).
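
For context, a minimal sketch of the reuse contract in question, based on Flink's
{{MutableObjectIterator}} interface (the wrapper class itself is made up): an
object-reuse-aware iterator forwards the caller-provided instance instead of
allocating a new record per call.

{code:java}
import java.io.IOException;

import org.apache.flink.util.MutableObjectIterator;

/** Forwards the caller's reuse object to the wrapped iterator. */
public class ForwardingIterator<E> implements MutableObjectIterator<E> {

    private final MutableObjectIterator<E> source;

    public ForwardingIterator(MutableObjectIterator<E> source) {
        this.source = source;
    }

    @Override
    public E next(E reuse) throws IOException {
        // Reuse path: hand the reusable instance downstream instead of discarding it.
        return source.next(reuse);
    }

    @Override
    public E next() throws IOException {
        // Non-reuse path: a fresh object per call.
        return source.next();
    }
}
{code}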



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-2768) Wrong Java version requirements in "Quickstart: Scala API" page

2015-09-29 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-2768.
-
   Resolution: Fixed
Fix Version/s: 0.10

Fixed in 8d2289ea6981598ab4d2817192b3f171aa9d414d

Thank you for the contribution!

> Wrong Java version requirements in "Quickstart: Scala API" page
> ---
>
> Key: FLINK-2768
> URL: https://issues.apache.org/jira/browse/FLINK-2768
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 0.10
>Reporter: Chiwan Park
>Assignee: rerngvit yanggratoke
>  Labels: starter
> Fix For: 0.10
>
>
> Since Flink 0.10, we dropped Java 6 support. But the "[Quickstart: Scala 
> API|https://ci.apache.org/projects/flink/flink-docs-master/quickstart/scala_api_quickstart.html]"
>  page says that Java 6 is one of the minimum requirements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-2653) Enable object reuse in MergeIterator

2015-09-29 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-2653.
-
   Resolution: Fixed
Fix Version/s: 0.10

Fixed via 0a8df6d513fa59d650ff875bdf3a1613d0f14af5

Thank you for the contribution!

> Enable object reuse in MergeIterator
> 
>
> Key: FLINK-2653
> URL: https://issues.apache.org/jira/browse/FLINK-2653
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Runtime
>Affects Versions: master
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 0.10
>
>
> MergeIterator currently discards given reusable objects and simply returns a 
> new object from the JVM heap. This inefficiency has a noticeable impact on 
> garbage collection and runtime overhead (~5% overall performance by my 
> measure).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-2761) Prevent instantiation of new ExecutionEnvironments in the Scala Shell

2015-09-29 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-2761.
-
   Resolution: Fixed
Fix Version/s: 0.10

Fixed via 16afb8ec66a2a07733b9090bffe96af1e913bb63

Thank you for the contribution!

> Prevent instantiation of new ExecutionEnvironments in the Scala Shell
> -
>
> Key: FLINK-2761
> URL: https://issues.apache.org/jira/browse/FLINK-2761
> Project: Flink
>  Issue Type: Improvement
>  Components: Scala Shell
>Affects Versions: 0.10
>Reporter: Stephan Ewen
>Assignee: Sachin Goel
> Fix For: 0.10
>
>
> When someone mistakenly creates a new ExecutionEnvironment in the Scala 
> Shell, the programs don't work. The Scala Shell should prevent new 
> ExecutionEnvironment instantiations.
> That can be done by setting a context environment factory that throws an 
> error when attempting to create a new environment.
> See here for a user with that problem:
> http://stackoverflow.com/questions/32763052/flink-datasources-outputs-caused-an-error-could-not-read-the-user-code-wrappe/32765236#32765236
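
A minimal sketch of the factory idea, assuming the Java API's
{{ExecutionEnvironmentFactory}} interface; how the Scala Shell would install it as the
context environment factory is left out here, and the class name and message are made up:

{code:java}
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.ExecutionEnvironmentFactory;

/** Refuses to hand out new environments, failing fast with a clear message. */
public class DisallowingEnvironmentFactory implements ExecutionEnvironmentFactory {

    @Override
    public ExecutionEnvironment createExecutionEnvironment() {
        throw new UnsupportedOperationException(
                "Creating a new ExecutionEnvironment is not allowed in the Scala Shell; "
                + "please use the environment provided by the shell.");
    }
}
{code}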



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-2653) Enable object reuse in MergeIterator

2015-09-29 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-2653.
---

> Enable object reuse in MergeIterator
> 
>
> Key: FLINK-2653
> URL: https://issues.apache.org/jira/browse/FLINK-2653
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Runtime
>Affects Versions: master
>Reporter: Greg Hogan
>Assignee: Greg Hogan
> Fix For: 0.10
>
>
> MergeIterator currently discards given reusable objects and simply returns a 
> new object from the JVM heap. This inefficiency has a noticeable impact on 
> garbage collection and runtime overhead (~5% overall performance by my 
> measure).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-2761) Prevent instantiation of new ExecutionEnvironments in the Scala Shell

2015-09-29 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-2761.
---

> Prevent instantiation of new ExecutionEnvironments in the Scala Shell
> -
>
> Key: FLINK-2761
> URL: https://issues.apache.org/jira/browse/FLINK-2761
> Project: Flink
>  Issue Type: Improvement
>  Components: Scala Shell
>Affects Versions: 0.10
>Reporter: Stephan Ewen
>Assignee: Sachin Goel
> Fix For: 0.10
>
>
> When someone mistakenly creates a new ExecutionEnvironment in the Scala 
> Shell, the programs don't work. The Scala Shell should prevent new 
> ExecutionEnvironment instantiations.
> That can be done by setting a context environment factory that throws an 
> error when attempting to create a new environment.
> See here for a user with that problem:
> http://stackoverflow.com/questions/32763052/flink-datasources-outputs-caused-an-error-could-not-read-the-user-code-wrappe/32765236#32765236



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2753] [streaming] [api breaking] Add fi...

2015-09-29 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1175#issuecomment-144015350
  
Subsumed by #1184 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2753) Add new window API to streaming API

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934960#comment-14934960
 ] 

ASF GitHub Bot commented on FLINK-2753:
---

Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1175#issuecomment-144015350
  
Subsumed by #1184 


> Add new window API to streaming API
> ---
>
> Key: FLINK-2753
> URL: https://issues.apache.org/jira/browse/FLINK-2753
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming
>Affects Versions: 0.10
>Reporter: Stephan Ewen
>Assignee: Aljoscha Krettek
> Fix For: 0.10
>
>
> The API integration should follow the API design as documented here:
> https://cwiki.apache.org/confluence/display/FLINK/Streams+and+Operations+on+Streams
> This issue needs:
>   - Add new {{KeyedWindowDataStream}}, created by calling {{window()}} on the 
> {{KeyedDataStream}}.
>   - Add a utility that converts Window Policies into concrete window 
> implementations. This is important to realize special case implementations 
> that do not directly use generic mechanisms implemented by window policies.
>   - Instantiating the operators (dedicated and generic) from the window 
> policies
>   - I will add a stub for the new Window Policy classes, based on the 
> existing policy classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2754] FixedLengthRecordSorter can not w...

2015-09-29 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1178#issuecomment-144015272
  
Merging this...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2723) CopyableValue method to copy into new instance

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934962#comment-14934962
 ] 

ASF GitHub Bot commented on FLINK-2723:
---

Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1169#issuecomment-144015414
  
Merging this...


> CopyableValue method to copy into new instance
> --
>
> Key: FLINK-2723
> URL: https://issues.apache.org/jira/browse/FLINK-2723
> Project: Flink
>  Issue Type: New Feature
>  Components: Core
>Affects Versions: master
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
>
> Provide a method for generic user-defined functions to clone a 
> {{CopyableValue}}. A common use case is a {{GroupReduceFunction}} that needs 
> to store multiple objects. With object reuse we need to make a deep copy and 
> with type erasure we cannot call new.
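
A sketch of the use case, with the proposed copy-into-new-instance method written as
{{copy()}} for illustration (the function and types below are made up): a
{{GroupReduceFunction}} that must buffer its inputs has to deep-copy each element when
object reuse is enabled.

{code:java}
import java.util.ArrayList;
import java.util.List;

import org.apache.flink.api.common.functions.GroupReduceFunction;
import org.apache.flink.types.LongValue;
import org.apache.flink.util.Collector;

/** Buffers all values of a group before emitting them. */
public class CollectGroup implements GroupReduceFunction<LongValue, LongValue> {

    @Override
    public void reduce(Iterable<LongValue> values, Collector<LongValue> out) {
        List<LongValue> buffer = new ArrayList<LongValue>();
        for (LongValue value : values) {
            // With object reuse, 'value' may be the same instance on every
            // iteration, so store a deep copy (proposed CopyableValue#copy()).
            buffer.add(value.copy());
        }
        for (LongValue copy : buffer) {
            out.collect(copy);
        }
    }
}
{code}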



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-2778) Add API for non-parallel non-keyed Windows

2015-09-29 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-2778:
---

 Summary: Add API for non-parallel non-keyed Windows
 Key: FLINK-2778
 URL: https://issues.apache.org/jira/browse/FLINK-2778
 Project: Flink
  Issue Type: Sub-task
  Components: Streaming
Reporter: Aljoscha Krettek
Assignee: Aljoscha Krettek


This addresses the NonParallelWindowStream section in the design doc: 
https://cwiki.apache.org/confluence/display/FLINK/Streams+and+Operations+on+Streams



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-2779) Update documentation to reflect new Stream/Window API

2015-09-29 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-2779:
---

 Summary: Update documentation to reflect new Stream/Window API
 Key: FLINK-2779
 URL: https://issues.apache.org/jira/browse/FLINK-2779
 Project: Flink
  Issue Type: Sub-task
Reporter: Aljoscha Krettek






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-2723) CopyableValue method to copy into new instance

2015-09-29 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-2723.
---

> CopyableValue method to copy into new instance
> --
>
> Key: FLINK-2723
> URL: https://issues.apache.org/jira/browse/FLINK-2723
> Project: Flink
>  Issue Type: New Feature
>  Components: Core
>Affects Versions: master
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
> Fix For: 0.10
>
>
> Provide a method for generic user-defined functions to clone a 
> {{CopyableValue}}. A common use case is a {{GroupReduceFunction}} that needs 
> to store multiple objects. With object reuse we need to make a deep copy and 
> with type erasure we cannot call new.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-2754) FixedLengthRecordSorter can not write to output cross MemorySegments.

2015-09-29 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-2754.
---

> FixedLengthRecordSorter can not write to output cross MemorySegments.
> -
>
> Key: FLINK-2754
> URL: https://issues.apache.org/jira/browse/FLINK-2754
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Runtime
>Reporter: Chengxiang Li
>Assignee: Chengxiang Li
> Fix For: 0.10
>
>
> FixedLengthRecordSorter cannot write output across MemorySegments; it has 
> worked so far only because it was previously asked to write a single record at 
> a time. We should fix this and add more unit tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2761) Prevent instantiation of new ExecutionEnvironments in the Scala Shell

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934967#comment-14934967
 ] 

ASF GitHub Bot commented on FLINK-2761:
---

Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1180#issuecomment-144016570
  
Looks good, will merge this!


> Prevent instantiation of new ExecutionEnvironments in the Scala Shell
> -
>
> Key: FLINK-2761
> URL: https://issues.apache.org/jira/browse/FLINK-2761
> Project: Flink
>  Issue Type: Improvement
>  Components: Scala Shell
>Affects Versions: 0.10
>Reporter: Stephan Ewen
>Assignee: Sachin Goel
>
> When someone mistakenly creates a new ExecutionEnvironment in the Scala 
> Shell, the programs don't work. The Scala Shell should prevent new 
> ExecutionEnvironment instantiations.
> That can be done by setting a context environment factory that throws an 
> error when attempting to create a new environment.
> See here for a user with that problem:
> http://stackoverflow.com/questions/32763052/flink-datasources-outputs-caused-an-error-could-not-read-the-user-code-wrappe/32765236#32765236
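A minimal sketch of the guard idea from the description: a factory that refuses to create new environments. The factory interface shown ({{ExecutionEnvironmentFactory}}) is public API; how the Scala Shell would register it as the context environment factory is internal and omitted here.

{code:java}
// Hedged sketch: a factory that throws when a new environment is requested,
// as the issue suggests. Registration as the context environment factory is
// an internal detail and not shown.
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.ExecutionEnvironmentFactory;

public class ShellEnvironmentGuard implements ExecutionEnvironmentFactory {

    @Override
    public ExecutionEnvironment createExecutionEnvironment() {
        throw new UnsupportedOperationException(
            "Do not create a new ExecutionEnvironment inside the Scala Shell; "
                + "use the environment that the shell provides.");
    }
}
{code}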



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2761][scala-shell]Prevent creation of n...

2015-09-29 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1180#issuecomment-144016570
  
Looks good, will merge this!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (FLINK-7) [GitHub] Enable Range Partitioner

2015-09-29 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen updated FLINK-7:
-
Component/s: Distributed Runtime

> [GitHub] Enable Range Partitioner
> -
>
> Key: FLINK-7
> URL: https://issues.apache.org/jira/browse/FLINK-7
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Runtime
>Reporter: GitHub Import
> Fix For: pre-apache
>
>
> The range partitioner is currently disabled. We need to implement the 
> following aspects:
> 1) Distribution information, if available, must be propagated back together 
> with the ordering property.
> 2) A generic bucket lookup structure (currently specific to PactRecord).
> Tests to re-enable after fixing this issue:
>  - TeraSortITCase
>  - GlobalSortingITCase
>  - GlobalSortingMixedOrderITCase
>  Imported from GitHub 
> Url: https://github.com/stratosphere/stratosphere/issues/7
> Created by: [StephanEwen|https://github.com/StephanEwen]
> Labels: core, enhancement, optimizer, 
> Milestone: Release 0.4
> Assignee: [fhueske|https://github.com/fhueske]
> Created at: Fri Apr 26 13:48:24 CEST 2013
> State: open



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-7) [GitHub] Enable Range Partitioner

2015-09-29 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934979#comment-14934979
 ] 

Stephan Ewen commented on FLINK-7:
--

Good idea, +1

This is the issue with number 7 - which means it has been around for quite a 
while ;-)

> [GitHub] Enable Range Partitioner
> -
>
> Key: FLINK-7
> URL: https://issues.apache.org/jira/browse/FLINK-7
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Runtime
>Reporter: GitHub Import
> Fix For: pre-apache
>
>
> The range partitioner is currently disabled. We need to implement the 
> following aspects:
> 1) Distribution information, if available, must be propagated back together 
> with the ordering property.
> 2) A generic bucket lookup structure (currently specific to PactRecord).
> Tests to re-enable after fixing this issue:
>  - TeraSortITCase
>  - GlobalSortingITCase
>  - GlobalSortingMixedOrderITCase
>  Imported from GitHub 
> Url: https://github.com/stratosphere/stratosphere/issues/7
> Created by: [StephanEwen|https://github.com/StephanEwen]
> Labels: core, enhancement, optimizer, 
> Milestone: Release 0.4
> Assignee: [fhueske|https://github.com/fhueske]
> Created at: Fri Apr 26 13:48:24 CEST 2013
> State: open



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-7) [GitHub] Enable Range Partitioner

2015-09-29 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen updated FLINK-7:
-
Labels:   (was: github-import)

> [GitHub] Enable Range Partitioner
> -
>
> Key: FLINK-7
> URL: https://issues.apache.org/jira/browse/FLINK-7
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Runtime
>Reporter: GitHub Import
> Fix For: pre-apache
>
>
> The range partitioner is currently disabled. We need to implement the 
> following aspects:
> 1) Distribution information, if available, must be propagated back together 
> with the ordering property.
> 2) A generic bucket lookup structure (currently specific to PactRecord).
> Tests to re-enable after fixing this issue:
>  - TeraSortITCase
>  - GlobalSortingITCase
>  - GlobalSortingMixedOrderITCase
>  Imported from GitHub 
> Url: https://github.com/stratosphere/stratosphere/issues/7
> Created by: [StephanEwen|https://github.com/StephanEwen]
> Labels: core, enhancement, optimizer, 
> Milestone: Release 0.4
> Assignee: [fhueske|https://github.com/fhueske]
> Created at: Fri Apr 26 13:48:24 CEST 2013
> State: open



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2723) CopyableValue method to copy into new instance

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14935058#comment-14935058
 ] 

ASF GitHub Bot commented on FLINK-2723:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1169


> CopyableValue method to copy into new instance
> --
>
> Key: FLINK-2723
> URL: https://issues.apache.org/jira/browse/FLINK-2723
> Project: Flink
>  Issue Type: New Feature
>  Components: Core
>Affects Versions: master
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
>
> Provide a method for generic user-defined functions to clone a 
> {{CopyableValue}}. A common use case is a {{GroupReduceFunction}} that needs 
> to store multiple objects. With object reuse we need to make a deep copy and 
> with type erasure we cannot call new.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2761][scala-shell]Prevent creation of n...

2015-09-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1180


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2756) start/stop scripts fail in directories with spaces

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14935056#comment-14935056
 ] 

ASF GitHub Bot commented on FLINK-2756:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1182


> start/stop scripts fail in directories with spaces
> --
>
> Key: FLINK-2756
> URL: https://issues.apache.org/jira/browse/FLINK-2756
> Project: Flink
>  Issue Type: Bug
>  Components: Start-Stop Scripts
>Affects Versions: 0.10
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>Priority: Critical
> Fix For: 0.10
>
>
> The scripts to start and stop Flink (start/stop-local.sh, 
> start/stop-cluster.sh, start/stop*streaming.sh) fail if the {{bin}} folder 
> path contains spaces.
> This is working properly in Flink 0.9.1
> I assume that the problem is not with {{bin/config.sh}}, because 
> {{bin/start-webclient.sh}} is working fine. 
> Maybe something was broken by recent changes in {{bin/jobmanager.sh}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2761) Prevent instantiation of new ExecutionEnvironments in the Scala Shell

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14935060#comment-14935060
 ] 

ASF GitHub Bot commented on FLINK-2761:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1180


> Prevent instantiation of new ExecutionEnvironments in the Scala Shell
> -
>
> Key: FLINK-2761
> URL: https://issues.apache.org/jira/browse/FLINK-2761
> Project: Flink
>  Issue Type: Improvement
>  Components: Scala Shell
>Affects Versions: 0.10
>Reporter: Stephan Ewen
>Assignee: Sachin Goel
>
> When someone mistakenly creates a new ExecutionEnvironment in the Scala 
> Shell, the programs don't work. The Scala Shell should prevent new 
> ExecutionEnvironment instantiations.
> That can be done by setting a context environment factory that throws an 
> error when attempting to create a new environment.
> See here for a user with that problem:
> http://stackoverflow.com/questions/32763052/flink-datasources-outputs-caused-an-error-could-not-read-the-user-code-wrappe/32765236#32765236



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2756] Fix start/stop scripts for paths ...

2015-09-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1182


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: [FLINK-2723] CopyableValue method to copy into...

2015-09-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1169


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2653) Enable object reuse in MergeIterator

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14935059#comment-14935059
 ] 

ASF GitHub Bot commented on FLINK-2653:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1115


> Enable object reuse in MergeIterator
> 
>
> Key: FLINK-2653
> URL: https://issues.apache.org/jira/browse/FLINK-2653
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Runtime
>Affects Versions: master
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>
> MergeIterator currently discards given reusable objects and simply returns a 
> new object from the JVM heap. This inefficiency has a noticeable impact on 
> garbage collection and runtime overhead (~5% overall performance by my 
> measure).
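The gist of the requested change, shown as a simplified sketch against the public {{MutableObjectIterator}} interface; the class below is illustrative and is not the actual MergeIterator internals.

{code:java}
// Hedged, simplified sketch of the object-reuse pattern: next(reuse) forwards
// the caller-provided instance instead of allocating a fresh record.
import java.io.IOException;

import org.apache.flink.util.MutableObjectIterator;

public class ReusingForwardingIterator<E> implements MutableObjectIterator<E> {

    private final MutableObjectIterator<E> source;

    public ReusingForwardingIterator(MutableObjectIterator<E> source) {
        this.source = source;
    }

    @Override
    public E next(E reuse) throws IOException {
        // Hand the reusable instance down instead of discarding it.
        return source.next(reuse);
    }

    @Override
    public E next() throws IOException {
        return source.next();
    }
}
{code}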



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2754] FixedLengthRecordSorter can not w...

2015-09-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1178


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2754) FixedLengthRecordSorter can not write to output cross MemorySegments.

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14935057#comment-14935057
 ] 

ASF GitHub Bot commented on FLINK-2754:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1178


> FixedLengthRecordSorter can not write to output cross MemorySegments.
> -
>
> Key: FLINK-2754
> URL: https://issues.apache.org/jira/browse/FLINK-2754
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Runtime
>Reporter: Chengxiang Li
>Assignee: Chengxiang Li
>
> FixedLengthRecordSorter cannot write output across MemorySegments; it has 
> worked so far only because it was previously asked to write a single record at 
> a time. We should fix this and add more unit tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2769] [dashboard] Remove hard coded job...

2015-09-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1185


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2769) Web dashboard port not configurable on client side

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14935055#comment-14935055
 ] 

ASF GitHub Bot commented on FLINK-2769:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1185


> Web dashboard port not configurable on client side
> --
>
> Key: FLINK-2769
> URL: https://issues.apache.org/jira/browse/FLINK-2769
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: master
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
>Priority: Critical
>
> The new web dashboard port configuration only affects the server side, but 
> not the client side.
> To reproduce set in {{flink-conf.yaml}}:
> {code}
> jobmanager.web.port: 9091
> jobmanager.new-web-frontend: true
> {code}
> Run
> {code}
> $ bin/start-cluster.sh
> {code}
> You can browse to {{http://localhost:9091}}, but you won't see any data from 
> the job manager. Requests to http://localhost:9091/overview etc. work as 
> expected, but the client side JavaScript is running requests against 
> {{http://localhost:8081}}.
> The problem is that 
> {{flink-runtime-web/web-dashboard/app/scripts/index.coffee}} hard codes:
> {code}
> .value 'flinkConfig', {
>   jobServer: 'http://localhost:8081'
> }
> {code}
> which is picked up by all generated scripts.
> This needs to be configurable for standalone mode and especially for 
> recovery mode. This was a major pain point for me today, because I was 
> working on the dashboard and thought that my changes were wrong.
> In general, we need to add more tests to the new dashboard, which should soon 
> replace the old one imo.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [Flink-2768][Documentation] Fix wrong java ver...

2015-09-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1188


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: [FLINK-2775] [CLEANUP] Cleanup code as part of...

2015-09-29 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1189#issuecomment-144011547
  
Looks good. @zentol's comments are nice-to-address issues.

+1 to merge once the comments are addressed


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-2780) Remove Old Windowing Logic and API

2015-09-29 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-2780:
---

 Summary: Remove Old Windowing Logic and API
 Key: FLINK-2780
 URL: https://issues.apache.org/jira/browse/FLINK-2780
 Project: Flink
  Issue Type: Sub-task
  Components: Streaming
Reporter: Aljoscha Krettek


This might also require porting some tests but I think the new operators and 
API have good tests already.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-2768) Wrong Java version requirements in "Quickstart: Scala API" page

2015-09-29 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-2768.
---

> Wrong Java version requirements in "Quickstart: Scala API" page
> ---
>
> Key: FLINK-2768
> URL: https://issues.apache.org/jira/browse/FLINK-2768
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 0.10
>Reporter: Chiwan Park
>Assignee: rerngvit yanggratoke
>  Labels: starter
> Fix For: 0.10
>
>
> Since Flink 0.10, we have dropped Java 6 support, but the "[Quickstart: Scala 
> API|https://ci.apache.org/projects/flink/flink-docs-master/quickstart/scala_api_quickstart.html]"
>  page still says that Java 6 is one of the minimum requirements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2764] [WebClient] [0.10.0-milestone-1] ...

2015-09-29 Thread mjsax
Github user mjsax commented on the pull request:

https://github.com/apache/flink/pull/1181#issuecomment-143995828
  
Both. I labeled it as milestone-1 because I opened this PR before it was 
decided that the next RC would be forked from master. So I would just merge it 
into master right now (and it will be included in milestone-1 automatically).


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2764) WebClient cannot display multiple Jobs

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934885#comment-14934885
 ] 

ASF GitHub Bot commented on FLINK-2764:
---

Github user mjsax commented on the pull request:

https://github.com/apache/flink/pull/1181#issuecomment-143995828
  
Both. I labeled it as milestone-1 because I opened this PR before it was 
decided that the next RC would be forked from master. So I would just merge it 
into master right now (and it will be included in milestone-1 automatically).


> WebClient cannot display multiple Jobs
> --
>
> Key: FLINK-2764
> URL: https://issues.apache.org/jira/browse/FLINK-2764
> Project: Flink
>  Issue Type: Bug
>  Components: Web Client
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Blocker
>
> If multiple jars are uploaded to the WebClient, only a single job is listed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2775) [CLEANUP] Cleanup code as part of theme to be more consistent on Utils classes

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934945#comment-14934945
 ] 

ASF GitHub Bot commented on FLINK-2775:
---

Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1189#issuecomment-144011547
  
Looks good. @zentol's comments are nice-to-address issues.

+1 to merge once the comments are addressed


> [CLEANUP] Cleanup code as part of theme to be more consistent on Utils classes
> --
>
> Key: FLINK-2775
> URL: https://issues.apache.org/jira/browse/FLINK-2775
> Project: Flink
>  Issue Type: Improvement
>Reporter: Henry Saputra
>Assignee: Henry Saputra
>Priority: Minor
>
> As part of the theme to help make the code more consistent, add cleanup to 
> Utils classes:
> -) Add final class modifier to the XXXUtils and XXXUtil classes to make sure 
> they cannot be extended.
> -) Add missing Javadoc class headers to some public classes.
> -) Add private constructor to Utils classes to avoid instantiation.
> -) Remove unused 
> test/java/org/apache/flink/test/recordJobs/util/ConfigUtils.java class
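For reference, the utility-class shape this cleanup aims for, as a small hedged sketch (the class and method names are illustrative only):

{code:java}
/**
 * Javadoc class header, as requested by the cleanup.
 *
 * <p>Final so the class cannot be extended; private constructor so it cannot
 * be instantiated. Illustrative sketch, not an actual Flink class.
 */
public final class ExampleUtils {

    private ExampleUtils() {
        // prevent instantiation
    }

    public static String trimToEmpty(String s) {
        return s == null ? "" : s.trim();
    }
}
{code}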



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [Flink-2768][Documentation] Fix wrong java ver...

2015-09-29 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1188#issuecomment-144011605
  
Will merge this...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2769) Web dashboard port not configurable on client side

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934948#comment-14934948
 ] 

ASF GitHub Bot commented on FLINK-2769:
---

Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1185#issuecomment-144012726
  
Merging this...


> Web dashboard port not configurable on client side
> --
>
> Key: FLINK-2769
> URL: https://issues.apache.org/jira/browse/FLINK-2769
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: master
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
>Priority: Critical
>
> The new web dashboard port configuration only affects the server side, but 
> not the client side.
> To reproduce set in {{flink-conf.yaml}}:
> {code}
> jobmanager.web.port: 9091
> jobmanager.new-web-frontend: true
> {code}
> Run
> {code}
> $ bin/start-cluster.sh
> {code}
> You can browse to {{http://localhost:9091}}, but you won't see any data from 
> the job manager. Requests to http://localhost:9091/overview etc. work as 
> expected, but the client side JavaScript is running requests against 
> {{http://localhost:8081}}.
> The problem is that 
> {{flink-runtime-web/web-dashboard/app/scripts/index.coffee}} hard codes:
> {code}
> .value 'flinkConfig', {
>   jobServer: 'http://localhost:8081'
> }
> {code}
> which is picked up by all generated scripts.
> This needs to be configurable for standalone mode and especially for 
> recovery mode. This was a major pain point for me today, because I was 
> working on the dashboard and thought that my changes were wrong.
> In general, we need to add more tests to the new dashboard, which should soon 
> replace the old one imo.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2756] Fix start/stop scripts for paths ...

2015-09-29 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1182#issuecomment-144014613
  
+1

Merging this...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: [FLINK-2727] [streaming] Add a base class for ...

2015-09-29 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1163#issuecomment-144015953
  
Any objection against merging this?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: [FLINK-2653] Enable object reuse in MergeItera...

2015-09-29 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1115#issuecomment-144015985
  
Merging this...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (FLINK-2756) start/stop scripts fail in directories with spaces

2015-09-29 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-2756.
-
Resolution: Fixed

Fixed via 40cbf7e4b038c945916fa8843b60bd4a59a4ae31

> start/stop scripts fail in directories with spaces
> --
>
> Key: FLINK-2756
> URL: https://issues.apache.org/jira/browse/FLINK-2756
> Project: Flink
>  Issue Type: Bug
>  Components: Start-Stop Scripts
>Affects Versions: 0.10
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>Priority: Critical
> Fix For: 0.10
>
>
> The scripts to start and stop Flink (start/stop-local.sh, 
> start/stop-cluster.sh, start/stop*streaming.sh) fail if the {{bin}} folder 
> path contains spaces.
> This is working properly in Flink 0.9.1
> I assume that the problem is not with {{bin/config.sh}}, because 
> {{bin/start-webclient.sh}} is working fine. 
> Maybe something was broken by recent changes in {{bin/jobmanager.sh}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-2754) FixedLengthRecordSorter can not write to output cross MemorySegments.

2015-09-29 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-2754.
-
   Resolution: Fixed
Fix Version/s: 0.10

Fixed via 68912126d73b92a07d15ec3f21f9ac922744fb45

Thank you for the contribution!

> FixedLengthRecordSorter can not write to output cross MemorySegments.
> -
>
> Key: FLINK-2754
> URL: https://issues.apache.org/jira/browse/FLINK-2754
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Runtime
>Reporter: Chengxiang Li
>Assignee: Chengxiang Li
> Fix For: 0.10
>
>
> FixedLengthRecordSorter cannot write output across MemorySegments; it has 
> worked so far only because it was previously asked to write a single record at 
> a time. We should fix this and add more unit tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-2756) start/stop scripts fail in directories with spaces

2015-09-29 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-2756.
---

> start/stop scripts fail in directories with spaces
> --
>
> Key: FLINK-2756
> URL: https://issues.apache.org/jira/browse/FLINK-2756
> Project: Flink
>  Issue Type: Bug
>  Components: Start-Stop Scripts
>Affects Versions: 0.10
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>Priority: Critical
> Fix For: 0.10
>
>
> The scripts to start and stop Flink (start/stop-local.sh, 
> start/stop-cluster.sh, start/stop*streaming.sh) fail if the {{bin}} folder 
> path contains spaces.
> This is working properly in Flink 0.9.1
> I assume that the problem is not with {{bin/config.sh}}, because 
> {{bin/start-webclient.sh}} is working fine. 
> Maybe something was broken by recent changes in {{bin/jobmanager.sh}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2763) Bug in Hybrid Hash Join: Request to spill a partition with less than two buffers.

2015-09-29 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14935070#comment-14935070
 ] 

Stephan Ewen commented on FLINK-2763:
-

I have a fix coming up...

> Bug in Hybrid Hash Join: Request to spill a partition with less than two 
> buffers.
> -
>
> Key: FLINK-2763
> URL: https://issues.apache.org/jira/browse/FLINK-2763
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Runtime
>Affects Versions: master
>Reporter: Greg Hogan
>Assignee: Stephan Ewen
>
> The following exception is thrown when running the example triangle listing 
> with an unmodified master build (4cadc3d6).
> {noformat}
> ./bin/flink run 
> ~/flink-examples/flink-java-examples/target/flink-java-examples-0.10-SNAPSHOT-EnumTrianglesOpt.jar
>  ~/rmat/undirected/s19_e8.ssv output
> {noformat}
> The only changes to {{flink-conf.yaml}} are {{taskmanager.numberOfTaskSlots: 
> 8}} and {{parallelism.default: 8}}.
> I have confirmed with input files 
> [s19_e8.ssv|https://drive.google.com/file/d/0B6TrSsnHj2HxR2lnMHR4amdyTnM/view?usp=sharing]
>  (40 MB) and 
> [s20_e8.ssv|https://drive.google.com/file/d/0B6TrSsnHj2HxNi1HbmptU29MTm8/view?usp=sharing]
>  (83 MB). On a second machine only the larger file caused the exception.
> {noformat}
> org.apache.flink.client.program.ProgramInvocationException: The program 
> execution failed: Job execution failed.
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:407)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:386)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:353)
>   at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:64)
>   at 
> org.apache.flink.examples.java.graph.EnumTrianglesOpt.main(EnumTrianglesOpt.java:125)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:434)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:350)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:290)
>   at 
> org.apache.flink.client.CliFrontend.executeProgramBlocking(CliFrontend.java:675)
>   at org.apache.flink.client.CliFrontend.run(CliFrontend.java:324)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:977)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1027)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:425)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:107)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:221)
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> 

[jira] [Closed] (FLINK-2769) Web dashboard port not configurable on client side

2015-09-29 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-2769.
---

> Web dashboard port not configurable on client side
> --
>
> Key: FLINK-2769
> URL: https://issues.apache.org/jira/browse/FLINK-2769
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: master
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
>Priority: Critical
> Fix For: 0.10
>
>
> The new web dashboard port configuration only affects the server side, but 
> not the client side.
> To reproduce set in {{flink-conf.yaml}}:
> {code}
> jobmanager.web.port: 9091
> jobmanager.new-web-frontend: true
> {code}
> Run
> {code}
> $ bin/start-cluster.sh
> {code}
> You can browse to {{http://localhost:9091}}, but you won't see any data from 
> the job manager. Requests to http://localhost:9091/overview etc. work as 
> expected, but the client side JavaScript is running requests against 
> {{http://localhost:8081}}.
> The problem is that 
> {{flink-runtime-web/web-dashboard/app/scripts/index.coffee}} hard codes:
> {code}
> .value 'flinkConfig', {
>   jobServer: 'http://localhost:8081'
> }
> {code}
> which is picked up by all generated scripts.
> This needs to be configurable for standalone mode and especially for 
> recovery mode. This was a major pain point for me today, because I was 
> working on the dashboard and thought that my changes were wrong.
> In general, we need to add more tests to the new dashboard, which should soon 
> replace the old one imo.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-2769) Web dashboard port not configurable on client side

2015-09-29 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-2769.
-
   Resolution: Fixed
Fix Version/s: 0.10

Fixed via 3b8b4f0f8c0600dc851d676ce1bd7f5ab81cb64f

> Web dashboard port not configurable on client side
> --
>
> Key: FLINK-2769
> URL: https://issues.apache.org/jira/browse/FLINK-2769
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: master
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
>Priority: Critical
> Fix For: 0.10
>
>
> The new web dashboard port configuration only affects the server side, but 
> not the client side.
> To reproduce set in {{flink-conf.yaml}}:
> {code}
> jobmanager.web.port: 9091
> jobmanager.new-web-frontend: true
> {code}
> Run
> {code}
> $ bin/start-cluster.sh
> {code}
> You can browse to {{http://localhost:9091}}, but you won't see any data from 
> the job manager. Requests to http://localhost:9091/overview etc. work as 
> expected, but the client side JavaScript is running requests against 
> {{http://localhost:8081}}.
> The problem is that 
> {{flink-runtime-web/web-dashboard/app/scripts/index.coffee}} hard codes:
> {code}
> .value 'flinkConfig', {
>   jobServer: 'http://localhost:8081'
> }
> {code}
> which is picked up by all generated scripts.
> This needs to be configurable for standalone mode and especially for 
> recovery mode. This was a major pain point for me today, because I was 
> working on the dashboard and thought that my changes were wrong.
> In general, we need to add more tests to the new dashboard, which should soon 
> replace the old one imo.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-2723) CopyableValue method to copy into new instance

2015-09-29 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-2723.
-
   Resolution: Fixed
Fix Version/s: 0.10

Fixed via e727355e42bd0ad7d403aee703aaf33a68a839d2

Thank you for the contribution!

> CopyableValue method to copy into new instance
> --
>
> Key: FLINK-2723
> URL: https://issues.apache.org/jira/browse/FLINK-2723
> Project: Flink
>  Issue Type: New Feature
>  Components: Core
>Affects Versions: master
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
> Fix For: 0.10
>
>
> Provide a method for generic user-defined functions to clone a 
> {{CopyableValue}}. A common use case is a {{GroupReduceFunction}} that needs 
> to store multiple objects. With object reuse we need to make a deep copy and 
> with type erasure we cannot call new.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-2763) Bug in Hybrid Hash Join: Request to spill a partition with less than two buffers.

2015-09-29 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen reassigned FLINK-2763:
---

Assignee: Stephan Ewen

> Bug in Hybrid Hash Join: Request to spill a partition with less than two 
> buffers.
> -
>
> Key: FLINK-2763
> URL: https://issues.apache.org/jira/browse/FLINK-2763
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Runtime
>Affects Versions: master
>Reporter: Greg Hogan
>Assignee: Stephan Ewen
>
> The following exception is thrown when running the example triangle listing 
> with an unmodified master build (4cadc3d6).
> {noformat}
> ./bin/flink run 
> ~/flink-examples/flink-java-examples/target/flink-java-examples-0.10-SNAPSHOT-EnumTrianglesOpt.jar
>  ~/rmat/undirected/s19_e8.ssv output
> {noformat}
> The only changes to {{flink-conf.yaml}} are {{taskmanager.numberOfTaskSlots: 
> 8}} and {{parallelism.default: 8}}.
> I have confirmed with input files 
> [s19_e8.ssv|https://drive.google.com/file/d/0B6TrSsnHj2HxR2lnMHR4amdyTnM/view?usp=sharing]
>  (40 MB) and 
> [s20_e8.ssv|https://drive.google.com/file/d/0B6TrSsnHj2HxNi1HbmptU29MTm8/view?usp=sharing]
>  (83 MB). On a second machine only the larger file caused the exception.
> {noformat}
> org.apache.flink.client.program.ProgramInvocationException: The program 
> execution failed: Job execution failed.
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:407)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:386)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:353)
>   at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:64)
>   at 
> org.apache.flink.examples.java.graph.EnumTrianglesOpt.main(EnumTrianglesOpt.java:125)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:434)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:350)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:290)
>   at 
> org.apache.flink.client.CliFrontend.executeProgramBlocking(CliFrontend.java:675)
>   at org.apache.flink.client.CliFrontend.run(CliFrontend.java:324)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:977)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1027)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:425)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:107)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:221)
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> 

[GitHub] flink pull request: [FLINK-2723] CopyableValue method to copy into...

2015-09-29 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1169#issuecomment-144015414
  
Merging this...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: [FLINK-2753] [streaming] [api breaking] Add fi...

2015-09-29 Thread StephanEwen
Github user StephanEwen closed the pull request at:

https://github.com/apache/flink/pull/1175


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2754) FixedLengthRecordSorter can not write to output cross MemorySegments.

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14934959#comment-14934959
 ] 

ASF GitHub Bot commented on FLINK-2754:
---

Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1178#issuecomment-144015272
  
Merging this...


> FixedLengthRecordSorter can not write to output cross MemorySegments.
> -
>
> Key: FLINK-2754
> URL: https://issues.apache.org/jira/browse/FLINK-2754
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Runtime
>Reporter: Chengxiang Li
>Assignee: Chengxiang Li
>
> FixedLengthRecordSorter cannot write output across MemorySegments; it has 
> worked so far only because it was previously asked to write a single record at 
> a time. We should fix this and add more unit tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2780) Remove Old Windowing Logic and API

2015-09-29 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14935073#comment-14935073
 ] 

Stephan Ewen commented on FLINK-2780:
-

I would be in favor of this. The old API had too many issues, and the new API 
subsumes all of the behavior that can be supported safely and is also more stable.

> Remove Old Windowing Logic and API
> --
>
> Key: FLINK-2780
> URL: https://issues.apache.org/jira/browse/FLINK-2780
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming
>Reporter: Aljoscha Krettek
> Fix For: 0.10
>
>
> This might also require porting some tests but I think the new operators and 
> API have good tests already.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-2766) Bad ipv6 urls

2015-09-29 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen reassigned FLINK-2766:
---

Assignee: Stephan Ewen

> Bad ipv6 urls
> -
>
> Key: FLINK-2766
> URL: https://issues.apache.org/jira/browse/FLINK-2766
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 0.9.1, 0.9.2
>Reporter: Arsenii Krasikov
>Assignee: Stephan Ewen
>Priority: Blocker
>
> There is error with ipv6 addresses in 
> flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/JobManager.scala:
> {code:java}
>   /**
>* Builds the akka actor path for the JobManager actor, given the socket 
> address
>* where the JobManager's actor system runs.
>*
>* @param address The address of the JobManager's actor system.
>* @return The akka URL of the JobManager actor.
>*/
>   def getRemoteJobManagerAkkaURL(address: InetSocketAddress): String = {
> val hostPort = address.getAddress().getHostAddress() + ":" + 
> address.getPort()
> s"akka.tcp://flink@$hostPort/user/$JOB_MANAGER_NAME"
>   }
> {code}
> that leads to 
> {code}
> 19:02:10,451 INFO  org.apache.flink.runtime.taskmanager.TaskManager   
>- Trying to register at JobManager 
> akka.tcp://flink@2a02:6b8:0:1a39:0:0:12c:1:6123/user/jobmanager (attempt 31, 
> timeout: 30 seconds)
> 19:02:40,470 INFO  org.apache.flink.runtime.taskmanager.TaskManager   
>- Trying to register at JobManager 
> akka.tcp://flink@2a02:6b8:0:1a39:0:0:12c:1:6123/user/jobmanager (attempt 32, 
> timeout: 30 seconds)
> {code}
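The usual remedy for this class of bug, shown as a hedged Java sketch (the helper name is illustrative, not the actual fix that was committed): an IPv6 literal has to be wrapped in brackets before a port is appended, otherwise the port cannot be distinguished from the address.

{code:java}
// Hedged sketch: bracket IPv6 literals when composing a host:port string.
// Uses only standard JDK classes; not the committed Flink fix.
import java.net.Inet6Address;
import java.net.InetAddress;
import java.net.InetSocketAddress;

public final class HostPortFormatter {

    private HostPortFormatter() {}

    public static String toHostPort(InetSocketAddress address) {
        InetAddress addr = address.getAddress();
        String host = addr.getHostAddress();
        if (addr instanceof Inet6Address) {
            // e.g. [2a02:6b8:0:1a39:0:0:12c:1]:6123 instead of 2a02:...:1:6123
            host = "[" + host + "]";
        }
        return host + ":" + address.getPort();
    }
}
{code}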



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-2781) Consolidate NetUtils

2015-09-29 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-2781:
---

 Summary: Consolidate NetUtils
 Key: FLINK-2781
 URL: https://issues.apache.org/jira/browse/FLINK-2781
 Project: Flink
  Issue Type: Improvement
  Components: Core, Distributed Runtime
Affects Versions: 0.10
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 0.10


We currently have two classes called {{NetUtils}} with different networking 
related helpers. In some cases you need both classes, which results in quite 
some clumsiness.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2781) Consolidate NetUtils

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14935263#comment-14935263
 ] 

ASF GitHub Bot commented on FLINK-2781:
---

GitHub user StephanEwen opened a pull request:

https://github.com/apache/flink/pull/1190

[FLINK-2781] [core] Cleanup NetUtils.

We currently have two classes called `NetUtils` with different networking 
related helpers. In some cases you need both classes, which results in quite 
some clumsiness.

This pull request changes it in the following way:
  - The `org.apache.flink.util.NetUtils` class (in `flink-core`) contains 
all methods usable without runtime dependencies.
  - The runtime NetUtils class (`org.apache.flink.runtime.net.NetUtils`, to 
find connecting addresses) is now called `ConnectionUtils`.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StephanEwen/incubator-flink netutil

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1190.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1190


commit cc0ee16363c38990e04091725b53cf7b0565a0d2
Author: Stephan Ewen 
Date:   2015-09-29T14:36:04Z

[FLINK-2781] [core] Cleanup NetUtils.

  - The NetUtils class (in flink-core) contains all methods usable without 
runtime dependency
  - The runtime NetUtils class (to find connecting addresses) is now 
called ConnectionUtils.




> Consolidate NetUtils
> 
>
> Key: FLINK-2781
> URL: https://issues.apache.org/jira/browse/FLINK-2781
> Project: Flink
>  Issue Type: Improvement
>  Components: Core, Distributed Runtime
>Affects Versions: 0.10
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
> Fix For: 0.10
>
>
> We currently have two classes called {{NetUtils}} with different networking 
> related helpers. In some cases you need both classes, which results in quite 
> some clumsiness.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2781] [core] Cleanup NetUtils.

2015-09-29 Thread StephanEwen
GitHub user StephanEwen opened a pull request:

https://github.com/apache/flink/pull/1190

[FLINK-2781] [core] Cleanup NetUtils.

We currently have two classes called `NetUtils` with different networking 
related helpers. In some cases you need both classes, which results in quite 
some clumsiness.

This pull request changes it in the following way:
  - The `org.apache.flink.util.NetUtils` class (in `flink-core`) contains 
all methods usable without runtime dependencies.
  - The runtime NetUtils class (`org.apache.flink.runtime.net.NetUtils`, to 
find connecting addresses) is now called `ConnectionUtils`.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StephanEwen/incubator-flink netutil

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1190.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1190


commit cc0ee16363c38990e04091725b53cf7b0565a0d2
Author: Stephan Ewen 
Date:   2015-09-29T14:36:04Z

[FLINK-2781] [core] Cleanup NetUtils.

  - The NetUtils class (in flink-core) contains all methods usable without 
runtime dependency
  - The runtime NetUtils class (to find connecting addresses) is now 
called ConnectionUtils.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: [FLINK-2725] Max/Min/Sum Aggregation of mutabl...

2015-09-29 Thread greghogan
GitHub user greghogan opened a pull request:

https://github.com/apache/flink/pull/1191

[FLINK-2725] Max/Min/Sum Aggregation of mutable types



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greghogan/flink 
2725_max_min_sum_aggregation_of_mutable_types

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1191.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1191


commit c828dedb8ea05dbfaec06765e3effd7c95b13516
Author: Greg Hogan 
Date:   2015-09-22T17:01:47Z

[FLINK-2725] Max/Min/Sum Aggregation of mutable types




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Closed] (FLINK-2763) Bug in Hybrid Hash Join: Request to spill a partition with less than two buffers.

2015-09-29 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan closed FLINK-2763.
-

Success!

> Bug in Hybrid Hash Join: Request to spill a partition with less than two 
> buffers.
> -
>
> Key: FLINK-2763
> URL: https://issues.apache.org/jira/browse/FLINK-2763
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Runtime
>Affects Versions: master
>Reporter: Greg Hogan
>Assignee: Stephan Ewen
> Fix For: 0.10
>
>
> The following exception is thrown when running the example triangle listing 
> with an unmodified master build (4cadc3d6).
> {noformat}
> ./bin/flink run 
> ~/flink-examples/flink-java-examples/target/flink-java-examples-0.10-SNAPSHOT-EnumTrianglesOpt.jar
>  ~/rmat/undirected/s19_e8.ssv output
> {noformat}
> The only changes to {{flink-conf.yaml}} are {{taskmanager.numberOfTaskSlots: 
> 8}} and {{parallelism.default: 8}}.
> I have confirmed with input files 
> [s19_e8.ssv|https://drive.google.com/file/d/0B6TrSsnHj2HxR2lnMHR4amdyTnM/view?usp=sharing]
>  (40 MB) and 
> [s20_e8.ssv|https://drive.google.com/file/d/0B6TrSsnHj2HxNi1HbmptU29MTm8/view?usp=sharing]
>  (83 MB). On a second machine only the larger file caused the exception.
> {noformat}
> org.apache.flink.client.program.ProgramInvocationException: The program 
> execution failed: Job execution failed.
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:407)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:386)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:353)
>   at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:64)
>   at 
> org.apache.flink.examples.java.graph.EnumTrianglesOpt.main(EnumTrianglesOpt.java:125)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:434)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:350)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:290)
>   at 
> org.apache.flink.client.CliFrontend.executeProgramBlocking(CliFrontend.java:675)
>   at org.apache.flink.client.CliFrontend.run(CliFrontend.java:324)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:977)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1027)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:425)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:107)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:221)
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> 

[jira] [Commented] (FLINK-2725) Max/Min/Sum Aggregation of mutable types

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14935322#comment-14935322
 ] 

ASF GitHub Bot commented on FLINK-2725:
---

GitHub user greghogan opened a pull request:

https://github.com/apache/flink/pull/1191

[FLINK-2725] Max/Min/Sum Aggregation of mutable types



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greghogan/flink 
2725_max_min_sum_aggregation_of_mutable_types

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1191.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1191


commit c828dedb8ea05dbfaec06765e3effd7c95b13516
Author: Greg Hogan 
Date:   2015-09-22T17:01:47Z

[FLINK-2725] Max/Min/Sum Aggregation of mutable types




> Max/Min/Sum Aggregation of mutable types
> 
>
> Key: FLINK-2725
> URL: https://issues.apache.org/jira/browse/FLINK-2725
> Project: Flink
>  Issue Type: Improvement
>  Components: Java API
>Affects Versions: master
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
>
> Support mutable value types in min, max, and sum aggregations.
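
For readers unfamiliar with the DataSet aggregations this issue extends, a minimal, 
hypothetical sketch (not part of the patch) of how grouped sum() is used today with 
boxed primitive fields; the intent of FLINK-2725, per the description above, is that 
the same calls also accept mutable Value types such as IntValue or LongValue.

{noformat}
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

public class GroupedSumExample {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Tuples with boxed primitive fields work with sum()/min()/max() today.
        DataSet<Tuple2<Integer, Integer>> input = env.fromElements(
                new Tuple2<>(1, 10),
                new Tuple2<>(1, 32),
                new Tuple2<>(2, 7));

        // Group on field 0 and sum field 1.
        // FLINK-2725 aims to make the same call work when the tuple fields are
        // mutable Value types (e.g. IntValue, LongValue) instead of boxed types.
        input.groupBy(0).sum(1).print();
    }
}
{noformat}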



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-1984] Integrate Flink with Apache Mesos

2015-09-29 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/948#issuecomment-144074252
  
Thank you for the update @ankurcha. I'll try to review your PR in the 
next few days.
Sorry for the delay.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-1984) Integrate Flink with Apache Mesos

2015-09-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14935229#comment-14935229
 ] 

ASF GitHub Bot commented on FLINK-1984:
---

Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/948#issuecomment-144074252
  
Thank you for the update @ankurcha. I'll try to review your PR in the 
next few days.
Sorry for the delay.


> Integrate Flink with Apache Mesos
> -
>
> Key: FLINK-1984
> URL: https://issues.apache.org/jira/browse/FLINK-1984
> Project: Flink
>  Issue Type: New Feature
>  Components: New Components
>Reporter: Robert Metzger
>Priority: Minor
> Attachments: 251.patch
>
>
> There are some users asking for an integration of Flink into Mesos.
> There also is a pending pull request for adding Mesos support for Flink: 
> https://github.com/apache/flink/pull/251
> But the PR is insufficiently tested. I'll add the code of the pull request to 
> this JIRA in case somebody wants to pick it up in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2740) Create data consumer for Apache NiFi

2015-09-29 Thread Bryan Bende (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14935277#comment-14935277
 ] 

Bryan Bende commented on FLINK-2740:


Hi all, I'm going to start ramping up on Flink streaming and start working on a 
NiFiSource, which will be part of a new flink-connectors-nifi module. 

I'll ask questions here as I start digging in, but so far the existing 
connectors look like great examples.
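
Until the new module exists, a minimal, hypothetical sketch of the SourceFunction 
pattern the existing streaming connectors follow; the fetchFromNiFi() helper is a 
placeholder rather than a real NiFi API, and the eventual connector design is of 
course up to the implementer.

{noformat}
import org.apache.flink.streaming.api.functions.source.SourceFunction;

// Hypothetical skeleton only: it shows the SourceFunction contract that a
// NiFiSource would implement; the NiFi-specific parts are placeholders.
public class NiFiSourceSketch implements SourceFunction<byte[]> {

    private volatile boolean running = true;

    @Override
    public void run(SourceContext<byte[]> ctx) throws Exception {
        while (running) {
            // Placeholder: a real source would pull flow-file content from
            // NiFi (e.g. via Site-to-Site) instead of returning null here.
            byte[] payload = fetchFromNiFi();
            if (payload != null) {
                ctx.collect(payload);
            } else {
                Thread.sleep(100); // back off briefly when nothing is available
            }
        }
    }

    @Override
    public void cancel() {
        running = false;
    }

    private byte[] fetchFromNiFi() {
        return null; // placeholder, not a real NiFi client call
    }
}
{noformat}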

> Create data consumer for Apache NiFi
> 
>
> Key: FLINK-2740
> URL: https://issues.apache.org/jira/browse/FLINK-2740
> Project: Flink
>  Issue Type: New Feature
>  Components: Streaming Connectors
>Reporter: Kostas Tzoumas
>Assignee: Joseph Witt
>
> Create a connector to Apache NiFi to create Flink DataStreams from NiFi flows



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2663] [gelly] Changes library methods t...

2015-09-29 Thread vasia
Github user vasia commented on the pull request:

https://github.com/apache/flink/pull/1152#issuecomment-144117546
  
Travis has a failure on `YARNSessionFIFOITCase` (FLINK-2392).
If there are no objections, I'll merge this one.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

