[GitHub] flink pull request: [FLINK-2714] [Gelly] Copy triangle counting lo...

2015-10-15 Thread vasia
Github user vasia commented on a diff in the pull request:

https://github.com/apache/flink/pull/1250#discussion_r42106061
  
--- Diff: 
flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/TriangleEnumerator.java
 ---
@@ -0,0 +1,347 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.graph.library;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.functions.ReduceFunction;
+import org.apache.flink.api.common.functions.FlatMapFunction;
+import org.apache.flink.api.common.functions.JoinFunction;
+import org.apache.flink.api.common.functions.GroupReduceFunction;
+import org.apache.flink.api.common.operators.Order;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.ExecutionEnvironment;
+import org.apache.flink.api.java.functions.FunctionAnnotation;
+import org.apache.flink.api.java.tuple.Tuple3;
+import org.apache.flink.api.java.tuple.Tuple4;
+import org.apache.flink.graph.Edge;
+import org.apache.flink.graph.Graph;
+import org.apache.flink.graph.GraphAlgorithm;
+import org.apache.flink.graph.example.utils.TriangleCountData;
+import org.apache.flink.types.NullValue;
+import org.apache.flink.util.Collector;
+
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
+
+/**
+ * This function returns number of triangles present in the input graph.
--- End diff --

Also, it might be useful to clarify that the edge direction is not 
considered.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-2857) Gelly API improvements

2015-10-15 Thread Vasia Kalavri (JIRA)
Vasia Kalavri created FLINK-2857:


 Summary: Gelly API improvements
 Key: FLINK-2857
 URL: https://issues.apache.org/jira/browse/FLINK-2857
 Project: Flink
  Issue Type: Improvement
  Components: Gelly
Reporter: Vasia Kalavri


During the Flink Forward Gelly School training, I got some really valuable 
feedback from participants on what they found hard to grasp or non-intuitive in 
the API. 

Based on that, I propose we make the following improvements:
- rename the mapper in creation methods to {{VertexInitializer}}, so that its 
purpose is easier to understand.
- add a {{fromTuple2DataSet}} method to easily create graphs from {{Tuple2}} 
datasets, i.e. edges with no values (see the sketch after this list).
- in {{joinWith*}} methods, it is hard to understand what the parameters of the 
mapper are and what the output will be. I suggest we flatten them, try to give 
them intuitive names, and improve the javadocs.
- in neighborhood methods, it is hard to understand what the arguments of 
{{EdgeFunction.iterateEdges}} and {{ReduceEdgesFunction.reduceEdges}} are. 
Javadocs and parameter names could be improved here too.
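
A minimal usage sketch of the proposed {{fromTuple2DataSet}} creation method; the 
method name follows the proposal above, while its exact signature is an assumption:

{code}
// Hypothetical usage of the proposed fromTuple2DataSet; the method does not
// exist yet, so its signature is an illustrative assumption.
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

DataSet<Tuple2<Long, Long>> edgeTuples = env.fromElements(
		new Tuple2<Long, Long>(1L, 2L),
		new Tuple2<Long, Long>(2L, 3L));

// the edges carry no values, so vertex and edge values default to NullValue
Graph<Long, NullValue, NullValue> graph = Graph.fromTuple2DataSet(edgeTuples, env);
{code}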



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2856) Introduce flink.version property into quickstart archetype

2015-10-15 Thread Chiwan Park (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958845#comment-14958845
 ] 

Chiwan Park commented on FLINK-2856:


+1

> Introduce flink.version property into quickstart archetype
> --
>
> Key: FLINK-2856
> URL: https://issues.apache.org/jira/browse/FLINK-2856
> Project: Flink
>  Issue Type: Improvement
>  Components: Quickstarts
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>
> With the quickstarts we're currently creating, users have to manually change 
> all the dependencies if they're changing the flink version.
> I propose to introduce a property for setting the flink version.
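
For illustration, a hedged sketch of what the generated quickstart {{pom.xml}} 
could look like with such a property; the property name and version value are 
illustrative assumptions:

{code}
<!-- sketch only: declare the Flink version once... -->
<properties>
  <flink.version>0.10-SNAPSHOT</flink.version>
</properties>

<!-- ...and reference it from every Flink dependency -->
<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-java</artifactId>
    <version>${flink.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients</artifactId>
    <version>${flink.version}</version>
  </dependency>
</dependencies>
{code}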



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2732) Add access to the TaskManagers' log file and out file in the web dashboard.

2015-10-15 Thread Robert Metzger (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958601#comment-14958601
 ] 

Robert Metzger commented on FLINK-2732:
---

"The idea is not to have any communication between task managers"? To 
implement this feature, no communication between TMs is required.

I don't think that the three network hops [~sachingoel0101] is mentioning are a 
big issue. There are a lot of messages sent back and forth between JM and TM 
all the time.
Some more arguments against Sachin's proposal:
- Running a web server on each TaskManager just for serving some log files also 
requires resources.
- We cannot assume that users can access the TaskManagers from their browsers. 
Remember that in many companies, there are firewalls in place which allow 
access only to certain nodes. The web interface of Flink on YARN is also accessed 
through a proxy running on the resource manager. The RM will then load the 
resources from the AM/JM container. So we cannot assume that we can establish 
a connection from client browser <---> TM container on YARN.
That's also the reason why we send the data of {{DataSet.collect()}} 
through the JM.

> Add access to the TaskManagers' log file and out file in the web dashboard.
> ---
>
> Key: FLINK-2732
> URL: https://issues.apache.org/jira/browse/FLINK-2732
> Project: Flink
>  Issue Type: Sub-task
>  Components: Webfrontend
>Affects Versions: 0.10
>Reporter: Stephan Ewen
>Assignee: Martin Liesenberg
> Fix For: 0.10
>
>
> Add access to the TaskManagers' log file and out file in the web dashboard.
> This needs addition on the server side, as the log files need to be 
> transferred   to the JobManager via the blob server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (FLINK-2856) Introduce flink.version property into quickstart archetype

2015-10-15 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reassigned FLINK-2856:
-

Assignee: Robert Metzger

> Introduce flink.version property into quickstart archetype
> --
>
> Key: FLINK-2856
> URL: https://issues.apache.org/jira/browse/FLINK-2856
> Project: Flink
>  Issue Type: Improvement
>  Components: Quickstarts
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>
> With the quickstarts we're currently creating, users have to manually change 
> all the dependencies if they're changing the flink version.
> I propose to introduce a property for setting the flink version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-2843) Add documentation for DataSet outer joins

2015-10-15 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-2843.

Resolution: Fixed

Fixed via d9e32da2631d519d434238b6153332ed03047461

> Add documentation for DataSet outer joins
> -
>
> Key: FLINK-2843
> URL: https://issues.apache.org/jira/browse/FLINK-2843
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 0.10
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
> Fix For: 0.10
>
>
> Outer joins are not included in the documentation yet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (FLINK-2479) Refactoring of org.apache.flink.runtime.operators.testutils.TestData class

2015-10-15 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-2479.

   Resolution: Fixed
Fix Version/s: (was: pre-apache)
   0.10

Fixed via fbc18b96a86bc54da189f713ed01370524558249

> Refactoring of org.apache.flink.runtime.operators.testutils.TestData class
> --
>
> Key: FLINK-2479
> URL: https://issues.apache.org/jira/browse/FLINK-2479
> Project: Flink
>  Issue Type: Task
>  Components: Local Runtime
>Reporter: Ricky Pogalz
>Assignee: Chesnay Schepler
>Priority: Minor
> Fix For: 0.10
>
>
> Currently, there are still tests which use {{Record}} from the old record 
> API. One of the test util classes is {{TestData}}, including {{Generator}} 
> and some other classes still using {{Record}}. An alternative implementation 
> of the {{Generator}} without {{Record}} already exists in the {{TestData}} 
> class, namely {{TupleGenerator}}.
> Please replace the utility classes in {{TestData}} that still use {{Record}} 
> and adapt all of its usages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2714] [Gelly] Copy triangle counting lo...

2015-10-15 Thread vasia
Github user vasia commented on a diff in the pull request:

https://github.com/apache/flink/pull/1250#discussion_r42105854
  
--- Diff: 
flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/example/utils/TriangleCountData.java
 ---
@@ -20,10 +20,12 @@
 
 import org.apache.flink.api.java.DataSet;
 import org.apache.flink.api.java.ExecutionEnvironment;
+import org.apache.flink.api.java.tuple.Tuple3;
 import org.apache.flink.graph.Edge;
 import org.apache.flink.types.NullValue;
 
 import java.util.ArrayList;
+import java.util.Arrays;
--- End diff --

unused import


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2714) Port the Flink DataSet Triangle Enumeration example to the Gelly library

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958714#comment-14958714
 ] 

ASF GitHub Bot commented on FLINK-2714:
---

Github user vasia commented on the pull request:

https://github.com/apache/flink/pull/1250#issuecomment-148347416
  
Hi @ssaumitra,
thanks a lot for updating the PR! I have left a few minor comments that 
should be easy to fix.


> Port the Flink DataSet Triangle Enumeration example to the Gelly library
> 
>
> Key: FLINK-2714
> URL: https://issues.apache.org/jira/browse/FLINK-2714
> Project: Flink
>  Issue Type: Task
>  Components: Gelly
>Affects Versions: 0.10
> Environment: 
>Reporter: Andra Lungu
>Assignee: Saumitra Shahapure
>Priority: Trivial
>  Labels: newbie, starter
>
> Currently, the Gelly library contains one method for counting the number of 
> triangles in a graph: a gather-apply-scatter version. 
> This issue proposes the addition of a library method based on this Flink 
> example:
> https://github.com/apache/flink/blob/master/flink-examples/flink-java-examples/src/main/java/org/apache/flink/examples/java/graph/EnumTrianglesOpt.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2720) Add Storm-CountMetric in flink-stormcompatibility

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958817#comment-14958817
 ] 

ASF GitHub Bot commented on FLINK-2720:
---

Github user mjsax commented on the pull request:

https://github.com/apache/flink/pull/1157#issuecomment-148370496
  
Any progress on this?


> Add Storm-CountMetric in flink-stormcompatibility
> -
>
> Key: FLINK-2720
> URL: https://issues.apache.org/jira/browse/FLINK-2720
> Project: Flink
>  Issue Type: New Feature
>  Components: Storm Compatibility
>Reporter: Huang Wei
>Assignee: Huang Wei
> Fix For: 0.10
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Add the CountMetric as the first step of Storm metrics support:
> 1. Write a wrapper FlinkCountMetric for CountMetric (sketched below).
> 2. Pass the RuntimeContext into FlinkTopologyContext and use its 
> `addAccumulator` method to register the metric.
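
A rough sketch of the proposed wrapper; the class shape and the registration step 
are illustrative assumptions, since the issue only outlines the approach:

{code}
// Illustrative sketch only; the real FlinkCountMetric is defined in the PR.
import backtype.storm.metric.api.IMetric;

import org.apache.flink.api.common.accumulators.LongCounter;
import org.apache.flink.api.common.functions.RuntimeContext;

public class FlinkCountMetric implements IMetric {
	private final LongCounter counter = new LongCounter();

	// register the backing accumulator so the count is reported through Flink
	public FlinkCountMetric(RuntimeContext runtimeContext, String name) {
		runtimeContext.addAccumulator(name, counter);
	}

	public void incr() {
		counter.add(1L);
	}

	public void incrBy(long incrementBy) {
		counter.add(incrementBy);
	}

	@Override
	public Object getValueAndReset() {
		long value = counter.getLocalValue();
		counter.resetLocal();
		return value;
	}
}
{code}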



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2843] Add documentation for outer joins...

2015-10-15 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1248


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2735) KafkaProducerITCase.testCustomPartitioning sporadically fails

2015-10-15 Thread Till Rohrmann (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958775#comment-14958775
 ] 

Till Rohrmann commented on FLINK-2735:
--

Here is another instance of the problem: 
https://s3.amazonaws.com/archive.travis-ci.org/jobs/85472091/log.txt

> KafkaProducerITCase.testCustomPartitioning sporadically fails
> -
>
> Key: FLINK-2735
> URL: https://issues.apache.org/jira/browse/FLINK-2735
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 0.10
>Reporter: Robert Metzger
>  Labels: test-stability
>
> In the following test run: 
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/8158/log.txt
> there was the following failure
> {code}
> Caused by: java.lang.Exception: Unable to get last offset for topic 
> customPartitioningTestTopic and partitions [FetchPartition {partition=2, 
> offset=-915623761776}]. 
> Exception for partition 2: kafka.common.UnknownException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
>   at java.lang.Class.newInstance(Class.java:438)
>   at kafka.common.ErrorMapping$.exceptionFor(ErrorMapping.scala:86)
>   at kafka.common.ErrorMapping.exceptionFor(ErrorMapping.scala)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher$SimpleConsumerThread.getLastOffset(LegacyFetcher.java:521)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher$SimpleConsumerThread.run(LegacyFetcher.java:370)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher.run(LegacyFetcher.java:242)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.run(FlinkKafkaConsumer.java:382)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:58)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:58)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:168)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:579)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: Unable to get last offset for topic 
> customPartitioningTestTopic and partitions [FetchPartition {partition=2, 
> offset=-915623761776}]. 
> Exception for partition 2: kafka.common.UnknownException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
>   at java.lang.Class.newInstance(Class.java:438)
>   at kafka.common.ErrorMapping$.exceptionFor(ErrorMapping.scala:86)
>   at kafka.common.ErrorMapping.exceptionFor(ErrorMapping.scala)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher$SimpleConsumerThread.getLastOffset(LegacyFetcher.java:521)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher$SimpleConsumerThread.run(LegacyFetcher.java:370)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher$SimpleConsumerThread.getLastOffset(LegacyFetcher.java:524)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher$SimpleConsumerThread.run(LegacyFetcher.java:370)
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 17.455 sec 
> <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.KafkaProducerITCase
> testCustomPartitioning(org.apache.flink.streaming.connectors.kafka.KafkaProducerITCase)
>   Time elapsed: 7.809 sec  <<< FAILURE!
> java.lang.AssertionError: Test failed: The program execution failed: Job 
> execution failed.
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.tryExecute(KafkaTestBase.java:313)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaProducerITCase.testCustomPartitioning(KafkaProducerITCase.java:155)
> {code}
> From the broker logs it seems to be an issue in the Kafka broker
> {code}
> 14:43:03,328 INFO  kafka.network.Processor
>- Closing socket connection to /127.0.0.1.
> 14:43:03,334 WARN  kafka.server.KafkaApis 
>- [KafkaApi-0] 

[jira] [Assigned] (FLINK-2855) Add a documentation section for Gelly library methods

2015-10-15 Thread Vasia Kalavri (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasia Kalavri reassigned FLINK-2855:


Assignee: Vasia Kalavri

> Add a documentation section for Gelly library methods
> -
>
> Key: FLINK-2855
> URL: https://issues.apache.org/jira/browse/FLINK-2855
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Gelly
>Reporter: Vasia Kalavri
>Assignee: Vasia Kalavri
>  Labels: easyfix, newbie
>
> We should add a separate documentation section for the Gelly library methods. 
> For each method, we should have an overview of the used algorithm, 
> implementation details and usage information.
> An example of what these should look like can be found 
> [here|http://gellyschool.com/library.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2479) Refactoring of org.apache.flink.runtime.operators.testutils.TestData class

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958861#comment-14958861
 ] 

ASF GitHub Bot commented on FLINK-2479:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1160


> Refactoring of org.apache.flink.runtime.operators.testutils.TestData class
> --
>
> Key: FLINK-2479
> URL: https://issues.apache.org/jira/browse/FLINK-2479
> Project: Flink
>  Issue Type: Task
>  Components: Local Runtime
>Reporter: Ricky Pogalz
>Assignee: Chesnay Schepler
>Priority: Minor
> Fix For: pre-apache
>
>
> Currently, there are still tests which use {{Record}} from the old record 
> API. One of the test util classes is {{TestData}}, including {{Generator}} 
> and some other classes still using {{Record}}. An alternative implementation 
> of the {{Generator}} without {{Record}} already exists in the {{TestData}} 
> class, namely {{TupleGenerator}}.
> Please replace the utility classes in {{TestData}} that still use {{Record}} 
> and adapt all of its usages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2843) Add documentation for DataSet outer joins

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958860#comment-14958860
 ] 

ASF GitHub Bot commented on FLINK-2843:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1248


> Add documentation for DataSet outer joins
> -
>
> Key: FLINK-2843
> URL: https://issues.apache.org/jira/browse/FLINK-2843
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 0.10
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
> Fix For: 0.10
>
>
> Outer joins are not included in the documentation yet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2774) Import Java API classes automatically in Flink's Scala shell

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958862#comment-14958862
 ] 

ASF GitHub Bot commented on FLINK-2774:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1247


> Import Java API classes automatically in Flink's Scala shell
> 
>
> Key: FLINK-2774
> URL: https://issues.apache.org/jira/browse/FLINK-2774
> Project: Flink
>  Issue Type: Improvement
>  Components: Scala Shell
>Reporter: Till Rohrmann
>Assignee: Chiwan Park
>Priority: Minor
>
> Flink's Scala API depends partially on Flink's Java API classes. For example, 
> the {{sortPartition}} method requires an {{Order}} enum value. Flink's Scala 
> shell, however, only imports the Scala API classes. Thus, if a user wants to 
> {{sortPartition}} in the Scala shell, he has to manually import the 
> {{org.apache.flink.api.common.operators.Order}} enumeration. In order to 
> improve the user experience, I propose to automatically import all Java API 
> classes in Flink's Scala shell.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2125) String delimiter for SocketTextStream

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958863#comment-14958863
 ] 

ASF GitHub Bot commented on FLINK-2125:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1077


> String delimiter for SocketTextStream
> -
>
> Key: FLINK-2125
> URL: https://issues.apache.org/jira/browse/FLINK-2125
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Affects Versions: 0.9
>Reporter: Márton Balassi
>Priority: Minor
>  Labels: starter
>
> The SocketTextStreamFunction uses a character delimiter, despite other parts 
> of the API using String delimiters.
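
A minimal before/after sketch at the call site; the String-based overload shown 
second is the suggested improvement, not existing API:

{code}
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// current API: single-character delimiter
DataStream<String> lines = env.socketTextStream("localhost", 9999, '\n');

// proposed: String delimiter, consistent with the rest of the API
// (hypothetical overload, the subject of this issue)
DataStream<String> records = env.socketTextStream("localhost", 9999, "\r\n");
{code}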



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2774] [scala shell] Import some Java an...

2015-10-15 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1247


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: [FLINK-2125][streaming] Delimiter change from ...

2015-10-15 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1077


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (FLINK-2774) Import Java API classes automatically in Flink's Scala shell

2015-10-15 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske resolved FLINK-2774.
--
   Resolution: Fixed
Fix Version/s: 0.10

Fixed via c82ebbfce0b11a4b4de3126fb02ccfdad80e0837

> Import Java API classes automatically in Flink's Scala shell
> 
>
> Key: FLINK-2774
> URL: https://issues.apache.org/jira/browse/FLINK-2774
> Project: Flink
>  Issue Type: Improvement
>  Components: Scala Shell
>Reporter: Till Rohrmann
>Assignee: Chiwan Park
>Priority: Minor
> Fix For: 0.10
>
>
> Flink's Scala API depends partially on Flink's Java API classes. For example, 
> the {{sortPartition}} method requires an {{Order}} enum value. Flink's Scala 
> shell, however, only imports the Scala API classes. Thus, if a user wants to 
> {{sortPartition}} in the Scala shell, he has to manually import the 
> {{org.apache.flink.api.common.operators.Order}} enumeration. In order to 
> improve the user experience, I propose to automatically import all Java API 
> classes in Flink's Scala shell.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2714] [Gelly] Copy triangle counting lo...

2015-10-15 Thread vasia
Github user vasia commented on a diff in the pull request:

https://github.com/apache/flink/pull/1250#discussion_r42105949
  
--- Diff: 
flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/TriangleEnumerator.java
 ---
@@ -0,0 +1,347 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.graph.library;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.functions.ReduceFunction;
+import org.apache.flink.api.common.functions.FlatMapFunction;
+import org.apache.flink.api.common.functions.JoinFunction;
+import org.apache.flink.api.common.functions.GroupReduceFunction;
+import org.apache.flink.api.common.operators.Order;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.ExecutionEnvironment;
+import org.apache.flink.api.java.functions.FunctionAnnotation;
+import org.apache.flink.api.java.tuple.Tuple3;
+import org.apache.flink.api.java.tuple.Tuple4;
+import org.apache.flink.graph.Edge;
+import org.apache.flink.graph.Graph;
+import org.apache.flink.graph.GraphAlgorithm;
+import org.apache.flink.graph.example.utils.TriangleCountData;
--- End diff --

same here :)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2714) Port the Flink DataSet Triangle Enumeration example to the Gelly library

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958679#comment-14958679
 ] 

ASF GitHub Bot commented on FLINK-2714:
---

Github user vasia commented on a diff in the pull request:

https://github.com/apache/flink/pull/1250#discussion_r42105854
  
--- Diff: 
flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/example/utils/TriangleCountData.java
 ---
@@ -20,10 +20,12 @@
 
 import org.apache.flink.api.java.DataSet;
 import org.apache.flink.api.java.ExecutionEnvironment;
+import org.apache.flink.api.java.tuple.Tuple3;
 import org.apache.flink.graph.Edge;
 import org.apache.flink.types.NullValue;
 
 import java.util.ArrayList;
+import java.util.Arrays;
--- End diff --

unused import


> Port the Flink DataSet Triangle Enumeration example to the Gelly library
> 
>
> Key: FLINK-2714
> URL: https://issues.apache.org/jira/browse/FLINK-2714
> Project: Flink
>  Issue Type: Task
>  Components: Gelly
>Affects Versions: 0.10
> Environment: 
>Reporter: Andra Lungu
>Assignee: Saumitra Shahapure
>Priority: Trivial
>  Labels: newbie, starter
>
> Currently, the Gelly library contains one method for counting the number of 
> triangles in a graph: a gather-apply-scatter version. 
> This issue proposes the addition of a library method based on this Flink 
> example:
> https://github.com/apache/flink/blob/master/flink-examples/flink-java-examples/src/main/java/org/apache/flink/examples/java/graph/EnumTrianglesOpt.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2714] [Gelly] Copy triangle counting lo...

2015-10-15 Thread vasia
Github user vasia commented on the pull request:

https://github.com/apache/flink/pull/1250#issuecomment-148347416
  
Hi @ssaumitra,
thanks a lot for updating the PR! I have left a few minor comments that 
should be easy to fix.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2844) Remove old web interface and default to the new one

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958807#comment-14958807
 ] 

ASF GitHub Bot commented on FLINK-2844:
---

Github user mxm commented on the pull request:

https://github.com/apache/flink/pull/1246#issuecomment-148368860
  
Yes, #1222 is obsolete now.


> Remove old web interface and default to the new one
> ---
>
> Key: FLINK-2844
> URL: https://issues.apache.org/jira/browse/FLINK-2844
> Project: Flink
>  Issue Type: New Feature
>  Components: JobManager
>Affects Versions: 0.10
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
> Fix For: 0.10
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2714] [Gelly] Copy triangle counting lo...

2015-10-15 Thread vasia
Github user vasia commented on a diff in the pull request:

https://github.com/apache/flink/pull/1250#discussion_r42105999
  
--- Diff: 
flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/TriangleEnumerator.java
 ---
@@ -0,0 +1,347 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.graph.library;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.functions.ReduceFunction;
+import org.apache.flink.api.common.functions.FlatMapFunction;
+import org.apache.flink.api.common.functions.JoinFunction;
+import org.apache.flink.api.common.functions.GroupReduceFunction;
+import org.apache.flink.api.common.operators.Order;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.ExecutionEnvironment;
+import org.apache.flink.api.java.functions.FunctionAnnotation;
+import org.apache.flink.api.java.tuple.Tuple3;
+import org.apache.flink.api.java.tuple.Tuple4;
+import org.apache.flink.graph.Edge;
+import org.apache.flink.graph.Graph;
+import org.apache.flink.graph.GraphAlgorithm;
+import org.apache.flink.graph.example.utils.TriangleCountData;
+import org.apache.flink.types.NullValue;
+import org.apache.flink.util.Collector;
+
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
+
+/**
+ * This function returns number of triangles present in the input graph.
--- End diff --

It now returns the triangles themselves, right?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: [FLINK-2714] [Gelly] Copy triangle counting lo...

2015-10-15 Thread vasia
Github user vasia commented on a diff in the pull request:

https://github.com/apache/flink/pull/1250#discussion_r42105936
  
--- Diff: 
flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/TriangleEnumerator.java
 ---
@@ -0,0 +1,347 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.graph.library;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.functions.ReduceFunction;
+import org.apache.flink.api.common.functions.FlatMapFunction;
+import org.apache.flink.api.common.functions.JoinFunction;
+import org.apache.flink.api.common.functions.GroupReduceFunction;
+import org.apache.flink.api.common.operators.Order;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.ExecutionEnvironment;
--- End diff --

unused import


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2714) Port the Flink DataSet Triangle Enumeration example to the Gelly library

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958684#comment-14958684
 ] 

ASF GitHub Bot commented on FLINK-2714:
---

Github user vasia commented on a diff in the pull request:

https://github.com/apache/flink/pull/1250#discussion_r42105999
  
--- Diff: 
flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/TriangleEnumerator.java
 ---
@@ -0,0 +1,347 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.graph.library;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.functions.ReduceFunction;
+import org.apache.flink.api.common.functions.FlatMapFunction;
+import org.apache.flink.api.common.functions.JoinFunction;
+import org.apache.flink.api.common.functions.GroupReduceFunction;
+import org.apache.flink.api.common.operators.Order;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.ExecutionEnvironment;
+import org.apache.flink.api.java.functions.FunctionAnnotation;
+import org.apache.flink.api.java.tuple.Tuple3;
+import org.apache.flink.api.java.tuple.Tuple4;
+import org.apache.flink.graph.Edge;
+import org.apache.flink.graph.Graph;
+import org.apache.flink.graph.GraphAlgorithm;
+import org.apache.flink.graph.example.utils.TriangleCountData;
+import org.apache.flink.types.NullValue;
+import org.apache.flink.util.Collector;
+
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
+
+/**
+ * This function returns number of triangles present in the input graph.
--- End diff --

It now returns the triangles themselves, right?


> Port the Flink DataSet Triangle Enumeration example to the Gelly library
> 
>
> Key: FLINK-2714
> URL: https://issues.apache.org/jira/browse/FLINK-2714
> Project: Flink
>  Issue Type: Task
>  Components: Gelly
>Affects Versions: 0.10
> Environment: 
>Reporter: Andra Lungu
>Assignee: Saumitra Shahapure
>Priority: Trivial
>  Labels: newbie, starter
>
> Currently, the Gelly library contains one method for counting the number of 
> triangles in a graph: a gather-apply-scatter version. 
> This issue proposes the addition of a library method based on this Flink 
> example:
> https://github.com/apache/flink/blob/master/flink-examples/flink-java-examples/src/main/java/org/apache/flink/examples/java/graph/EnumTrianglesOpt.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2714) Port the Flink DataSet Triangle Enumeration example to the Gelly library

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958681#comment-14958681
 ] 

ASF GitHub Bot commented on FLINK-2714:
---

Github user vasia commented on a diff in the pull request:

https://github.com/apache/flink/pull/1250#discussion_r42105936
  
--- Diff: 
flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/TriangleEnumerator.java
 ---
@@ -0,0 +1,347 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.graph.library;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.functions.ReduceFunction;
+import org.apache.flink.api.common.functions.FlatMapFunction;
+import org.apache.flink.api.common.functions.JoinFunction;
+import org.apache.flink.api.common.functions.GroupReduceFunction;
+import org.apache.flink.api.common.operators.Order;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.ExecutionEnvironment;
--- End diff --

unused import


> Port the Flink DataSet Triangle Enumeration example to the Gelly library
> 
>
> Key: FLINK-2714
> URL: https://issues.apache.org/jira/browse/FLINK-2714
> Project: Flink
>  Issue Type: Task
>  Components: Gelly
>Affects Versions: 0.10
> Environment: 
>Reporter: Andra Lungu
>Assignee: Saumitra Shahapure
>Priority: Trivial
>  Labels: newbie, starter
>
> Currently, the Gelly library contains one method for counting the number of 
> triangles in a graph: a gather-apply-scatter version. 
> This issue proposes the addition of a library method based on this Flink 
> example:
> https://github.com/apache/flink/blob/master/flink-examples/flink-java-examples/src/main/java/org/apache/flink/examples/java/graph/EnumTrianglesOpt.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2714) Port the Flink DataSet Triangle Enumeration example to the Gelly library

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958683#comment-14958683
 ] 

ASF GitHub Bot commented on FLINK-2714:
---

Github user vasia commented on a diff in the pull request:

https://github.com/apache/flink/pull/1250#discussion_r42105949
  
--- Diff: 
flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/TriangleEnumerator.java
 ---
@@ -0,0 +1,347 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.graph.library;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.functions.ReduceFunction;
+import org.apache.flink.api.common.functions.FlatMapFunction;
+import org.apache.flink.api.common.functions.JoinFunction;
+import org.apache.flink.api.common.functions.GroupReduceFunction;
+import org.apache.flink.api.common.operators.Order;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.ExecutionEnvironment;
+import org.apache.flink.api.java.functions.FunctionAnnotation;
+import org.apache.flink.api.java.tuple.Tuple3;
+import org.apache.flink.api.java.tuple.Tuple4;
+import org.apache.flink.graph.Edge;
+import org.apache.flink.graph.Graph;
+import org.apache.flink.graph.GraphAlgorithm;
+import org.apache.flink.graph.example.utils.TriangleCountData;
--- End diff --

same here :)


> Port the Flink DataSet Triangle Enumeration example to the Gelly library
> 
>
> Key: FLINK-2714
> URL: https://issues.apache.org/jira/browse/FLINK-2714
> Project: Flink
>  Issue Type: Task
>  Components: Gelly
>Affects Versions: 0.10
> Environment: 
>Reporter: Andra Lungu
>Assignee: Saumitra Shahapure
>Priority: Trivial
>  Labels: newbie, starter
>
> Currently, the Gelly library contains one method for counting the number of 
> triangles in a graph: a gather-apply-scatter version. 
> This issue proposes the addition of a library method based on this Flink 
> example:
> https://github.com/apache/flink/blob/master/flink-examples/flink-java-examples/src/main/java/org/apache/flink/examples/java/graph/EnumTrianglesOpt.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2714) Port the Flink DataSet Triangle Enumeration example to the Gelly library

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958705#comment-14958705
 ] 

ASF GitHub Bot commented on FLINK-2714:
---

Github user vasia commented on a diff in the pull request:

https://github.com/apache/flink/pull/1250#discussion_r42107077
  
--- Diff: 
flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/TriangleEnumerator.java
 ---
@@ -0,0 +1,347 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.graph.library;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.functions.ReduceFunction;
+import org.apache.flink.api.common.functions.FlatMapFunction;
+import org.apache.flink.api.common.functions.JoinFunction;
+import org.apache.flink.api.common.functions.GroupReduceFunction;
+import org.apache.flink.api.common.operators.Order;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.ExecutionEnvironment;
+import org.apache.flink.api.java.functions.FunctionAnnotation;
+import org.apache.flink.api.java.tuple.Tuple3;
+import org.apache.flink.api.java.tuple.Tuple4;
+import org.apache.flink.graph.Edge;
+import org.apache.flink.graph.Graph;
+import org.apache.flink.graph.GraphAlgorithm;
+import org.apache.flink.graph.example.utils.TriangleCountData;
+import org.apache.flink.types.NullValue;
+import org.apache.flink.util.Collector;
+
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
+
+/**
+ * This function returns number of triangles present in the input graph.
+ * A triangle consists of three edges that connect three vertices with 
each other.
+ * 
+ * 
+ * The basic algorithm works as follows:
+ * It groups all edges that share a common vertex and builds triads, i.e., 
triples of vertices
+ * that are connected by two edges. Finally, all triads are filtered for 
which no third edge exists
+ * that closes the triangle.
+ * 
+ * 
+ * For a group of n edges that share a common vertex, the number of 
built triads is quadratic ((n*(n-1))/2).
+ * Therefore, an optimization of the algorithm is to group edges on the 
vertex with the smaller output degree to
+ * reduce the number of triads.
+ * This implementation extends the basic algorithm by computing output 
degrees of edge vertices and
+ * grouping on edges on the vertex with the smaller degree.
+ */
+
+public class TriangleEnumerator<K extends Comparable<K>, VV, EV> implements GraphAlgorithm<K, VV, EV, DataSet<Tuple3<K, K, K>>> {
+	@Override
+	public DataSet<Tuple3<K, K, K>> run(Graph<K, VV, EV> input) throws Exception {
+
+		DataSet<Edge<K, EV>> edges = input.getEdges();
+
+		// annotate edges with degrees
+		DataSet<EdgeWithDegrees<K>> edgesWithDegrees = edges.flatMap(new EdgeDuplicator<K, EV>())
+				.groupBy(0).sortGroup(1, Order.ASCENDING).reduceGroup(new DegreeCounter<K, EV>())
+				.groupBy(EdgeWithDegrees.V1, EdgeWithDegrees.V2).reduce(new DegreeJoiner<K>());
+
+		// project edges by degrees
+		DataSet<Edge<K, NullValue>> edgesByDegree = edgesWithDegrees.map(new EdgeByDegreeProjector<K>());
+		// project edges by vertex id
+		DataSet<Edge<K, NullValue>> edgesById = edgesByDegree.map(new EdgeByIdProjector<K>());
+
+		DataSet<Tuple3<K, K, K>> triangles = edgesByDegree
+				// build triads
+				.groupBy(EdgeWithDegrees.V1).sortGroup(EdgeWithDegrees.V2, Order.ASCENDING)
+				.reduceGroup(new TriadBuilder<K>())
+				// filter triads
+				.join(edgesById).where(Triad.V2, Triad.V3).equalTo(0, 1).with(new TriadFilter<K>());
+
+		return triangles;
+	}
+
+   /**
+* Emits 
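
For context, a hedged sketch of how the library method in the diff above would be 
invoked, assuming it follows Gelly's {{GraphAlgorithm}} convention and that 
{{TriangleCountData.getDefaultEdgeDataSet}} is the test-data helper referenced in 
the imports:

{code}
// Usage sketch; the API of the TriangleEnumerator under review may still change.
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet<Edge<Long, NullValue>> edges = TriangleCountData.getDefaultEdgeDataSet(env);
Graph<Long, NullValue, NullValue> graph = Graph.fromDataSet(edges, env);

// one Tuple3 per enumerated triangle, holding the ids of its three vertices
DataSet<Tuple3<Long, Long, Long>> triangles =
		graph.run(new TriangleEnumerator<Long, NullValue, NullValue>());
triangles.print();
{code}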

[GitHub] flink pull request: [FLINK-2714] [Gelly] Copy triangle counting lo...

2015-10-15 Thread vasia
Github user vasia commented on a diff in the pull request:

https://github.com/apache/flink/pull/1250#discussion_r42107077
  
--- Diff: 
flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/TriangleEnumerator.java
 ---
@@ -0,0 +1,347 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.graph.library;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.functions.ReduceFunction;
+import org.apache.flink.api.common.functions.FlatMapFunction;
+import org.apache.flink.api.common.functions.JoinFunction;
+import org.apache.flink.api.common.functions.GroupReduceFunction;
+import org.apache.flink.api.common.operators.Order;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.ExecutionEnvironment;
+import org.apache.flink.api.java.functions.FunctionAnnotation;
+import org.apache.flink.api.java.tuple.Tuple3;
+import org.apache.flink.api.java.tuple.Tuple4;
+import org.apache.flink.graph.Edge;
+import org.apache.flink.graph.Graph;
+import org.apache.flink.graph.GraphAlgorithm;
+import org.apache.flink.graph.example.utils.TriangleCountData;
+import org.apache.flink.types.NullValue;
+import org.apache.flink.util.Collector;
+
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
+
+/**
+ * This function returns number of triangles present in the input graph.
+ * A triangle consists of three edges that connect three vertices with 
each other.
+ * 
+ * 
+ * The basic algorithm works as follows:
+ * It groups all edges that share a common vertex and builds triads, i.e., 
triples of vertices
+ * that are connected by two edges. Finally, all triads are filtered for 
which no third edge exists
+ * that closes the triangle.
+ * 
+ * 
+ * For a group of n edges that share a common vertex, the number of 
built triads is quadratic ((n*(n-1))/2).
+ * Therefore, an optimization of the algorithm is to group edges on the 
vertex with the smaller output degree to
+ * reduce the number of triads.
+ * This implementation extends the basic algorithm by computing output 
degrees of edge vertices and
+ * grouping on edges on the vertex with the smaller degree.
+ */
+
+public class TriangleEnumerator<K extends Comparable<K>, VV, EV> implements GraphAlgorithm<K, VV, EV, DataSet<Tuple3<K, K, K>>> {
+	@Override
+	public DataSet<Tuple3<K, K, K>> run(Graph<K, VV, EV> input) throws Exception {
+
+		DataSet<Edge<K, EV>> edges = input.getEdges();
+
+		// annotate edges with degrees
+		DataSet<EdgeWithDegrees<K>> edgesWithDegrees = edges.flatMap(new EdgeDuplicator<K, EV>())
+				.groupBy(0).sortGroup(1, Order.ASCENDING).reduceGroup(new DegreeCounter<K, EV>())
+				.groupBy(EdgeWithDegrees.V1, EdgeWithDegrees.V2).reduce(new DegreeJoiner<K>());
+
+		// project edges by degrees
+		DataSet<Edge<K, NullValue>> edgesByDegree = edgesWithDegrees.map(new EdgeByDegreeProjector<K>());
+		// project edges by vertex id
+		DataSet<Edge<K, NullValue>> edgesById = edgesByDegree.map(new EdgeByIdProjector<K>());
+
+		DataSet<Tuple3<K, K, K>> triangles = edgesByDegree
+				// build triads
+				.groupBy(EdgeWithDegrees.V1).sortGroup(EdgeWithDegrees.V2, Order.ASCENDING)
+				.reduceGroup(new TriadBuilder<K>())
+				// filter triads
+				.join(edgesById).where(Triad.V2, Triad.V3).equalTo(0, 1).with(new TriadFilter<K>());
+
+		return triangles;
+	}
+
+	/**
+	 * Emits for an edge the original edge and its switched version.
+	 */
+	private static final class EdgeDuplicator<K, EV> implements FlatMapFunction<Edge<K, EV>, Edge<K, EV>> {
+
+		@Override
+		public void 

[GitHub] flink pull request: [FLINK-2714] [Gelly] Copy triangle counting lo...

2015-10-15 Thread ssaumitra
Github user ssaumitra commented on a diff in the pull request:

https://github.com/apache/flink/pull/1250#discussion_r42108353
  
--- Diff: 
flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/TriangleEnumerator.java
 ---
@@ -0,0 +1,347 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.graph.library;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.functions.ReduceFunction;
+import org.apache.flink.api.common.functions.FlatMapFunction;
+import org.apache.flink.api.common.functions.JoinFunction;
+import org.apache.flink.api.common.functions.GroupReduceFunction;
+import org.apache.flink.api.common.operators.Order;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.ExecutionEnvironment;
+import org.apache.flink.api.java.functions.FunctionAnnotation;
+import org.apache.flink.api.java.tuple.Tuple3;
+import org.apache.flink.api.java.tuple.Tuple4;
+import org.apache.flink.graph.Edge;
+import org.apache.flink.graph.Graph;
+import org.apache.flink.graph.GraphAlgorithm;
+import org.apache.flink.graph.example.utils.TriangleCountData;
+import org.apache.flink.types.NullValue;
+import org.apache.flink.util.Collector;
+
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
+
+/**
+ * This function returns the number of triangles present in the input graph.
+ * A triangle consists of three edges that connect three vertices with each other.
+ *
+ * <p>
+ * The basic algorithm works as follows:
+ * It groups all edges that share a common vertex and builds triads, i.e., triples of vertices
+ * that are connected by two edges. Finally, all triads are filtered for which no third edge exists
+ * that closes the triangle.
+ * </p>
+ *
+ * <p>
+ * For a group of n edges that share a common vertex, the number of built triads is quadratic ((n*(n-1))/2).
+ * Therefore, an optimization of the algorithm is to group edges on the vertex with the smaller output degree to
+ * reduce the number of triads.
+ * This implementation extends the basic algorithm by computing output degrees of edge vertices and
+ * grouping edges on the vertex with the smaller degree.
+ * </p>
+ */
+public class TriangleEnumerator<K extends Comparable<K>, VV, EV>
+		implements GraphAlgorithm<K, VV, EV, DataSet<Tuple3<K, K, K>>> {
+	@Override
+	public DataSet<Tuple3<K, K, K>> run(Graph<K, VV, EV> input) throws Exception {
+
+		DataSet<Edge<K, EV>> edges = input.getEdges();
+
+		// annotate edges with degrees
+		DataSet<EdgeWithDegrees<K>> edgesWithDegrees = edges.flatMap(new EdgeDuplicator<K, EV>())
+				.groupBy(0).sortGroup(1, Order.ASCENDING).reduceGroup(new DegreeCounter<K, EV>())
+				.groupBy(EdgeWithDegrees.V1, EdgeWithDegrees.V2).reduce(new DegreeJoiner<K>());
+
+		// project edges by degrees
+		DataSet<Edge<K, NullValue>> edgesByDegree = edgesWithDegrees.map(new EdgeByDegreeProjector<K>());
+		// project edges by vertex id
+		DataSet<Edge<K, NullValue>> edgesById = edgesByDegree.map(new EdgeByIdProjector<K>());
+
+		DataSet<Tuple3<K, K, K>> triangles = edgesByDegree
+				// build triads
+				.groupBy(EdgeWithDegrees.V1).sortGroup(EdgeWithDegrees.V2, Order.ASCENDING)
+				.reduceGroup(new TriadBuilder<K>())
+				// filter triads
+				.join(edgesById).where(Triad.V2, Triad.V3).equalTo(0, 1).with(new TriadFilter<K>());
+
+		return triangles;
+	}
+
+	/**
+	 * Emits for an edge the original edge and its switched version.
+	 */
+	private static final class EdgeDuplicator<K, EV> implements FlatMapFunction<Edge<K, EV>, Edge<K, EV>> {
+
+		@Override
+		public 
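
The triad-build-and-filter scheme described in the javadoc above can also be sketched independently of the Flink operators. The following plain-Java sketch is illustrative only; all names are made up, vertex ids are assumed to be non-negative ints, and it groups each edge on its smaller vertex id rather than on the lower-degree endpoint, which is the optimization the Gelly implementation adds:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.SortedSet;
import java.util.TreeSet;

// Illustrative, non-Flink sketch: build triads around each vertex, then keep
// only the triads whose closing edge exists.
public class TriadSketch {

	public static List<int[]> triangles(int[][] edges) {
		Map<Integer, SortedSet<Integer>> adjacency = new HashMap<>();
		Set<Long> edgeSet = new HashSet<>();
		for (int[] e : edges) {
			int a = Math.min(e[0], e[1]);
			int b = Math.max(e[0], e[1]);
			adjacency.computeIfAbsent(a, k -> new TreeSet<>()).add(b);
			adjacency.computeIfAbsent(b, k -> new TreeSet<>()).add(a);
			edgeSet.add(((long) a << 32) | b); // encode (min,max) for O(1) lookup
		}
		List<int[]> result = new ArrayList<>();
		for (Map.Entry<Integer, SortedSet<Integer>> entry : adjacency.entrySet()) {
			int apex = entry.getKey();
			Integer[] nb = entry.getValue().toArray(new Integer[0]);
			// build triads: all pairs of neighbors of 'apex' (quadratic in the group size)
			for (int i = 0; i < nb.length; i++) {
				for (int j = i + 1; j < nb.length; j++) {
					int v = nb[i];
					int w = nb[j];
					// emit each triangle once (only from its smallest vertex) and
					// filter triads: keep it only if the closing edge (v, w) exists
					if (apex < v && edgeSet.contains(((long) v << 32) | w)) {
						result.add(new int[]{apex, v, w});
					}
				}
			}
		}
		return result;
	}
}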

[jira] [Commented] (FLINK-2692) Untangle CsvInputFormat into PojoTypeCsvInputFormat and TupleTypeCsvInputFormat

2015-10-15 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958724#comment-14958724
 ] 

Chesnay Schepler commented on FLINK-2692:
-

I'm gonna take a stab at this one.

> Untangle CsvInputFormat into PojoTypeCsvInputFormat and 
> TupleTypeCsvInputFormat 
> 
>
> Key: FLINK-2692
> URL: https://issues.apache.org/jira/browse/FLINK-2692
> Project: Flink
>  Issue Type: Improvement
>Reporter: Till Rohrmann
>Priority: Minor
>
> The {{CsvInputFormat}} currently allows to return values as a {{Tuple}} or a 
> {{Pojo}} type. As a consequence, the processing logic, which has to work for 
> both types, is overly complex. For example, the {{CsvInputFormat}} contains 
> fields which are only used when a Pojo is returned. Moreover, the pojo field 
> information is constructed by calling setter methods which have to be called 
> in a very specific order, otherwise they fail. E.g. one first has to call 
> {{setFieldTypes}} before calling {{setOrderOfPOJOFields}}, otherwise the 
> number of fields might be different. Furthermore, some of the methods can 
> only be called if the return type is a {{Pojo}} type, because they expect 
> that a {{PojoTypeInfo}} is present.
> I think the {{CsvInputFormat}} should be refactored to make the code more 
> easily maintainable. I propose to split it up into a 
> {{PojoTypeCsvInputFormat}} and a {{TupleTypeCsvInputFormat}} which take all 
> the required information via their constructors instead of using the 
> {{setFields}} and {{setOrderOfPOJOFields}} approach.
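
A minimal sketch of the constructor-driven shape this proposal describes (the constructor signatures are illustrative assumptions, not the eventual Flink API):

{noformat}
// Illustrative sketch only: all parsing information arrives via the
// constructor, so there is no fragile setter-ordering protocol.
class TupleTypeCsvInputFormat {
	private final String path;
	private final String fieldDelimiter;
	private final Class<?>[] fieldTypes;

	TupleTypeCsvInputFormat(String path, String fieldDelimiter, Class<?>... fieldTypes) {
		this.path = path;
		this.fieldDelimiter = fieldDelimiter;
		this.fieldTypes = fieldTypes;
	}
}

class PojoTypeCsvInputFormat<T> {
	private final String path;
	private final Class<T> pojoClass;
	private final String[] pojoFieldOrder;

	// field order and target class arrive together, so they can never get out of sync
	PojoTypeCsvInputFormat(String path, Class<T> pojoClass, String... pojoFieldOrder) {
		this.path = path;
		this.pojoClass = pojoClass;
		this.pojoFieldOrder = pojoFieldOrder;
	}
}
{noformat}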





[jira] [Assigned] (FLINK-2692) Untangle CsvInputFormat into PojoTypeCsvInputFormat and TupleTypeCsvInputFormat

2015-10-15 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-2692:
---

Assignee: Chesnay Schepler

> Untangle CsvInputFormat into PojoTypeCsvInputFormat and 
> TupleTypeCsvInputFormat 
> 
>
> Key: FLINK-2692
> URL: https://issues.apache.org/jira/browse/FLINK-2692
> Project: Flink
>  Issue Type: Improvement
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Minor
>
> The {{CsvInputFormat}} currently allows to return values as a {{Tuple}} or a 
> {{Pojo}} type. As a consequence, the processing logic, which has to work for 
> both types, is overly complex. For example, the {{CsvInputFormat}} contains 
> fields which are only used when a Pojo is returned. Moreover, the pojo field 
> information is constructed by calling setter methods which have to be called 
> in a very specific order, otherwise they fail. E.g. one first has to call 
> {{setFieldTypes}} before calling {{setOrderOfPOJOFields}}, otherwise the 
> number of fields might be different. Furthermore, some of the methods can 
> only be called if the return type is a {{Pojo}} type, because they expect 
> that a {{PojoTypeInfo}} is present.
> I think the {{CsvInputFormat}} should be refactored to make the code more 
> easily maintainable. I propose to split it up into a 
> {{PojoTypeCsvInputFormat}} and a {{TupleTypeCsvInputFormat}} which take all 
> the required information via their constructors instead of using the 
> {{setFields}} and {{setOrderOfPOJOFields}} approach.






[jira] [Created] (FLINK-2856) Introduce flink.version property into quickstart archetype

2015-10-15 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-2856:
-

 Summary: Introduce flink.version property into quickstart archetype
 Key: FLINK-2856
 URL: https://issues.apache.org/jira/browse/FLINK-2856
 Project: Flink
  Issue Type: Improvement
  Components: Quickstarts
Reporter: Robert Metzger


With the quickstarts we're currently creating, users have to manually change 
all the dependencies if they're changing the Flink version.

I propose to introduce a property for setting the Flink version.
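
For illustration, the quickstart pom could then declare the dependency versions like this (only the property name {{flink.version}} comes from this proposal; the surrounding coordinates are an example):

{noformat}
<!-- Hypothetical quickstart pom fragment: change the Flink version in one place. -->
<properties>
    <flink.version>0.10-SNAPSHOT</flink.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-java</artifactId>
        <version>${flink.version}</version>
    </dependency>
</dependencies>
{noformat}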







[GitHub] flink pull request: [FLINK-2844][jobmanager] remove old web interf...

2015-10-15 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/1246#issuecomment-148367867
  
I suspect we can close https://github.com/apache/flink/pull/1222 then?




[GitHub] flink pull request: [FLINK-2844][jobmanager] remove old web interf...

2015-10-15 Thread mxm
Github user mxm commented on the pull request:

https://github.com/apache/flink/pull/1246#issuecomment-148368860
  
Yes, #1222 is obsolete now.




[GitHub] flink pull request: [FLINK-2798] Serve static files for the new we...

2015-10-15 Thread rmetzger
Github user rmetzger closed the pull request at:

https://github.com/apache/flink/pull/1222




[GitHub] flink pull request: [FLINK-2798] Serve static files for the new we...

2015-10-15 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/1222#issuecomment-148369010
  
Max took the relevant changes from this PR into 
https://github.com/apache/flink/pull/1246. Closing ...








[jira] [Updated] (FLINK-2855) Add a documentation section for Gelly library methods

2015-10-15 Thread Vasia Kalavri (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasia Kalavri updated FLINK-2855:
-
Description: 
We should add a separate documentation section for the Gelly library methods. 
For each method, we should have an overview of the used algorithm, 
implementation details and usage information.
You can find an example of what these should look like 
[here|http://gellyschool.com/library.html].

  was:
We should add a separate documentation section for the Gelly library methods. 
For each method, we should have an overview of the used algorithm, 
implementation details and usage information.
You can find an example of what these should look like is 
[here|http://gellyschool.com/library.html].


> Add a documentation section for Gelly library methods
> -
>
> Key: FLINK-2855
> URL: https://issues.apache.org/jira/browse/FLINK-2855
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Gelly
>Reporter: Vasia Kalavri
>Assignee: Vasia Kalavri
>  Labels: easyfix, newbie
>
> We should add a separate documentation section for the Gelly library methods. 
> For each method, we should have an overview of the used algorithm, 
> implementation details and usage information.
> You can find an example of what these should look like 
> [here|http://gellyschool.com/library.html].






[GitHub] flink pull request: [FLINK-2714] [Gelly] Copy triangle counting lo...

2015-10-15 Thread vasia
Github user vasia commented on a diff in the pull request:

https://github.com/apache/flink/pull/1250#discussion_r42107165
  
--- Diff: 
flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/TriangleEnumerator.java
 ---

[jira] [Created] (FLINK-2855) Add a documentation section for Gelly library methods

2015-10-15 Thread Vasia Kalavri (JIRA)
Vasia Kalavri created FLINK-2855:


 Summary: Add a documentation section for Gelly library methods
 Key: FLINK-2855
 URL: https://issues.apache.org/jira/browse/FLINK-2855
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Gelly
Reporter: Vasia Kalavri


We should add a separate documentation section for the Gelly library methods. 
For each method, we should have an overview of the used algorithm, 
implementation details and usage information.
You can find an example of what these should look like 
[here|http://gellyschool.com/library.html].






[GitHub] flink pull request: [FLINK-2714] [Gelly] Copy triangle counting lo...

2015-10-15 Thread vasia
Github user vasia commented on a diff in the pull request:

https://github.com/apache/flink/pull/1250#discussion_r42113501
  
--- Diff: 
flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/library/TriangleEnumerator.java
 ---

[GitHub] flink pull request: [FLINK-2844][jobmanager] remove old web interf...

2015-10-15 Thread mxm
Github user mxm commented on the pull request:

https://github.com/apache/flink/pull/1246#issuecomment-148367359
  
An update from my side. I have

- adapted the `WebFrontendITCase`
- adapted the `YarnSessionFIFOITCase`
- removed the Jetty dependencies

Based on #1222 I changed the serving of the static files to work with the 
fat jar and a temporary directory. This is the default way now and ensures that 
everything works standalone and on YARN.
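
A sketch of the extract-to-temp-directory approach described here (illustrative only; the resource prefix and names are assumptions, not the actual code in the PR):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Copy a static resource bundled in the fat jar into a temp directory so an
// HTTP server can serve it from the file system, both standalone and on YARN.
class StaticFileExtractor {

	static Path extract(String resourceName) throws IOException {
		Path tmpDir = Files.createTempDirectory("flink-web-");
		Path target = tmpDir.resolve(resourceName);
		Files.createDirectories(target.getParent());
		try (InputStream in = StaticFileExtractor.class.getResourceAsStream("/web/" + resourceName)) {
			if (in == null) {
				throw new IOException("Static resource not found in jar: " + resourceName);
			}
			Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
		}
		return target;
	}
}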






[GitHub] flink pull request: [FLINK-2720][storm-compatibility]Add Storm-Cou...

2015-10-15 Thread mjsax
Github user mjsax commented on the pull request:

https://github.com/apache/flink/pull/1157#issuecomment-148370496
  
Any progress on this?




[jira] [Commented] (FLINK-2479) Refactoring of org.apache.flink.runtime.operators.testutils.TestData class

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958541#comment-14958541
 ] 

ASF GitHub Bot commented on FLINK-2479:
---

Github user fhueske commented on the pull request:

https://github.com/apache/flink/pull/1160#issuecomment-148323391
  
Thanks for the update @zentol!
I will merge this PR.


> Refactoring of org.apache.flink.runtime.operators.testutils.TestData class
> --
>
> Key: FLINK-2479
> URL: https://issues.apache.org/jira/browse/FLINK-2479
> Project: Flink
>  Issue Type: Task
>  Components: Local Runtime
>Reporter: Ricky Pogalz
>Assignee: Chesnay Schepler
>Priority: Minor
> Fix For: pre-apache
>
>
> Currently, there are still tests which use {{Record}} from the old record 
> API. One of the test util classes is {{TestData}}, including {{Generator}} 
> and some other classes still using {{Record}}. An alternative implementation 
> of the {{Generator}} without {{Record}} already exists in the {{TestData}} 
> class, namely {{TupleGenerator}}.
> Please replace the utility classes in {{TestData}} that still use {{Record}} 
> and adapt all of its usages.







[jira] [Closed] (FLINK-2770) KafkaITCase.testConcurrentProducerConsumerTopology fails

2015-10-15 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-2770.

Resolution: Duplicate

> KafkaITCase.testConcurrentProducerConsumerTopology fails
> 
>
> Key: FLINK-2770
> URL: https://issues.apache.org/jira/browse/FLINK-2770
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 0.10
>Reporter: Matthias J. Sax
>Priority: Critical
> Fix For: 0.10
>
>
> https://travis-ci.org/mjsax/flink/jobs/82308003
> {noformat}
> Running org.apache.flink.streaming.connectors.kafka.KafkaITCase
> 09/26/2015 17:52:50   Job execution switched to status RUNNING.
> 09/26/2015 17:52:50   Source: Custom Source -> Sink: Unnamed(1/1) switched to 
> SCHEDULED 
> 09/26/2015 17:52:50   Source: Custom Source -> Sink: Unnamed(1/1) switched to 
> DEPLOYING 
> 09/26/2015 17:52:50   Source: Custom Source -> Sink: Unnamed(1/1) switched to 
> RUNNING 
> 09/26/2015 17:52:50   Source: Custom Source -> Sink: Unnamed(1/1) switched to 
> FINISHED 
> 09/26/2015 17:52:50   Job execution switched to status FINISHED.
> 09/26/2015 17:52:50   Job execution switched to status RUNNING.
> 09/26/2015 17:52:50   Source: Custom Source -> Map -> Flat Map(1/1) switched 
> to SCHEDULED 
> 09/26/2015 17:52:50   Source: Custom Source -> Map -> Flat Map(1/1) switched 
> to DEPLOYING 
> 09/26/2015 17:52:50   Source: Custom Source -> Map -> Flat Map(1/1) switched 
> to RUNNING 
> 09/26/2015 17:52:51   Source: Custom Source -> Map -> Flat Map(1/1) switched 
> to FAILED 
> java.lang.Exception: Could not forward element to next operator
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher.run(LegacyFetcher.java:242)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.run(FlinkKafkaConsumer.java:382)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:57)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:57)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:198)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:580)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: Could not forward element to next 
> operator
>   at 
> org.apache.flink.streaming.runtime.tasks.OutputHandler$CopyingChainingOutput.collect(OutputHandler.java:332)
>   at 
> org.apache.flink.streaming.runtime.tasks.OutputHandler$CopyingChainingOutput.collect(OutputHandler.java:316)
>   at 
> org.apache.flink.streaming.runtime.io.CollectorWrapper.collect(CollectorWrapper.java:50)
>   at 
> org.apache.flink.streaming.runtime.io.CollectorWrapper.collect(CollectorWrapper.java:30)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$SourceOutput.collect(SourceStreamTask.java:106)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource$NonTimestampContext.collect(StreamSource.java:92)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher$SimpleConsumerThread.run(LegacyFetcher.java:449)
> Caused by: java.lang.RuntimeException: Could not forward element to next 
> operator
>   at 
> org.apache.flink.streaming.runtime.tasks.OutputHandler$CopyingChainingOutput.collect(OutputHandler.java:332)
>   at 
> org.apache.flink.streaming.runtime.tasks.OutputHandler$CopyingChainingOutput.collect(OutputHandler.java:316)
>   at 
> org.apache.flink.streaming.runtime.io.CollectorWrapper.collect(CollectorWrapper.java:50)
>   at 
> org.apache.flink.streaming.runtime.io.CollectorWrapper.collect(CollectorWrapper.java:30)
>   at 
> org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:37)
>   at 
> org.apache.flink.streaming.runtime.tasks.OutputHandler$CopyingChainingOutput.collect(OutputHandler.java:329)
>   ... 6 more
> Caused by: 
> org.apache.flink.streaming.connectors.kafka.testutils.SuccessException
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase$7.flatMap(KafkaConsumerTestBase.java:931)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase$7.flatMap(KafkaConsumerTestBase.java:911)
>   at 
> org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:47)
>   at 
> org.apache.flink.streaming.runtime.tasks.OutputHandler$CopyingChainingOutput.collect(OutputHandler.java:329)
>   ... 11 more
> 09/26/2015 17:52:51   Job execution switched to status FAILING.
> 09/26/2015 17:52:51   Job execution switched to status FAILED.
> Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 80.981 sec 
> <<< FAILURE! - in org.apache.flink.streaming.connectors.kafka.KafkaITCase
> 

[jira] [Created] (FLINK-2854) KafkaITCase.testOneSourceMultiplePartitions failed on Travis

2015-10-15 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-2854:


 Summary: KafkaITCase.testOneSourceMultiplePartitions failed on 
Travis
 Key: FLINK-2854
 URL: https://issues.apache.org/jira/browse/FLINK-2854
 Project: Flink
  Issue Type: Bug
  Components: Kafka Connector
Affects Versions: 0.10
Reporter: Till Rohrmann
Priority: Critical


The {{KafkaITCase.testOneSourceMultiplePartitions}} failed on Travis with no 
output for 300s.

https://s3.amazonaws.com/archive.travis-ci.org/jobs/85472083/log.txt







[jira] [Commented] (FLINK-2125) String delimiter for SocketTextStream

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958578#comment-14958578
 ] 

ASF GitHub Bot commented on FLINK-2125:
---

Github user fhueske commented on the pull request:

https://github.com/apache/flink/pull/1077#issuecomment-148333755
  
OK, I will close this PR for now. 
Please reopen if you would like to continue to work on this issue.
Thank you.


> String delimiter for SocketTextStream
> -
>
> Key: FLINK-2125
> URL: https://issues.apache.org/jira/browse/FLINK-2125
> Project: Flink
>  Issue Type: Improvement
>  Components: Streaming
>Affects Versions: 0.9
>Reporter: Márton Balassi
>Priority: Minor
>  Labels: starter
>
> The SocketTextStreamFunction uses a character delimiter, despite other parts 
> of the API using String delimiter.
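
A minimal sketch of what a String-based delimiter could look like (illustrative only, not the SocketTextStreamFunction code): buffer the socket input and split on the delimiter string.

{noformat}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

// Illustrative sketch: emit records split on a multi-character String
// delimiter instead of a single char.
class StringDelimiterReader {

	static void read(String host, int port, String delimiter) throws IOException {
		try (Socket socket = new Socket(host, port);
				BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
			StringBuilder buffer = new StringBuilder();
			int c;
			while ((c = in.read()) != -1) {
				buffer.append((char) c);
				int pos = buffer.indexOf(delimiter);
				if (pos >= 0) {
					String record = buffer.substring(0, pos);
					buffer.delete(0, pos + delimiter.length());
					System.out.println(record); // hand the record downstream
				}
			}
		}
	}
}
{noformat}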





[jira] [Comment Edited] (FLINK-2763) Bug in Hybrid Hash Join: Request to spill a partition with less than two buffers.

2015-10-15 Thread Stefano Bortoli (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958485#comment-14958485
 ] 

Stefano Bortoli edited comment on FLINK-2763 at 10/15/15 8:17 AM:
--

Kryo serialization/deserialization is not thread-safe. In fact, it is a good idea 
to use a pool of Kryo objects in general. However, it would mean that the same 
KryoSerializer is used concurrently, which I am not sure is supposed to happen. 
Fortunately, the twitter library used to instantiate the Kryo() offers pooled 
serialization/deserialization methods embedding borrow() and release() 
executors. It should be sufficient to extend the default pool to implement the 
KryoSerializer initialization, and then use that one. I will give it a try and 
report.


was (Author: stefano.bortoli):
Kryo serialization/deserialization is not thread-safe. In fact, it is a good idea 
to use a pool of Kryo objects in general. However, it would mean that the same 
KryoSerializer is used concurrently, which I am not sure is supposed to happen. 
Fortunately, the twitter library used to instantiate the Kryo() offers pooled 
serialization/deserialization methods embedding borrow() and return() 
executors. It should be sufficient to extend the default pool to implement the 
KryoSerializer initialization, and then use that one. I will give it a try and 
report.
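
For reference, the borrow()/release() pooling pattern described in this comment can be hand-rolled in a few lines (illustrative sketch, not the twitter/chill API):

{noformat}
import com.esotericsoftware.kryo.Kryo;

import java.util.concurrent.ConcurrentLinkedQueue;

// Minimal Kryo pool sketch: borrow() hands each caller its own Kryo instance,
// release() returns it, so no Kryo object is ever used concurrently.
public class SimpleKryoPool {

	private final ConcurrentLinkedQueue<Kryo> pool = new ConcurrentLinkedQueue<>();

	// take an instance from the pool, creating one if the pool is empty
	public Kryo borrow() {
		Kryo kryo = pool.poll();
		return kryo != null ? kryo : new Kryo();
	}

	// return the instance once the caller is done (de)serializing
	public void release(Kryo kryo) {
		pool.offer(kryo);
	}
}
{noformat}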

> Bug in Hybrid Hash Join: Request to spill a partition with less than two 
> buffers.
> -
>
> Key: FLINK-2763
> URL: https://issues.apache.org/jira/browse/FLINK-2763
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Runtime
>Affects Versions: 0.10
>Reporter: Greg Hogan
>Assignee: Stephan Ewen
> Fix For: 0.10
>
>
> The following exception is thrown when running the example triangle listing 
> with an unmodified master build (4cadc3d6).
> {noformat}
> ./bin/flink run 
> ~/flink-examples/flink-java-examples/target/flink-java-examples-0.10-SNAPSHOT-EnumTrianglesOpt.jar
>  ~/rmat/undirected/s19_e8.ssv output
> {noformat}
> The only changes to {{flink-conf.yaml}} are {{taskmanager.numberOfTaskSlots: 
> 8}} and {{parallelism.default: 8}}.
> I have confirmed with input files 
> [s19_e8.ssv|https://drive.google.com/file/d/0B6TrSsnHj2HxR2lnMHR4amdyTnM/view?usp=sharing]
>  (40 MB) and 
> [s20_e8.ssv|https://drive.google.com/file/d/0B6TrSsnHj2HxNi1HbmptU29MTm8/view?usp=sharing]
>  (83 MB). On a second machine only the larger file caused the exception.
> {noformat}
> org.apache.flink.client.program.ProgramInvocationException: The program 
> execution failed: Job execution failed.
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:407)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:386)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:353)
>   at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:64)
>   at 
> org.apache.flink.examples.java.graph.EnumTrianglesOpt.main(EnumTrianglesOpt.java:125)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:434)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:350)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:290)
>   at 
> org.apache.flink.client.CliFrontend.executeProgramBlocking(CliFrontend.java:675)
>   at org.apache.flink.client.CliFrontend.run(CliFrontend.java:324)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:977)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1027)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:425)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 


[GitHub] flink pull request: [FLINK-2808] Rework state abstraction and clea...

2015-10-15 Thread gyfora
Github user gyfora commented on a diff in the pull request:

https://github.com/apache/flink/pull/1239#discussion_r42098258
  
--- Diff: 
flink-staging/flink-streaming/flink-streaming-core/src/main/java/org/apache/flink/streaming/api/state/AbstractHeapKvState.java
 ---
@@ -0,0 +1,145 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.state;
+
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.core.memory.DataOutputView;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import static java.util.Objects.requireNonNull;
+
+/**
+ * Base class for key/value state implementations that are backed by a regular heap hash map. The
+ * concrete implementations define how the state is checkpointed.
+ *
+ * @param <K> The type of the key.
+ * @param <V> The type of the value.
+ * @param <Backend> The type of the backend that snapshots this key/value state.
+ */
+public abstract class AbstractHeapKvState<K, V, Backend extends StateBackend<Backend>> implements KvState<K, V, Backend> {
+
+	/** Map containing the actual key/value pairs */
+	private final HashMap<K, V> state;
+
+	/** The serializer for the keys */
+	private final TypeSerializer<K> keySerializer;
+
+	/** The serializer for the values */
+	private final TypeSerializer<V> valueSerializer;
+
+	/** The value that is returned when no other value has been associated with a key, yet */
+	private final V defaultValue;
+
+	/** The current key, which the next value methods will refer to */
+	private K currentKey;
+
+	/**
+	 * Creates a new empty key/value state.
+	 *
+	 * @param keySerializer The serializer for the keys.
+	 * @param valueSerializer The serializer for the values.
+	 * @param defaultValue The value that is returned when no other value has been associated with a key, yet.
+	 */
+	protected AbstractHeapKvState(TypeSerializer<K> keySerializer,
+									TypeSerializer<V> valueSerializer,
+									V defaultValue) {
+		this(keySerializer, valueSerializer, defaultValue, new HashMap<K, V>());
+	}
+
+	/**
+	 * Creates a new key/value state for the given hash map of key/value pairs.
+	 *
+	 * @param keySerializer The serializer for the keys.
+	 * @param valueSerializer The serializer for the values.
+	 * @param defaultValue The value that is returned when no other value has been associated with a key, yet.
+	 * @param state The state map to use in this key/value state. May contain initial state.
+	 */
+	protected AbstractHeapKvState(TypeSerializer<K> keySerializer,
+									TypeSerializer<V> valueSerializer,
+									V defaultValue,
+									HashMap<K, V> state) {
+		this.state = requireNonNull(state);
+		this.keySerializer = requireNonNull(keySerializer);
+		this.valueSerializer = requireNonNull(valueSerializer);
+		this.defaultValue = defaultValue;
+	}
+
+	// ------------------------------------------------------------------------
+
+	@Override
+	public V value() {
+		V value = state.get(currentKey);
+		return value != null ? value : defaultValue;
--- End diff --

I think you should make a copy of the default value here. Otherwise you end up with the same objects for non-primitive types.
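
A minimal sketch of that copy, assuming the serializer's standard `copy(T)` method is the right tool here (the field names come from the diff above):

```
@Override
public V value() {
    V value = state.get(currentKey);
    // copy on each access so callers never end up sharing one mutable default instance
    return value != null ? value : valueSerializer.copy(defaultValue);
}
```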


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not 

[GitHub] flink pull request: [FLINK-2774] [scala shell] Import some Java an...

2015-10-15 Thread fhueske
Github user fhueske commented on the pull request:

https://github.com/apache/flink/pull/1247#issuecomment-148325444
  
Will try and merge this PR.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: [FLINK-2843] Add documentation for outer joins...

2015-10-15 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/1248#issuecomment-148328036
  
Thank you for documenting the feature. +1 to merge


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2774) Import Java API classes automatically in Flink's Scala shell

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958552#comment-14958552
 ] 

ASF GitHub Bot commented on FLINK-2774:
---

Github user fhueske commented on the pull request:

https://github.com/apache/flink/pull/1247#issuecomment-148325444
  
Will try and merge this PR.


> Import Java API classes automatically in Flink's Scala shell
> 
>
> Key: FLINK-2774
> URL: https://issues.apache.org/jira/browse/FLINK-2774
> Project: Flink
>  Issue Type: Improvement
>  Components: Scala Shell
>Reporter: Till Rohrmann
>Assignee: Chiwan Park
>Priority: Minor
>
> Flink's Scala API depends partially on Flink's Java API classes. For example, 
> the {{sortPartition}} method requires an {{Order}} enum value. Flink's Scala 
> shell, however, only imports the Scala API classes. Thus, if a user wants to 
> {{sortPartition}} in the Scala shell, he has to manually import the 
> {{org.apache.flink.api.common.operators.Order}} enumeration. In order to 
> improve the user experience, I propose to automatically import all Java API 
> classes in Flink's Scala shell.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2763) Bug in Hybrid Hash Join: Request to spill a partition with less than two buffers.

2015-10-15 Thread Flavio Pompermaier (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958474#comment-14958474
 ] 

Flavio Pompermaier commented on FLINK-2763:
---

I think this bug could be related to FLINK-2800 and a bad implementation of the KryoSerializer. Could it be that both the Output and Kryo objects are not thread-safe somehow?

> Bug in Hybrid Hash Join: Request to spill a partition with less than two 
> buffers.
> -
>
> Key: FLINK-2763
> URL: https://issues.apache.org/jira/browse/FLINK-2763
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Runtime
>Affects Versions: 0.10
>Reporter: Greg Hogan
>Assignee: Stephan Ewen
> Fix For: 0.10
>
>
> The following exception is thrown when running the example triangle listing 
> with an unmodified master build (4cadc3d6).
> {noformat}
> ./bin/flink run 
> ~/flink-examples/flink-java-examples/target/flink-java-examples-0.10-SNAPSHOT-EnumTrianglesOpt.jar
>  ~/rmat/undirected/s19_e8.ssv output
> {noformat}
> The only changes to {{flink-conf.yaml}} are {{taskmanager.numberOfTaskSlots: 
> 8}} and {{parallelism.default: 8}}.
> I have confirmed with input files 
> [s19_e8.ssv|https://drive.google.com/file/d/0B6TrSsnHj2HxR2lnMHR4amdyTnM/view?usp=sharing]
>  (40 MB) and 
> [s20_e8.ssv|https://drive.google.com/file/d/0B6TrSsnHj2HxNi1HbmptU29MTm8/view?usp=sharing]
>  (83 MB). On a second machine only the larger file caused the exception.
> {noformat}
> org.apache.flink.client.program.ProgramInvocationException: The program 
> execution failed: Job execution failed.
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:407)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:386)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:353)
>   at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:64)
>   at 
> org.apache.flink.examples.java.graph.EnumTrianglesOpt.main(EnumTrianglesOpt.java:125)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:434)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:350)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:290)
>   at 
> org.apache.flink.client.CliFrontend.executeProgramBlocking(CliFrontend.java:675)
>   at org.apache.flink.client.CliFrontend.run(CliFrontend.java:324)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:977)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1027)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:425)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:107)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:221)
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
>   at 

[jira] [Commented] (FLINK-2763) Bug in Hybrid Hash Join: Request to spill a partition with less than two buffers.

2015-10-15 Thread Stefano Bortoli (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958485#comment-14958485
 ] 

Stefano Bortoli commented on FLINK-2763:


Kryo serialization/deserialization is not thread-safe. In fact, it is a good idea to use a pool of Kryo objects in general. However, it would mean that the same KryoSerializer is used concurrently, which I am not sure is supposed to happen. However, the Twitter library used to instantiate Kryo() offers pooled serialization/deserialization methods embedding borrow() and return() executors. It should be sufficient to extend the default pool to implement the KryoSerializer initialization, and then use that one. I will give it a try and report.

> Bug in Hybrid Hash Join: Request to spill a partition with less than two 
> buffers.
> -
>
> Key: FLINK-2763
> URL: https://issues.apache.org/jira/browse/FLINK-2763
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Runtime
>Affects Versions: 0.10
>Reporter: Greg Hogan
>Assignee: Stephan Ewen
> Fix For: 0.10
>
>
> The following exception is thrown when running the example triangle listing 
> with an unmodified master build (4cadc3d6).
> {noformat}
> ./bin/flink run 
> ~/flink-examples/flink-java-examples/target/flink-java-examples-0.10-SNAPSHOT-EnumTrianglesOpt.jar
>  ~/rmat/undirected/s19_e8.ssv output
> {noformat}
> The only changes to {{flink-conf.yaml}} are {{taskmanager.numberOfTaskSlots: 
> 8}} and {{parallelism.default: 8}}.
> I have confirmed with input files 
> [s19_e8.ssv|https://drive.google.com/file/d/0B6TrSsnHj2HxR2lnMHR4amdyTnM/view?usp=sharing]
>  (40 MB) and 
> [s20_e8.ssv|https://drive.google.com/file/d/0B6TrSsnHj2HxNi1HbmptU29MTm8/view?usp=sharing]
>  (83 MB). On a second machine only the larger file caused the exception.
> {noformat}
> org.apache.flink.client.program.ProgramInvocationException: The program 
> execution failed: Job execution failed.
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:407)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:386)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:353)
>   at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:64)
>   at 
> org.apache.flink.examples.java.graph.EnumTrianglesOpt.main(EnumTrianglesOpt.java:125)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:434)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:350)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:290)
>   at 
> org.apache.flink.client.CliFrontend.executeProgramBlocking(CliFrontend.java:675)
>   at org.apache.flink.client.CliFrontend.run(CliFrontend.java:324)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:977)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1027)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:425)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>   at 
> 

[jira] [Comment Edited] (FLINK-2763) Bug in Hybrid Hash Join: Request to spill a partition with less than two buffers.

2015-10-15 Thread Stefano Bortoli (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958485#comment-14958485
 ] 

Stefano Bortoli edited comment on FLINK-2763 at 10/15/15 8:00 AM:
--

Kryo serialization/deserialization is not thread-safe. In fact, it is a good idea to use a pool of Kryo objects in general. However, it would mean that the same KryoSerializer is used concurrently, which I am not sure is supposed to happen. Fortunately, the Twitter library used to instantiate Kryo() offers pooled serialization/deserialization methods embedding borrow() and return() executors. It should be sufficient to extend the default pool to implement the KryoSerializer initialization, and then use that one. I will give it a try and report.


was (Author: stefano.bortoli):
Kryo serialization/deserialization is not thread-safe. In fact, it is a good idea to use a pool of Kryo objects in general. However, it would mean that the same KryoSerializer is used concurrently, which I am not sure is supposed to happen. However, the Twitter library used to instantiate Kryo() offers pooled serialization/deserialization methods embedding borrow() and return() executors. It should be sufficient to extend the default pool to implement the KryoSerializer initialization, and then use that one. I will give it a try and report.

> Bug in Hybrid Hash Join: Request to spill a partition with less than two 
> buffers.
> -
>
> Key: FLINK-2763
> URL: https://issues.apache.org/jira/browse/FLINK-2763
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Runtime
>Affects Versions: 0.10
>Reporter: Greg Hogan
>Assignee: Stephan Ewen
> Fix For: 0.10
>
>
> The following exception is thrown when running the example triangle listing 
> with an unmodified master build (4cadc3d6).
> {noformat}
> ./bin/flink run 
> ~/flink-examples/flink-java-examples/target/flink-java-examples-0.10-SNAPSHOT-EnumTrianglesOpt.jar
>  ~/rmat/undirected/s19_e8.ssv output
> {noformat}
> The only changes to {{flink-conf.yaml}} are {{taskmanager.numberOfTaskSlots: 
> 8}} and {{parallelism.default: 8}}.
> I have confirmed with input files 
> [s19_e8.ssv|https://drive.google.com/file/d/0B6TrSsnHj2HxR2lnMHR4amdyTnM/view?usp=sharing]
>  (40 MB) and 
> [s20_e8.ssv|https://drive.google.com/file/d/0B6TrSsnHj2HxNi1HbmptU29MTm8/view?usp=sharing]
>  (83 MB). On a second machine only the larger file caused the exception.
> {noformat}
> org.apache.flink.client.program.ProgramInvocationException: The program 
> execution failed: Job execution failed.
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:407)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:386)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:353)
>   at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:64)
>   at 
> org.apache.flink.examples.java.graph.EnumTrianglesOpt.main(EnumTrianglesOpt.java:125)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:434)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:350)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:290)
>   at 
> org.apache.flink.client.CliFrontend.executeProgramBlocking(CliFrontend.java:675)
>   at org.apache.flink.client.CliFrontend.run(CliFrontend.java:324)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:977)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1027)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:425)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> 

[jira] [Commented] (FLINK-2843) Add documentation for DataSet outer joins

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958568#comment-14958568
 ] 

ASF GitHub Bot commented on FLINK-2843:
---

Github user fhueske commented on the pull request:

https://github.com/apache/flink/pull/1248#issuecomment-148328187
  
Thanks for the review. Will merge this PR.


> Add documentation for DataSet outer joins
> -
>
> Key: FLINK-2843
> URL: https://issues.apache.org/jira/browse/FLINK-2843
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 0.10
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
> Fix For: 0.10
>
>
> Outer joins are not included in the documentation yet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2843] Add documentation for outer joins...

2015-10-15 Thread fhueske
Github user fhueske commented on the pull request:

https://github.com/apache/flink/pull/1248#issuecomment-148328187
  
Thanks for the review. Will merge this PR.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2843) Add documentation for DataSet outer joins

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958564#comment-14958564
 ] 

ASF GitHub Bot commented on FLINK-2843:
---

Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/1248#issuecomment-148328036
  
Thank you for documenting the feature. +1 to merge


> Add documentation for DataSet outer joins
> -
>
> Key: FLINK-2843
> URL: https://issues.apache.org/jira/browse/FLINK-2843
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 0.10
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
> Fix For: 0.10
>
>
> Outer joins are not included in the documentation yet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-2853) Apply JMH on MutableHashTablePerformanceBenchmark class.

2015-10-15 Thread GaoLun (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GaoLun updated FLINK-2853:
--
Summary: Apply JMH on MutableHashTablePerformanceBenchmark class.  (was: 
Apply JMH on Flink benchmarks)

> Apply JMH on MutableHashTablePerformanceBenchmark class.
> 
>
> Key: FLINK-2853
> URL: https://issues.apache.org/jira/browse/FLINK-2853
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: GaoLun
>Assignee: GaoLun
>Priority: Minor
>  Labels: easyfix
>
> JMH is a Java harness for building, running, and analysing 
> nano/micro/milli/macro benchmarks. Use JMH to replace the old micro-benchmark 
> method in order to get much more accurate results.
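
For reference, a minimal JMH skeleton of the kind this issue proposes (class name, state, and workload are illustrative placeholders, not the actual MutableHashTablePerformanceBenchmark):

{code}
import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class HashTableBench {

    private long[] keys;

    @Setup
    public void setUp() {
        // deterministic input so runs stay comparable
        Random rnd = new Random(42);
        keys = new long[1_000_000];
        for (int i = 0; i < keys.length; i++) {
            keys[i] = rnd.nextLong();
        }
    }

    @Benchmark
    public long probe() {
        // stand-in workload; the real benchmark would build and probe a MutableHashTable
        long sum = 0;
        for (long k : keys) {
            sum += k;
        }
        return sum;
    }
}
{code}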



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2774] [scala shell] Import some Java an...

2015-10-15 Thread fhueske
Github user fhueske commented on the pull request:

https://github.com/apache/flink/pull/1247#issuecomment-148325123
  
LGTM


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2774) Import Java API classes automatically in Flink's Scala shell

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958549#comment-14958549
 ] 

ASF GitHub Bot commented on FLINK-2774:
---

Github user fhueske commented on the pull request:

https://github.com/apache/flink/pull/1247#issuecomment-148325123
  
LGTM


> Import Java API classes automatically in Flink's Scala shell
> 
>
> Key: FLINK-2774
> URL: https://issues.apache.org/jira/browse/FLINK-2774
> Project: Flink
>  Issue Type: Improvement
>  Components: Scala Shell
>Reporter: Till Rohrmann
>Assignee: Chiwan Park
>Priority: Minor
>
> Flink's Scala API depends partially on Flink's Java API classes. For example, 
> the {{sortPartition}} method requires an {{Order}} enum value. Flink's Scala 
> shell, however, only imports the Scala API classes. Thus, if a user wants to 
> {{sortPartition}} in the Scala shell, he has to manually import the 
> {{org.apache.flink.api.common.operators.Order}} enumeration. In order to 
> improve the user experience, I propose to automatically import all Java API 
> classes in Flink's Scala shell.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2858) Cannot build Flink Scala 2.11 with IntelliJ

2015-10-15 Thread Robert Metzger (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958887#comment-14958887
 ] 

Robert Metzger commented on FLINK-2858:
---

The problem is that Flink is using the "scala-2.11" property to activate 
and deactivate the scala-2.10 and scala-2.11 profiles.
In flink-scala/pom.xml there is a scala-2.10 profile which activates itself 
even if the scala-2.11 profile is selected in IntelliJ.
I am not aware of a way of setting Maven properties in IntelliJ.

I think [~aalexandrov] did it like this because we need to remove a dependency 
(scala macros) for scala-2.11.
We can probably also activate the profile with the regular profile activation 
and activate the scala-2.10 profile with "activeByDefault". 

> Cannot build Flink Scala 2.11 with IntelliJ
> ---
>
> Key: FLINK-2858
> URL: https://issues.apache.org/jira/browse/FLINK-2858
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 0.10
>Reporter: Till Rohrmann
>
> If I activate the scala-2.11 profile from within IntelliJ (and thus 
> deactivate the scala-2.10 profile) in order to build Flink with Scala 2.11, 
> then Flink cannot be built. The problem is that some Scala macros cannot be 
> expanded because they were compiled with the wrong version (I assume 2.10).
> This makes debugging tests with Scala 2.11 in IntelliJ impossible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1984) Integrate Flink with Apache Mesos

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958984#comment-14958984
 ] 

ASF GitHub Bot commented on FLINK-1984:
---

Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/948#issuecomment-148403603
  
I tried running the code from this pull request again, this time using the 
`mesos-playa` vagrant image, and it does not work for me.
I was following your instructions.

When did you last test the changes?
My motivation to test this pull request goes down every time I test it. 
I've spun up a Mesos cluster on GCE two times, plus the VM now.
Maybe I'm doing it wrong; please let me know what I can do to get it to run.

CLI output:
```
vagrant@mesos:~/flink/build-target$ java 
-Dlog4j.configuration=file://`pwd`/conf/log4j.properties -Dlog.file=logs.log 
-cp lib/flink-dist-0.10-SNAPSHOT.jar 
org.apache.flink.mesos.scheduler.FlinkScheduler --confDir conf/
I1015 14:05:01.591161  9992 sched.cpp:157] Version: 0.22.1
2015-10-15 14:05:01,592:9991(0x7f67c700):ZOO_INFO@log_env@712: Client 
environment:zookeeper.version=zookeeper C client 3.4.5
2015-10-15 14:05:01,592:9991(0x7f67c700):ZOO_INFO@log_env@716: Client 
environment:host.name=mesos
2015-10-15 14:05:01,592:9991(0x7f67c700):ZOO_INFO@log_env@723: Client 
environment:os.name=Linux
2015-10-15 14:05:01,592:9991(0x7f67c700):ZOO_INFO@log_env@724: Client 
environment:os.arch=3.16.0-30-generic
2015-10-15 14:05:01,592:9991(0x7f67c700):ZOO_INFO@log_env@725: Client 
environment:os.version=#40~14.04.1-Ubuntu SMP Thu Jan 15 17:43:14 UTC 2015
2015-10-15 14:05:01,592:9991(0x7f67c700):ZOO_INFO@log_env@733: Client 
environment:user.name=vagrant
2015-10-15 14:05:01,592:9991(0x7f67c700):ZOO_INFO@log_env@741: Client 
environment:user.home=/home/vagrant
2015-10-15 14:05:01,592:9991(0x7f67c700):ZOO_INFO@log_env@753: Client 
environment:user.dir=/home/vagrant/flink/flink-dist/target/flink-0.10-SNAPSHOT-bin/flink-0.10-SNAPSHOT
2015-10-15 14:05:01,592:9991(0x7f67c700):ZOO_INFO@zookeeper_init@786: 
Initiating client connection, host=127.0.0.1:2181 sessionTimeout=1 
watcher=0x7f67dac33a60 sessionId=0 sessionPasswd=<null> context=0x7f67f0004470 
flags=0
2015-10-15 14:05:01,592:9991(0x7f67c6ffd700):ZOO_INFO@check_events@1703: 
initiated connection to server [127.0.0.1:2181]
Embedded server listening at
  http://127.0.0.1:40815
Press any key to stop.
2015-10-15 14:05:04,959:9991(0x7f67c6ffd700):ZOO_INFO@check_events@1750: 
session establishment complete on server [127.0.0.1:2181], 
sessionId=0x1506b6312fa000b, negotiated timeout=1
I1015 14:05:04.959841 10024 group.cpp:313] Group process 
(group(1)@127.0.1.1:57437) connected to ZooKeeper
I1015 14:05:04.959899 10024 group.cpp:790] Syncing group operations: queue 
size (joins, cancels, datas) = (0, 0, 0)
I1015 14:05:04.959928 10024 group.cpp:385] Trying to create path '/mesos' 
in ZooKeeper
I1015 14:05:05.204282 10024 detector.cpp:138] Detected a new leader: 
(id='2')
I1015 14:05:05.204489 10024 group.cpp:659] Trying to get 
'/mesos/info_02' in ZooKeeper
I1015 14:05:05.303072 10024 detector.cpp:452] A new leading master 
(UPID=master@127.0.1.1:5050) is detected
I1015 14:05:05.303467 10024 sched.cpp:254] New master detected at 
master@127.0.1.1:5050
I1015 14:05:05.303890 10024 sched.cpp:264] No credentials provided. 
Attempting to register without authentication
I1015 14:05:05.851562 10024 sched.cpp:448] Framework registered with 
20151015-120419-16842879-5050-1244-
```

log file content
```
14:04:54,564 WARN  org.apache.hadoop.util.NativeCodeLoader  
 - Unable to load native-hadoop library for your platform... using 
builtin-java classes where applicable
14:04:55,763 INFO  org.apache.flink.mesos.scheduler.FlinkScheduler$ 
 - 

14:04:55,763 INFO  org.apache.flink.mesos.scheduler.FlinkScheduler$ 
 -  Starting JobManager (Version: 0.10-SNAPSHOT, Rev:d905af0, 
Date:06.10.2015 @ 19:37:22 UTC)
14:04:55,763 INFO  org.apache.flink.mesos.scheduler.FlinkScheduler$ 
 -  Current user: vagrant
14:04:55,763 INFO  org.apache.flink.mesos.scheduler.FlinkScheduler$ 
 -  JVM: OpenJDK 64-Bit Server VM - Oracle Corporation - 1.7/24.79-b02
14:04:55,763 INFO  org.apache.flink.mesos.scheduler.FlinkScheduler$ 
 -  Maximum heap size: 592 MiBytes
14:04:55,763 INFO  org.apache.flink.mesos.scheduler.FlinkScheduler$ 
 -  JAVA_HOME: (not set)
14:04:55,823 INFO  org.apache.flink.mesos.scheduler.FlinkScheduler$ 
 -  Hadoop version: 2.3.0
14:04:55,824 INFO

[GitHub] flink pull request: [FLINK-1984] Integrate Flink with Apache Mesos

2015-10-15 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/948#issuecomment-148403603
  
I tried running the code from this pull request again, this time using the 
`mesos-playa` vagrant image, and it does not work for me.
I was following your instructions.

When did you last test the changes?
My motivation to test this pull request goes down every time I test it. 
I've spun up a Mesos cluster on GCE two times, plus the VM now.
Maybe I'm doing it wrong; please let me know what I can do to get it to run.

CLI output:
```
vagrant@mesos:~/flink/build-target$ java 
-Dlog4j.configuration=file://`pwd`/conf/log4j.properties -Dlog.file=logs.log 
-cp lib/flink-dist-0.10-SNAPSHOT.jar 
org.apache.flink.mesos.scheduler.FlinkScheduler --confDir conf/
I1015 14:05:01.591161  9992 sched.cpp:157] Version: 0.22.1
2015-10-15 14:05:01,592:9991(0x7f67c700):ZOO_INFO@log_env@712: Client 
environment:zookeeper.version=zookeeper C client 3.4.5
2015-10-15 14:05:01,592:9991(0x7f67c700):ZOO_INFO@log_env@716: Client 
environment:host.name=mesos
2015-10-15 14:05:01,592:9991(0x7f67c700):ZOO_INFO@log_env@723: Client 
environment:os.name=Linux
2015-10-15 14:05:01,592:9991(0x7f67c700):ZOO_INFO@log_env@724: Client 
environment:os.arch=3.16.0-30-generic
2015-10-15 14:05:01,592:9991(0x7f67c700):ZOO_INFO@log_env@725: Client 
environment:os.version=#40~14.04.1-Ubuntu SMP Thu Jan 15 17:43:14 UTC 2015
2015-10-15 14:05:01,592:9991(0x7f67c700):ZOO_INFO@log_env@733: Client 
environment:user.name=vagrant
2015-10-15 14:05:01,592:9991(0x7f67c700):ZOO_INFO@log_env@741: Client 
environment:user.home=/home/vagrant
2015-10-15 14:05:01,592:9991(0x7f67c700):ZOO_INFO@log_env@753: Client 
environment:user.dir=/home/vagrant/flink/flink-dist/target/flink-0.10-SNAPSHOT-bin/flink-0.10-SNAPSHOT
2015-10-15 14:05:01,592:9991(0x7f67c700):ZOO_INFO@zookeeper_init@786: 
Initiating client connection, host=127.0.0.1:2181 sessionTimeout=1 
watcher=0x7f67dac33a60 sessionId=0 sessionPasswd=<null> context=0x7f67f0004470 
flags=0
2015-10-15 14:05:01,592:9991(0x7f67c6ffd700):ZOO_INFO@check_events@1703: 
initiated connection to server [127.0.0.1:2181]
Embedded server listening at
  http://127.0.0.1:40815
Press any key to stop.
2015-10-15 14:05:04,959:9991(0x7f67c6ffd700):ZOO_INFO@check_events@1750: 
session establishment complete on server [127.0.0.1:2181], 
sessionId=0x1506b6312fa000b, negotiated timeout=1
I1015 14:05:04.959841 10024 group.cpp:313] Group process 
(group(1)@127.0.1.1:57437) connected to ZooKeeper
I1015 14:05:04.959899 10024 group.cpp:790] Syncing group operations: queue 
size (joins, cancels, datas) = (0, 0, 0)
I1015 14:05:04.959928 10024 group.cpp:385] Trying to create path '/mesos' 
in ZooKeeper
I1015 14:05:05.204282 10024 detector.cpp:138] Detected a new leader: 
(id='2')
I1015 14:05:05.204489 10024 group.cpp:659] Trying to get 
'/mesos/info_02' in ZooKeeper
I1015 14:05:05.303072 10024 detector.cpp:452] A new leading master 
(UPID=master@127.0.1.1:5050) is detected
I1015 14:05:05.303467 10024 sched.cpp:254] New master detected at 
master@127.0.1.1:5050
I1015 14:05:05.303890 10024 sched.cpp:264] No credentials provided. 
Attempting to register without authentication
I1015 14:05:05.851562 10024 sched.cpp:448] Framework registered with 
20151015-120419-16842879-5050-1244-
```

log file content
```
14:04:54,564 WARN  org.apache.hadoop.util.NativeCodeLoader  
 - Unable to load native-hadoop library for your platform... using 
builtin-java classes where applicable
14:04:55,763 INFO  org.apache.flink.mesos.scheduler.FlinkScheduler$ 
 - 

14:04:55,763 INFO  org.apache.flink.mesos.scheduler.FlinkScheduler$ 
 -  Starting JobManager (Version: 0.10-SNAPSHOT, Rev:d905af0, 
Date:06.10.2015 @ 19:37:22 UTC)
14:04:55,763 INFO  org.apache.flink.mesos.scheduler.FlinkScheduler$ 
 -  Current user: vagrant
14:04:55,763 INFO  org.apache.flink.mesos.scheduler.FlinkScheduler$ 
 -  JVM: OpenJDK 64-Bit Server VM - Oracle Corporation - 1.7/24.79-b02
14:04:55,763 INFO  org.apache.flink.mesos.scheduler.FlinkScheduler$ 
 -  Maximum heap size: 592 MiBytes
14:04:55,763 INFO  org.apache.flink.mesos.scheduler.FlinkScheduler$ 
 -  JAVA_HOME: (not set)
14:04:55,823 INFO  org.apache.flink.mesos.scheduler.FlinkScheduler$ 
 -  Hadoop version: 2.3.0
14:04:55,824 INFO  org.apache.flink.mesos.scheduler.FlinkScheduler$ 
 -  JVM Options:
14:04:55,824 INFO  org.apache.flink.mesos.scheduler.FlinkScheduler$ 
 - 
-Dlog4j.configuration=file:///home/vagrant/flink/build-target/conf/log4j.properties
14:04:55,824 INFO

[GitHub] flink pull request: [FLINK-2855] [gelly] Add documentation for the...

2015-10-15 Thread vasia
GitHub user vasia opened a pull request:

https://github.com/apache/flink/pull/1258

[FLINK-2855] [gelly] Add documentation for the Gelly library methods

This PR adds detailed documentation about the available Gelly library 
methods and improves the javadocs of the library methods constructors.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vasia/flink lib-docs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1258.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1258


commit 95e095507f72828106688c37e6704dd9a7344866
Author: vasia 
Date:   2015-10-15T14:49:19Z

[FLINK-2855] [gelly] Add documentation for the Gelly library algorithms and 
improved javadocs for the library constructors.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2107) Implement Hash Outer Join algorithm

2015-10-15 Thread Chiwan Park (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959066#comment-14959066
 ] 

Chiwan Park commented on FLINK-2107:


[~fhueske] Yes, you can take this issue. :-) Sorry for the delay.

> Implement Hash Outer Join algorithm
> ---
>
> Key: FLINK-2107
> URL: https://issues.apache.org/jira/browse/FLINK-2107
> Project: Flink
>  Issue Type: New Feature
>  Components: Local Runtime
>Reporter: Fabian Hueske
>Assignee: Chiwan Park
>Priority: Minor
> Fix For: pre-apache
>
>
> Flink does not natively support outer joins at the moment.
> This issue proposes to implement a hash outer join algorithm that can cover 
> left and right outer joins.
> The implementation can be based on the regular hash join iterators (for 
> example `ReusingBuildFirstHashMatchIterator` and 
> `NonReusingBuildFirstHashMatchIterator`, see also `MatchDriver` class)
> The Reusing and NonReusing variants differ in whether object instances are 
> reused or new objects are created. I would start with the NonReusing variant 
> which is safer from a user's point of view and should also be easier to 
> implement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2844) Remove old web interface and default to the new one

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959047#comment-14959047
 ] 

ASF GitHub Bot commented on FLINK-2844:
---

Github user mxm commented on the pull request:

https://github.com/apache/flink/pull/1246#issuecomment-148418322
  
Build passes (`ScalaShellLocalStartupITCase` has some hiccups).


> Remove old web interface and default to the new one
> ---
>
> Key: FLINK-2844
> URL: https://issues.apache.org/jira/browse/FLINK-2844
> Project: Flink
>  Issue Type: New Feature
>  Components: JobManager
>Affects Versions: 0.10
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
> Fix For: 0.10
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2844][jobmanager] remove old web interf...

2015-10-15 Thread mxm
Github user mxm commented on the pull request:

https://github.com/apache/flink/pull/1246#issuecomment-148418322
  
Build passes (`ScalaShellLocalStartupITCase` has some hiccups).


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2107) Implement Hash Outer Join algorithm

2015-10-15 Thread Fabian Hueske (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959088#comment-14959088
 ] 

Fabian Hueske commented on FLINK-2107:
--

No problem. :-)
I will not solve all aspects, just the "easy" special cases. 
For the remaining cases, we need a special hash table implementation that 
allows reading all records from the hash table that have not been accessed 
during the probe phase.

If you are interested, you can continue to work on these cases.
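
For illustration, the "easy" case in pseudo-Java; the iterator, table, and collector names are placeholders, not the actual Flink runtime API:

{code}
// outer side = probe side: every probe record is emitted,
// padded with null when the build side has no match
while ((probeRecord = probeSide.next()) != null) {
    Iterator<BuildT> matches = hashTable.getMatchesFor(probeRecord);
    if (!matches.hasNext()) {
        collector.collect(joinFunction.join(probeRecord, null));
    } else {
        while (matches.hasNext()) {
            collector.collect(joinFunction.join(probeRecord, matches.next()));
        }
    }
}
// the hard case (outer side = build side) additionally needs the hash table
// to hand out all build-side records that were never matched during probing
{code}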

> Implement Hash Outer Join algorithm
> ---
>
> Key: FLINK-2107
> URL: https://issues.apache.org/jira/browse/FLINK-2107
> Project: Flink
>  Issue Type: New Feature
>  Components: Local Runtime
>Reporter: Fabian Hueske
>Assignee: Chiwan Park
>Priority: Minor
> Fix For: pre-apache
>
>
> Flink does not natively support outer joins at the moment.
> This issue proposes to implement a hash outer join algorithm that can cover 
> left and right outer joins.
> The implementation can be based on the regular hash join iterators (for 
> example `ReusingBuildFirstHashMatchIterator` and 
> `NonReusingBuildFirstHashMatchIterator`, see also `MatchDriver` class)
> The Reusing and NonReusing variants differ in whether object instances are 
> reused or new objects are created. I would start with the NonReusing variant 
> which is safer from a user's point of view and should also be easier to 
> implement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (FLINK-2800) kryo serialization problem

2015-10-15 Thread Stefano Bortoli (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958995#comment-14958995
 ] 

Stefano Bortoli edited comment on FLINK-2800 at 10/15/15 3:42 PM:
--

Hi guys. I have played around with the KryoSerializer class. I have upgraded to Kryo 3.0.2, which offers a very convenient KryoPool with a customizable KryoFactory, and then I moved the instantiation of all the fields shared by the serialization and deserialization methods into the methods themselves. I have also registered the classes (sometimes Kryo does not serialize the classes in the same order if you don't register them, which can cause problems). I still have to evaluate whether all of these changes are necessary, but with the KryoPool, registering the classes, and moving the Input and Output fields into the methods, the exceptions above were solved. I will keep on investigating.

Anyway, this shows that it was actually a race condition, perhaps because the KryoSerializer is not cloned as expected along the execution chain.

[EDIT] I have also set the default instantiator strategy to support serialization of objects that do not have a no-argument constructor:

{code}
Kryo kryo = new ScalaKryoInstantiator().newKryo();
final DefaultInstantiatorStrategy instantiatorStrategy = new DefaultInstantiatorStrategy();
instantiatorStrategy.setFallbackInstantiatorStrategy(new StdInstantiatorStrategy());
kryo.setInstantiatorStrategy(instantiatorStrategy);
{code}


was (Author: stefano.bortoli):
Hi guys. I have played around with the KryoSerializer class. I have upgraded to Kryo 3.0.2, which offers a very convenient KryoPool with a customizable KryoFactory, and then I moved the instantiation of all the fields shared by the serialization and deserialization methods into the methods themselves. I have also registered the classes (sometimes Kryo does not serialize the classes in the same order if you don't register them, which can cause problems). I still have to evaluate whether all of these changes are necessary, but with the KryoPool, registering the classes, and moving the Input and Output fields into the methods, the exceptions above were solved. I will keep on investigating.

Anyway, this shows that it was actually a race condition, perhaps because the KryoSerializer is not cloned as expected along the execution chain.

> kryo serialization problem
> --
>
> Key: FLINK-2800
> URL: https://issues.apache.org/jira/browse/FLINK-2800
> Project: Flink
>  Issue Type: Bug
>  Components: Type Serialization System
>Affects Versions: 0.10
> Environment: linux ubuntu 12.04 LTS, Java 7
>Reporter: Stefano Bortoli
>
> Performing a cross of two datasets of POJOs, I got the exception below. 
> The first time I ran the process, there was no problem. When I ran it the 
> second time, I got the exception. My guess is that it could be a race 
> condition related to the reuse of the Kryo serializer object. However, it 
> could also be "a bug where type registrations are not properly forwarded to 
> all Serializers", as suggested by Stephan.
> 
> 2015-10-01 18:18:21 INFO  JobClient:161 - 10/01/2015 18:18:21 Cross(Cross at 
> main(FlinkMongoHadoop2LinkPOI2CDA.java:160))(3/4) switched to FAILED 
> com.esotericsoftware.kryo.KryoException: Encountered unregistered class ID: 
> 114
>   at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:119)
>   at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:641)
>   at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:752)
>   at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:210)
>   at 
> org.apache.flink.api.java.typeutils.runtime.TupleSerializer.deserialize(TupleSerializer.java:127)
>   at 
> org.apache.flink.api.java.typeutils.runtime.TupleSerializer.deserialize(TupleSerializer.java:30)
>   at 
> org.apache.flink.runtime.operators.resettable.AbstractBlockResettableIterator.getNextRecord(AbstractBlockResettableIterator.java:180)
>   at 
> org.apache.flink.runtime.operators.resettable.BlockResettableMutableObjectIterator.next(BlockResettableMutableObjectIterator.java:111)
>   at 
> org.apache.flink.runtime.operators.CrossDriver.runBlockedOuterSecond(CrossDriver.java:309)
>   at 
> org.apache.flink.runtime.operators.CrossDriver.run(CrossDriver.java:162)
>   at 
> org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:489)
>   at 
> org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:354)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:581)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by 

[jira] [Commented] (FLINK-2800) kryo serialization problem

2015-10-15 Thread Stefano Bortoli (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959143#comment-14959143
 ] 

Stefano Bortoli commented on FLINK-2800:


Ok, I have run a couple more tests, and the process completed without the KryoPool. However, when I reverted back to the global variable for Input and Output, I could reproduce the error. 

{quote}
10/15/2015 18:05:59 Job execution switched to status FAILED.
2015-10-15 18:05:59 INFO  JobManager:137 - Status of job 
1b05e39a7ea019d0a57702eb2a06d64a (Flink Java Job at Thu Oct 15 18:03:33 CEST 
2015) changed to FAILED.
com.esotericsoftware.kryo.KryoException: Encountered unregistered class ID: 115
at 
com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:137)
at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:667)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:778)
at 
com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:153)
at 
com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:39)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:787)
at 
com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:161)
at 
com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:39)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:787)
at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:251)
at 
org.apache.flink.api.java.typeutils.runtime.TupleSerializer.deserialize(TupleSerializer.java:127)
at 
org.apache.flink.api.java.typeutils.runtime.TupleSerializer.deserialize(TupleSerializer.java:1)
at 
org.apache.flink.runtime.operators.resettable.AbstractBlockResettableIterator.getNextRecord(AbstractBlockResettableIterator.java:180)
at 
org.apache.flink.runtime.operators.resettable.BlockResettableMutableObjectIterator.next(BlockResettableMutableObjectIterator.java:111)
at 
org.apache.flink.runtime.operators.CrossDriver.runBlockedOuterSecond(CrossDriver.java:309)
at 
org.apache.flink.runtime.operators.CrossDriver.run(CrossDriver.java:162)
at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:489)
at 
org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:354)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
at java.lang.Thread.run(Thread.java:745)
2015-10-15 18:05:59 INFO  JobClient:200 - Job execution failed
{quote}

This version of the method does not work:
{code}
	@SuppressWarnings("unchecked")
	@Override
	public T deserialize(DataInputView source) throws IOException {
		if (source != previousIn) {
			previousIn = source;
			DataInputViewStream inputStream = new DataInputViewStream(source);
			input = new NoFetchingInput(inputStream);
		}
//		DataInputViewStream inputStream = new DataInputViewStream(source);
//		Input input = new NoFetchingInput(inputStream);
		checkKryoPoolInitialization();
//		Kryo kryo = kryoPool.borrow();
		try {
			return (T) kryo.readClassAndObject(input);
		} catch (KryoException ke) {
			Throwable cause = ke.getCause();

			if (cause instanceof EOFException) {
				throw (EOFException) cause;
			} else {
				throw ke;
			}
		} finally {
//			kryoPool.release(kryo);
		}
	}
{code}

whereas this one worked:
{code}
	@SuppressWarnings("unchecked")
	@Override
	public T deserialize(DataInputView source) throws IOException {
		if (source != previousIn) {
			previousIn = source;
//			DataInputViewStream inputStream = new DataInputViewStream(source);
//			input = new NoFetchingInput(inputStream);
		}
		DataInputViewStream inputStream = new DataInputViewStream(source);
		Input input = new NoFetchingInput(inputStream);
		checkKryoPoolInitialization();
//		Kryo kryo = kryoPool.borrow();
		try {
			return (T) kryo.readClassAndObject(input);
		} catch (KryoException ke) {
			Throwable cause = ke.getCause();

			if (cause instanceof EOFException) {
				throw (EOFException) cause;
			} else {
				throw ke;
			}
		} finally {

[jira] [Commented] (FLINK-2855) Add a documentation section for Gelly library methods

2015-10-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959033#comment-14959033
 ] 

ASF GitHub Bot commented on FLINK-2855:
---

GitHub user vasia opened a pull request:

https://github.com/apache/flink/pull/1258

[FLINK-2855] [gelly] Add documentation for the Gelly library methods

This PR adds detailed documentation about the available Gelly library 
methods and improves the javadocs of the library methods constructors.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vasia/flink lib-docs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1258.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1258


commit 95e095507f72828106688c37e6704dd9a7344866
Author: vasia 
Date:   2015-10-15T14:49:19Z

[FLINK-2855] [gelly] Add documentation for the Gelly library algorithms and 
improved javadocs for the library constructors.




> Add a documentation section for Gelly library methods
> -
>
> Key: FLINK-2855
> URL: https://issues.apache.org/jira/browse/FLINK-2855
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Gelly
>Reporter: Vasia Kalavri
>Assignee: Vasia Kalavri
>  Labels: easyfix, newbie
>
> We should add a separate documentation section for the Gelly library methods. 
> For each method, we should have an overview of the used algorithm, 
> implementation details and usage information.
> You can find an example of what these should look like 
> [here|http://gellyschool.com/library.html].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2107) Implement Hash Outer Join algorithm

2015-10-15 Thread Fabian Hueske (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959060#comment-14959060
 ] 

Fabian Hueske commented on FLINK-2107:
--

Hi @chiwan, is it OK if I take over some parts of this issue?
I would like to get the "simple" hash outer join case into the next release 
because it would allow me to remove one limitation from the Cascading on Flink 
adapter.
This would be the case where the left or right outer side is the probe side 
of the hash table.

Thanks, Fabian


> Implement Hash Outer Join algorithm
> ---
>
> Key: FLINK-2107
> URL: https://issues.apache.org/jira/browse/FLINK-2107
> Project: Flink
>  Issue Type: New Feature
>  Components: Local Runtime
>Reporter: Fabian Hueske
>Assignee: Chiwan Park
>Priority: Minor
> Fix For: pre-apache
>
>
> Flink does not natively support outer joins at the moment.
> This issue proposes to implement a hash outer join algorithm that can cover 
> left and right outer joins.
> The implementation can be based on the regular hash join iterators (for 
> example `ReusingBuildFirstHashMatchIterator` and 
> `NonReusingBuildFirstHashMatchIterator`, see also `MatchDriver` class)
> The Reusing and NonReusing variants differ in whether object instances are 
> reused or new objects are created. I would start with the NonReusing variant 
> which is safer from a user's point of view and should also be easier to 
> implement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2856) Introduce flink.version property into quickstart archetype

2015-10-15 Thread Ufuk Celebi (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959130#comment-14959130
 ] 

Ufuk Celebi commented on FLINK-2856:


Good addition. I used to do this with my quickstarts as well.

> Introduce flink.version property into quickstart archetype
> --
>
> Key: FLINK-2856
> URL: https://issues.apache.org/jira/browse/FLINK-2856
> Project: Flink
>  Issue Type: Improvement
>  Components: Quickstarts
>Reporter: Robert Metzger
>Assignee: Robert Metzger
> Fix For: 0.10
>
>
> With the quickstarts we're currently creating, users have to manually change 
> all the dependencies if they're changing the Flink version.
> I propose to introduce a property for setting the Flink version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2151) Provide interface to distinguish close() calls in error and regular cases

2015-10-15 Thread Wilmer DAZA (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958913#comment-14958913
 ] 

Wilmer DAZA commented on FLINK-2151:


Hello guys,

I was wondering if there is already a plan to integrate Flink with Cassandra 
(read/write tables, send queries in CQL, etc.)?

I am more interested in writing to Cassandra tables from the Stream API of 
Flink, but I guess that an integration would require not just writing from 
DataStream but a complete connector with many other common functionalities.

I am making this comment here because I'm guessing that some work on such a 
connector may already have started.

Thank you

> Provide interface to distinguish close() calls in error and regular cases
> -
>
> Key: FLINK-2151
> URL: https://issues.apache.org/jira/browse/FLINK-2151
> Project: Flink
>  Issue Type: Improvement
>  Components: Local Runtime
>Affects Versions: 0.9
>Reporter: Robert Metzger
>
> I was talking to somebody who is interested in contributing a 
> {{flink-cassandra}} connector.
> The connector will create Cassandra files locally (on the TaskManagers) and 
> bulk-load them in the {{close()}} method.
> For the user functions it is currently not possible to find out whether the 
> function is closed due to an error or a regular end.
> The simplest approach would be passing an additional argument (enum or 
> boolean) into the close() method, indicating the type of closing.
> But that would break all existing code.
> Another approach would be to add an interface with such an extended close 
> method, e.g. {{RichCloseFunction}}.
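
A minimal sketch of the second approach; only the name {{RichCloseFunction}} comes from the issue, the enum and method shape are assumptions:

{code}
public interface RichCloseFunction {

    enum CloseReason { REGULAR, ERROR }

    // called once on shutdown; the reason tells the function whether to
    // bulk-load its local files (REGULAR) or just clean them up (ERROR)
    void close(CloseReason reason) throws Exception;
}
{code}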



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2800) kryo serialization problem

2015-10-15 Thread Stefano Bortoli (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958995#comment-14958995
 ] 

Stefano Bortoli commented on FLINK-2800:


Hi guys. I have played around with the KryoSerializer class. I have upgraded to 
Kryo 3.0.2, which offers a very convenient KryoPool with a customizable 
KryoFactory, and then I moved the instantiation of all the fields that were 
shared by the serialization and deserialization methods into the methods 
themselves. I have also registered the classes (if you don't register them, 
Kryo sometimes does not serialize the classes in the same order, which can 
cause problems). I still have to evaluate whether all of these changes are 
necessary, but the KryoPool, registering the classes, and moving the Input and 
Output fields into the methods solved the exceptions above. I will keep on 
investigating.

Anyway, this shows that it was actually a race condition, perhaps because the 
KryoSerializer is not cloned as expected along the execution chain.
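
For illustration, a minimal sketch of the KryoPool pattern described above, against the Kryo 3.x pooling API (the registered class below is a placeholder, not the actual Flink registration):

{code}
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.pool.KryoFactory;
import com.esotericsoftware.kryo.pool.KryoPool;

public class PooledKryoExample {

	public static void main(String[] args) {
		// The factory creates fully configured Kryo instances; explicit
		// registration keeps class IDs stable across instances.
		KryoFactory factory = new KryoFactory() {
			@Override
			public Kryo create() {
				Kryo kryo = new Kryo();
				kryo.register(java.util.ArrayList.class); // placeholder registration
				return kryo;
			}
		};
		KryoPool pool = new KryoPool.Builder(factory).softReferences().build();

		// Borrow per (de)serialization call instead of sharing one instance,
		// avoiding the suspected race condition.
		Kryo kryo = pool.borrow();
		try {
			// ... serialize / deserialize with a fresh Input/Output per call ...
		} finally {
			pool.release(kryo);
		}
	}
}
{code}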

> kryo serialization problem
> --
>
> Key: FLINK-2800
> URL: https://issues.apache.org/jira/browse/FLINK-2800
> Project: Flink
>  Issue Type: Bug
>  Components: Type Serialization System
>Affects Versions: 0.10
> Environment: linux ubuntu 12.04 LTS, Java 7
>Reporter: Stefano Bortoli
>
> Performing a cross of two datasets of POJOs I got the exception below. 
> The first time I ran the process, there was no problem. When I ran it the 
> second time, I got the exception. My guess is that it could be a race 
> condition related to the reuse of the Kryo serializer object. However, it 
> could also be "a bug where type registrations are not properly forwarded to 
> all Serializers", as suggested by Stephan.
> 
> 2015-10-01 18:18:21 INFO  JobClient:161 - 10/01/2015 18:18:21 Cross(Cross at 
> main(FlinkMongoHadoop2LinkPOI2CDA.java:160))(3/4) switched to FAILED 
> com.esotericsoftware.kryo.KryoException: Encountered unregistered class ID: 
> 114
>   at 
> com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:119)
>   at com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:641)
>   at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:752)
>   at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:210)
>   at 
> org.apache.flink.api.java.typeutils.runtime.TupleSerializer.deserialize(TupleSerializer.java:127)
>   at 
> org.apache.flink.api.java.typeutils.runtime.TupleSerializer.deserialize(TupleSerializer.java:30)
>   at 
> org.apache.flink.runtime.operators.resettable.AbstractBlockResettableIterator.getNextRecord(AbstractBlockResettableIterator.java:180)
>   at 
> org.apache.flink.runtime.operators.resettable.BlockResettableMutableObjectIterator.next(BlockResettableMutableObjectIterator.java:111)
>   at 
> org.apache.flink.runtime.operators.CrossDriver.runBlockedOuterSecond(CrossDriver.java:309)
>   at 
> org.apache.flink.runtime.operators.CrossDriver.run(CrossDriver.java:162)
>   at 
> org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:489)
>   at 
> org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:354)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:581)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-2858) Cannot build Flink Scala 2.11 with IntelliJ

2015-10-15 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-2858:
--
Component/s: Build System

> Cannot build Flink Scala 2.11 with IntelliJ
> ---
>
> Key: FLINK-2858
> URL: https://issues.apache.org/jira/browse/FLINK-2858
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 0.10
>Reporter: Till Rohrmann
>
> If I activate the scala-2.11 profile from within IntelliJ (and thus 
> deactivate the scala-2.10 profile) in order to build Flink with Scala 2.11, 
> then Flink cannot be built. The problem is that some Scala macros cannot be 
> expanded because they were compiled with the wrong version (I assume 2.10).
> This makes debugging tests with Scala 2.11 in IntelliJ impossible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2522) Integrate Streaming Api into Flink-scala-shell

2015-10-15 Thread Nikolaas Steenbergen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959016#comment-14959016
 ] 

Nikolaas Steenbergen commented on FLINK-2522:
-

I'm working on writing a test case for the Scala shell streaming support, where 
the shell should create a local streaming environment by itself.
However, if I run this test it gives me:
{code}
org.apache.flink.api.common.InvalidProgramException: The RemoteEnvironment 
cannot be used when submitting a program through a client, or running in a 
TestEnvironment context.
at 
org.apache.flink.streaming.api.environment.RemoteStreamEnvironment.<init>(RemoteStreamEnvironment.java:132)
at 
org.apache.flink.streaming.api.environment.RemoteStreamEnvironment.<init>(RemoteStreamEnvironment.java:103)
at 
org.apache.flink.streaming.api.environment.RemoteStreamEnvironment.<init>(RemoteStreamEnvironment.java:80)
...
{code}

Am I forced to just skip the test for the local streaming Scala shell, 
or is there a workaround for this?
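
One possible direction, sketched under the assumption that the test can construct its own environment (this bypasses the client-submission path that raises the exception; whether it fits the shell test harness is unclear):

{code}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Sketch: create a local streaming environment directly instead of letting
// the shell go through RemoteStreamEnvironment.
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
{code}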

> Integrate Streaming Api into Flink-scala-shell
> --
>
> Key: FLINK-2522
> URL: https://issues.apache.org/jira/browse/FLINK-2522
> Project: Flink
>  Issue Type: Improvement
>  Components: Scala Shell
>Reporter: Nikolaas Steenbergen
>Assignee: Nikolaas Steenbergen
>
> startup scala shell with "-s" or "-streaming" flag to use the streaming api



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (FLINK-2858) Cannot build Flink Scala 2.11 with IntelliJ

2015-10-15 Thread Robert Metzger (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958887#comment-14958887
 ] 

Robert Metzger edited comment on FLINK-2858 at 10/15/15 1:26 PM:
-

tl;dr: This issue is due to a missing feature in IntelliJ (mvn itself is 
powerful enough to work properly)

The problem is that Flink is using the "scala-2.11" property to activate 
and deactivate the scala-2.10 and scala-2.11 profiles.
In flink-scala/pom.xml there is a scala-2.10 profile which activates itself 
even if the scala-2.11 profile is selected in IntelliJ.
I am not aware of a way of setting maven properties in IntelliJ.

I think [~aalexandrov] did it like this because we need to remove a dependency 
(scala macros) for scala-2.11.
We can probably also activate the profile with the regular profile activation 
and activate the scala-2.10 profile with "activeByDefault". 
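
For illustration, a sketch of the two activation mechanisms mentioned above (profile contents omitted; this mirrors the mechanism, not the actual Flink poms):

{code}
<profiles>
  <profile>
    <id>scala-2.10</id>
    <activation>
      <!-- alternative: make 2.10 the default unless another profile is chosen -->
      <activeByDefault>true</activeByDefault>
    </activation>
  </profile>
  <profile>
    <id>scala-2.11</id>
    <activation>
      <!-- activated via -Dscala-2.11 on the command line; IntelliJ cannot set this property -->
      <property>
        <name>scala-2.11</name>
      </property>
    </activation>
  </profile>
</profiles>
{code}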


was (Author: rmetzger):
The problem is that Flink is using the "scala-2.11" property to activate 
and deactivate the scala-2.10 and scala-2.11 profiles.
In flink-scala/pom.xml there is a scala-2.10 profile which activates itself 
even if the scala-2.11 profile is selected in IntelliJ.
I am not aware of a way of setting maven properties in IntelliJ.

I think [~aalexandrov] did it like this because we need to remove a dependency 
(scala macros) for scala-2.11.
We can probably also activate the profile with the regular profile activation 
and activate the scala-2.10 profile with "activeByDefault". 

> Cannot build Flink Scala 2.11 with IntelliJ
> ---
>
> Key: FLINK-2858
> URL: https://issues.apache.org/jira/browse/FLINK-2858
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 0.10
>Reporter: Till Rohrmann
>
> If I activate the scala-2.11 profile from within IntelliJ (and thus 
> deactivate the scala-2.10 profile) in order to build Flink with Scala 2.11, 
> then Flink cannot be built. The problem is that some Scala macros cannot be 
> expanded because they were compiled with the wrong version (I assume 2.10).
> This makes debugging tests with Scala 2.11 in IntelliJ impossible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-2856) Introduce flink.version property into quickstart archetype

2015-10-15 Thread Robert Metzger (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger resolved FLINK-2856.
---
   Resolution: Fixed
Fix Version/s: 0.10

Resolved in http://git-wip-us.apache.org/repos/asf/flink/commit/6cb0fb51

> Introduce flink.version property into quickstart archetype
> --
>
> Key: FLINK-2856
> URL: https://issues.apache.org/jira/browse/FLINK-2856
> Project: Flink
>  Issue Type: Improvement
>  Components: Quickstarts
>Reporter: Robert Metzger
>Assignee: Robert Metzger
> Fix For: 0.10
>
>
> With the quickstarts we're currently creating, users have to manually change 
> all the dependencies if they're changing the flink version.
> I propose to introduce a property for setting the flink version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-2858) Cannot build Flink Scala 2.11 with IntelliJ

2015-10-15 Thread Till Rohrmann (JIRA)
Till Rohrmann created FLINK-2858:


 Summary: Cannot build Flink Scala 2.11 with IntelliJ
 Key: FLINK-2858
 URL: https://issues.apache.org/jira/browse/FLINK-2858
 Project: Flink
  Issue Type: Bug
Affects Versions: 0.10
Reporter: Till Rohrmann


If I activate the scala-2.11 profile from within IntelliJ (and thus deactivate 
the scala-2.10 profile) in order to build Flink with Scala 2.11, then Flink 
cannot be built. The problem is that some Scala macros cannot be expanded 
because they were compiled with the wrong version (I assume 2.10).

This makes debugging tests with Scala 2.11 in IntelliJ impossible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2692) Untangle CsvInputFormat into PojoTypeCsvInputFormat and TupleTypeCsvInputFormat

2015-10-15 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959022#comment-14959022
 ] 

Chesnay Schepler commented on FLINK-2692:
-

Is there any reason that prevents the Scala API from using the CsvInputFormat 
class? They only differ in the createTuple method:

Java:
{code}
@Override
protected OUT createTuple(OUT reuse) {
	Tuple result = (Tuple) reuse;
	for (int i = 0; i < parsedValues.length; i++) {
		result.setField(parsedValues[i], i);
	}
	return reuse;
}
{code}
Scala:
{code}
@Override
protected OUT createTuple(OUT reuse) {
	Preconditions.checkNotNull(tupleSerializer, "The tuple serializer must be initialised." +
		" It is not initialized if the given type was not a " +
		TupleTypeInfoBase.class.getName() + ".");

	return tupleSerializer.createInstance(parsedValues);
}
{code}

> Untangle CsvInputFormat into PojoTypeCsvInputFormat and 
> TupleTypeCsvInputFormat 
> 
>
> Key: FLINK-2692
> URL: https://issues.apache.org/jira/browse/FLINK-2692
> Project: Flink
>  Issue Type: Improvement
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Minor
>
> The {{CsvInputFormat}} currently allows returning values as a {{Tuple}} or a 
> {{Pojo}} type. As a consequence, the processing logic, which has to work for 
> both types, is overly complex. For example, the {{CsvInputFormat}} contains 
> fields which are only used when a Pojo is returned. Moreover, the pojo field 
> information is constructed by calling setter methods which have to be called 
> in a very specific order, otherwise they fail. E.g. one first has to call 
> {{setFieldTypes}} before calling {{setOrderOfPOJOFields}}, otherwise the 
> number of fields might be different. Furthermore, some of the methods can 
> only be called if the return type is a {{Pojo}} type, because they expect 
> that a {{PojoTypeInfo}} is present.
> I think the {{CsvInputFormat}} should be refactored to make the code more 
> easily maintainable. I propose to split it up into a 
> {{PojoTypeCsvInputFormat}} and a {{TupleTypeCsvInputFormat}} which take all 
> the required information via their constructors instead of using the 
> {{setFields}} and {{setOrderOfPOJOFields}} approach.
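
A rough sketch of how the two proposed formats might then be constructed (the class names come from the issue title; the constructor signatures are assumptions):

{code}
// Hypothetical constructors carrying all required information up front,
// replacing the order-sensitive setFieldTypes/setOrderOfPOJOFields calls:
PojoTypeCsvInputFormat<MyPojo> pojoFormat =
	new PojoTypeCsvInputFormat<>(filePath, pojoTypeInfo, new String[] {"fieldA", "fieldB"});

TupleTypeCsvInputFormat<Tuple2<Integer, String>> tupleFormat =
	new TupleTypeCsvInputFormat<>(filePath, tupleTypeInfo);
{code}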



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (FLINK-2858) Cannot build Flink Scala 2.11 with IntelliJ

2015-10-15 Thread Alexander Alexandrov (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959689#comment-14959689
 ] 

Alexander Alexandrov edited comment on FLINK-2858 at 10/15/15 10:29 PM:


[~trohrm...@apache.org] After trying things out for a while I managed to overcome 
the issue. Here are the required steps (the console part is sketched after the 
list):

* Run the `tools/change-scala-version.sh` script
* Run `mvn clean` from the console
* Optionally, forcefully activate the `scala-2.11` profile & deactivate 
`scala-2.10` from the IntelliJ Maven panel. Strictly speaking this is not 
needed because after running the script the default profile is changed to 
`scala-2.11`.
* In IntelliJ, click "Generate Sources and Update Folders for all Projects" in 
the same panel (but maybe a simple "Reimport all Maven projects" will be enough)
* In IntelliJ, run "Build > Rebuild Project"

This worked for me on the latest IntelliJ Ultimate (14.1.5).
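
For reference, the console part of these steps boils down to something like the 
following (whether the script takes an explicit target version argument is an 
assumption; check the script's usage):

{code}
# switch the source tree and default profile to Scala 2.11, then clean
tools/change-scala-version.sh 2.11
mvn clean
{code}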


was (Author: aalexandrov):
[~trohrm...@apache.org] After trying things out for a while I managed to overcome 
the issue. Here are the steps:

* Run the `tools/change-scala-version.sh` script
* Run `mvn clean` from the console
* Optionally, forcefully activate the `scala-2.11` profile & deactivate 
`scala-2.10` from the IntelliJ Maven panel. Strictly speaking this is not 
needed because after running the script the default profile is changed to 
`scala-2.11`.
* In IntelliJ, click "Generate Sources and Update Folders for all Projects" in 
the same panel (but maybe a simple "Reimport all Maven projects" will be enough)
* In IntelliJ, run "Build > Rebuild Project"

This worked for me on the latest IntelliJ Ultimate (14.1.5).

> Cannot build Flink Scala 2.11 with IntelliJ
> ---
>
> Key: FLINK-2858
> URL: https://issues.apache.org/jira/browse/FLINK-2858
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 0.10
>Reporter: Till Rohrmann
>
> If I activate the scala-2.11 profile from within IntelliJ (and thus 
> deactivate the scala-2.10 profile) in order to build Flink with Scala 2.11, 
> then Flink cannot be built. The problem is that some Scala macros cannot be 
> expanded because they were compiled with the wrong version (I assume 2.10).
> This makes debugging tests with Scala 2.11 in IntelliJ impossible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (FLINK-2858) Cannot build Flink Scala 2.11 with IntelliJ

2015-10-15 Thread Alexander Alexandrov (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959689#comment-14959689
 ] 

Alexander Alexandrov edited comment on FLINK-2858 at 10/15/15 10:28 PM:


[~trohrm...@apache.org] After trying things out for a bit I managed to overcome the 
issue. Here are the steps:

* Run the `tools/change-scala-version.sh` script
* Run `mvn clean` from the console
* Optionally, forcefully activate the `scala-2.11` profile & deactivate 
`scala-2.10` from the IntelliJ Maven panel. Strictly speaking this is not 
needed because after running the script the default profile is changed to 
`scala-2.11`.
* In IntelliJ, click "Generate Sources and Update Folders for all Projects" in 
the same panel (but maybe a simple "Reimport all Maven projects" will be enough)
* In IntelliJ, run "Build > Rebuild Project"

This worked for me on the latest IntelliJ Ultimate (14.1.5).


was (Author: aalexandrov):
[~trohrm...@apache.org] After trying things out for a bit I managed to overcome the 
issue. Here are the steps:

* Run the `tools/change-scala-version.sh` script
* Run `mvn clean` from the console
* Optionally, forcefully activate the `scala-2.11` profile & deactivate 
`scala-2.10` from the IntelliJ Maven panel. Strictly speaking this is not 
needed because after running the script the default profile is changed to 
`scala-2.11`.
* Click "Generate Sources and Update Folders for all Projects" in the same 
panel (but maybe a simple "Reimport all Maven projects" will be enough)
* Run "Build > Rebuild Project"

This worked for me on the latest IntelliJ Ultimate (14.1.5).

> Cannot build Flink Scala 2.11 with IntelliJ
> ---
>
> Key: FLINK-2858
> URL: https://issues.apache.org/jira/browse/FLINK-2858
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 0.10
>Reporter: Till Rohrmann
>
> If I activate the scala-2.11 profile from within IntelliJ (and thus 
> deactivate the scala-2.10 profile) in order to build Flink with Scala 2.11, 
> then Flink cannot be built. The problem is that some Scala macros cannot be 
> expanded because they were compiled with the wrong version (I assume 2.10).
> This makes debugging tests with Scala 2.11 in IntelliJ impossible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2858) Cannot build Flink Scala 2.11 with IntelliJ

2015-10-15 Thread Alexander Alexandrov (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959750#comment-14959750
 ] 

Alexander Alexandrov commented on FLINK-2858:
-

I think that the current build description in the docs does not properly 
reflect what needs to be done. 

All you need to do is run the `tools/change-scala-version.sh` script; the 
default Scala profile is then changed automatically.

I tried to update this in [PR 1260|https://github.com/apache/flink/pull/1260].

> Cannot build Flink Scala 2.11 with IntelliJ
> ---
>
> Key: FLINK-2858
> URL: https://issues.apache.org/jira/browse/FLINK-2858
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 0.10
>Reporter: Till Rohrmann
>
> If I activate the scala-2.11 profile from within IntelliJ (and thus 
> deactivate the scala-2.10 profile) in order to build Flink with Scala 2.11, 
> then Flink cannot be built. The problem is that some Scala macros cannot be 
> expanded because they were compiled with the wrong version (I assume 2.10).
> This makes debugging tests with Scala 2.11 in IntelliJ impossible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2858) Cannot build Flink Scala 2.11 with IntelliJ

2015-10-15 Thread Alexander Alexandrov (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959689#comment-14959689
 ] 

Alexander Alexandrov commented on FLINK-2858:
-

[~trohrm...@apache.org] After trying things out for a bit I managed to overcome the 
issue. Here are the steps:

* Run the `tools/change-scala-version.sh` script
* Run `mvn clean` from the console
* Activate the `scala-2.11` profile & deactivate `scala-2.10` from the IntelliJ 
Maven panel
* Click "Generate Sources and Update Folders for all Projects" in the same 
panel (but maybe a simple "Reimport all Maven projects" will be enough)
* Run "Build > Rebuild Project"

This worked for me on the latest IntelliJ Ultimate (14.1.5).

> Cannot build Flink Scala 2.11 with IntelliJ
> ---
>
> Key: FLINK-2858
> URL: https://issues.apache.org/jira/browse/FLINK-2858
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 0.10
>Reporter: Till Rohrmann
>
> If I activate the scala-2.11 profile from within IntelliJ (and thus 
> deactivate the scala-2.10 profile) in order to build Flink with Scala 2.11, 
> then Flink cannot be built. The problem is that some Scala macros cannot be 
> expanded because they were compiled with the wrong version (I assume 2.10).
> This makes debugging tests with Scala 2.11 in IntelliJ impossible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (FLINK-2858) Cannot build Flink Scala 2.11 with IntelliJ

2015-10-15 Thread Alexander Alexandrov (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959689#comment-14959689
 ] 

Alexander Alexandrov edited comment on FLINK-2858 at 10/15/15 10:27 PM:


[~trohrm...@apache.org] After trying things out for a bit I managed to overcome the 
issue. Here are the steps:

* Run the `tools/change-scala-version.sh` script
* Run `mvn clean` from the console
* Optionally, forcefully activate the `scala-2.11` profile & deactivate 
`scala-2.10` from the IntelliJ Maven panel. Strictly speaking this is not 
needed because after running the script the default profile is changed to 
`scala-2.11`.
* Click "Generate Sources and Update Folders for all Projects" in the same 
panel (but maybe a simple "Reimport all Maven projects" will be enough)
* Run "Build > Rebuild Project"

This worked for me on the latest IntelliJ Ultimate (14.1.5).


was (Author: aalexandrov):
[~trohrm...@apache.org] After trying things out for a bit I managed to overcome the 
issue. Here are the steps:

* Run the `tools/change-scala-version.sh` script
* Run `mvn clean` from the console
* Activate the `scala-2.11` profile & deactivate `scala-2.10` from the IntelliJ 
Maven panel
* Click "Generate Sources and Update Folders for all Projects" in the same 
panel (but maybe a simple "Reimport all Maven projects" will be enough)
* Run "Build > Rebuild Project"

This worked for me on the latest IntelliJ Ultimate (14.1.5).

> Cannot build Flink Scala 2.11 with IntelliJ
> ---
>
> Key: FLINK-2858
> URL: https://issues.apache.org/jira/browse/FLINK-2858
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 0.10
>Reporter: Till Rohrmann
>
> If I activate the scala-2.11 profile from within IntelliJ (and thus 
> deactivate the scala-2.10 profile) in order to build Flink with Scala 2.11, 
> then Flink cannot be built. The problem is that some Scala macros cannot be 
> expanded because they were compiled with the wrong version (I assume 2.10).
> This makes debugging tests with Scala 2.11 in IntelliJ impossible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: Removed broken dependency to flink-spargel.

2015-10-15 Thread aalexandrov
GitHub user aalexandrov opened a pull request:

https://github.com/apache/flink/pull/1259

Removed broken dependency to flink-spargel.

I think that this should have been removed as part of #1229.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/aalexandrov/flink patch-2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1259.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1259


commit dff19c4b27f201cfcbe104073005bc9aad2cbb45
Author: Alexander Alexandrov 
Date:   2015-10-15T21:18:39Z

Removed broken dependency to flink-spargel.

I think that this should have been removed as part of #1229.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

