[jira] [Commented] (FLINK-3919) Distributed Linear Algebra: row-based matrix

2016-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313587#comment-15313587
 ] 

ASF GitHub Bot commented on FLINK-3919:
---

Github user chiwanpark commented on a diff in the pull request:

https://github.com/apache/flink/pull/1996#discussion_r65657170
  
--- Diff: 
flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/math/distributed/DistributedRowMatrix.scala
 ---
@@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.math.distributed
+
+import breeze.linalg.{CSCMatrix => BreezeSparseMatrix, Matrix => BreezeMatrix, Vector => BreezeVector}
+import org.apache.flink.api.scala._
+import org.apache.flink.ml.math.Breeze._
+import org.apache.flink.ml.math.{Matrix => FlinkMatrix, _}
+
+/**
+  * Distributed row-major matrix representation.
+  * @param data DataSet of the matrix rows, indexed by row.
+  * @param numRows Number of rows.
+  * @param numCols Number of columns.
+  */
+class DistributedRowMatrix(val data: DataSet[IndexedRow],
+                           val numRows: Int,
+                           val numCols: Int)
+  extends DistributedMatrix {
+
+
+
+  /**
+    * Collects the data as a sequence of coordinates associated with their values.
+    * @return Sequence of (rowIndex, columnIndex, value) tuples.
+    */
+  def toCOO: Seq[(Int, Int, Double)] = {
+
+    val localRows = data.collect()
+
+    for (IndexedRow(rowIndex, vector) <- localRows;
+         (columnIndex, value) <- vector) yield (rowIndex, columnIndex, value)
+  }
+
+  /**
+    * Collects the data in the form of a local SparseMatrix.
+    * @return Local SparseMatrix holding all entries of this matrix.
+    */
+  def toLocalSparseMatrix: SparseMatrix = {
+    val localMatrix =
+      SparseMatrix.fromCOO(this.numRows, this.numCols, this.toCOO)
+    require(localMatrix.numRows == this.numRows)
+    require(localMatrix.numCols == this.numCols)
+    localMatrix
+  }
+
+  // TODO: convert to dense representation on the distributed matrix and collect it afterward
+  def toLocalDenseMatrix: DenseMatrix = this.toLocalSparseMatrix.toDenseMatrix
+
+  /**
+    * Applies a higher-order function to each pair of rows with the same index.
+    * Rows missing on one side are padded with an all-zero sparse row.
+    * @param fun Binary function applied to the two aligned rows.
+    * @param other The other matrix to operate on.
+    * @return A new DistributedRowMatrix holding the row-wise results.
+    */
+  def byRowOperation(fun: (Vector, Vector) => Vector,
+                     other: DistributedRowMatrix): DistributedRowMatrix = {
+    val otherData = other.data
+    require(this.numCols == other.numCols)
+    require(this.numRows == other.numRows)
+
+    val result = this.data
+      .fullOuterJoin(otherData)
+      .where("rowIndex")
+      .equalTo("rowIndex")(
+        (left: IndexedRow, right: IndexedRow) => {
+          // A null side means the row exists only in the other matrix;
+          // substitute an all-zero sparse row of the same size.
+          val row1 = Option(left) match {
+            case Some(row: IndexedRow) => row
+            case None =>
+              IndexedRow(
+                right.rowIndex,
+                SparseVector.fromCOO(right.values.size, List((0, 0.0))))
+          }
+          val row2 = Option(right) match {
+            case Some(row: IndexedRow) => row
+            case None =>
+              IndexedRow(
+                left.rowIndex,
+                SparseVector.fromCOO(left.values.size, List((0, 0.0))))
+          }
+          IndexedRow(row1.rowIndex, fun(row1.values, row2.values))
+        }
+      )
+    new DistributedRowMatrix(result, numRows, numCols)
+  }
+
+  /**
+    * Adds this matrix to another matrix.
+    * @param other The matrix to add.
+    * @return The element-wise sum of the two matrices.
+    */
+  def sum(other: DistributedRowMatrix): DistributedRowMatrix = {
+    val sumFunction: (Vector, Vector) => Vector = (x: Vector, y: Vector) =>
+      (x.asBreeze + y.asBreeze).fromBreeze
+    this.byRowOperation(sumFunction, other)
+  }
+
+  /**
+    * Subtracts another matrix from this matrix.
+    * @param other The matrix to subtract.
+    * @return The element-wise difference of the two matrices.
+    */
+  def subtract(other: DistributedRowMatrix): DistributedRowMatrix = {
+    val subFunction: (Vector, Vector) => Vector = (x: Vector, y: Vector) =>
+      (x.asBreeze - y.asBreeze).fromBreeze
+    this.byRowOperation(subFunction, other)
+  }
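For readers following the join logic above: the full outer join pads a missing side with an all-zero row before applying `fun`. The same semantics can be sketched locally without Flink, using plain Scala collections (the `Map`-based types and names here are hypothetical stand-ins, not the PR's API):

```scala
// Local sketch of byRowOperation's semantics: align two row-indexed
// matrices by row index, padding a missing row with zeros, then apply `fun`.
object RowOpSketch {
  type Row = Vector[Double]

  def byRowOperation(fun: (Row, Row) => Row,
                     a: Map[Int, Row],
                     b: Map[Int, Row],
                     numCols: Int): Map[Int, Row] = {
    val zero = Vector.fill(numCols)(0.0)   // stand-in for the all-zero SparseVector
    val indices = a.keySet union b.keySet  // "full outer join" on rowIndex
    indices.map { i =>
      i -> fun(a.getOrElse(i, zero), b.getOrElse(i, zero))
    }.toMap
  }

  def main(args: Array[String]): Unit = {
    val m1 = Map(0 -> Vector(1.0, 2.0))  // row 1 missing on this side
    val m2 = Map(0 -> Vector(3.0, 4.0), 1 -> Vector(5.0, 6.0))
    val sum = byRowOperation((x, y) => x.zip(y).map { case (p, q) => p + q }, m1, m2, 2)
    println(sum)  // row 1 of m1 is zero-padded before the sum
  }
}
```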

[jira] [Commented] (FLINK-1873) Distributed matrix implementation

2016-06-02 Thread Chiwan Park (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15313585#comment-15313585
 ] 

Chiwan Park commented on FLINK-1873:


I think we can use this issue (FLINK-1873) as a finalizing issue, including the 
documentation.

> Distributed matrix implementation
> -
>
> Key: FLINK-1873
> URL: https://issues.apache.org/jira/browse/FLINK-1873
> Project: Flink
>  Issue Type: New Feature
>  Components: Machine Learning Library
>Reporter: liaoyuxi
>Assignee: Simone Robutti
>  Labels: ML
>
> It would help to implement machine learning algorithm more quickly and 
> concise if Flink would provide support for storing data and computation in 
> distributed matrix. The design of the implementation is attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4013) GraphAlgorithms to simplify directed and undirected graphs

2016-06-02 Thread Greg Hogan (JIRA)
Greg Hogan created FLINK-4013:
-

 Summary: GraphAlgorithms to simplify directed and undirected graphs
 Key: FLINK-4013
 URL: https://issues.apache.org/jira/browse/FLINK-4013
 Project: Flink
  Issue Type: New Feature
  Components: Gelly
Affects Versions: 1.1.0
Reporter: Greg Hogan
Assignee: Greg Hogan
Priority: Minor
 Fix For: 1.1.0


Create a directed {{GraphAlgorithm}} to remove self-loops and duplicate edges 
and an undirected {{GraphAlgorithm}} to symmetrize and remove self-loops and 
duplicate edges.

Remove {{RMatGraph.setSimpleGraph}} and the associated logic.
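The two simplify operations described above amount to straightforward edge-list transformations. A plain-Scala sketch (a hypothetical pair-based `Edge` type, not Gelly's API):

```scala
// Sketch of graph simplification on a plain edge list.
object SimplifySketch {
  type Edge = (Long, Long)  // (source, target)

  // Directed: drop self-loops and duplicate edges.
  def simplifyDirected(edges: Seq[Edge]): Seq[Edge] =
    edges.filter { case (s, t) => s != t }.distinct

  // Undirected: additionally symmetrize so every edge appears in both directions.
  def simplifyUndirected(edges: Seq[Edge]): Seq[Edge] =
    simplifyDirected(edges.flatMap { case (s, t) => Seq((s, t), (t, s)) })

  def main(args: Array[String]): Unit = {
    val e = Seq((1L, 2L), (1L, 2L), (3L, 3L), (2L, 1L))
    println(simplifyDirected(e))    // self-loop and duplicate removed
    println(simplifyUndirected(e))  // both directions, no loops or duplicates
  }
}
```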





[jira] [Closed] (FLINK-4008) Hello,I am not able to create jar of flink-streaming-connectors ...I am able to create jar of others like twitter,kafka,flume but I am not able to create jar of flink-stre

2016-06-02 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-4008.
---
Resolution: Not A Problem

Please post questions to the user mailing list.

> Hello,I am not able to create jar of flink-streaming-connectors ...I am able 
> to create jar of others like twitter,kafka,flume but I am not able to create 
> jar of flink-streaming connectors ?? How can I create this jar ??
> ---
>
> Key: FLINK-4008
> URL: https://issues.apache.org/jira/browse/FLINK-4008
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming, Streaming Connectors
>Affects Versions: 1.1.0
>Reporter: Akshay Shingote
>
> I filed an issue here https://github.com/apache/flink/pull/2058 ... I want to 
> know how can we create jar of Flink-Streaming-Connectors ??





[jira] [Closed] (FLINK-4007) Flink Kafka Consumer throws Null Pointer Exception when using DataStream keyBy

2016-06-02 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-4007.
---
Resolution: Not A Problem

I responded to your Stack Overflow question.

> Flink Kafka Consumer throws Null Pointer Exception when using DataStream keyBy
> --
>
> Key: FLINK-4007
> URL: https://issues.apache.org/jira/browse/FLINK-4007
> Project: Flink
>  Issue Type: Bug
>  Components: CEP, DataStream API
>Affects Versions: 1.0.1
>Reporter: Akshay Shingote
>
> Hello,yesterday I filed an issue here 
> http://stackoverflow.com/questions/37568822/flink-kafka-consumer-throws-null-pointer-exception-when-using-datastream-key-by
>    I want to know how to resolve this issue ?? I am not finding any other 
> example of Flink Kafka Consumer...Thank You





[jira] [Commented] (FLINK-1979) Implement Loss Functions

2016-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312435#comment-15312435
 ] 

ASF GitHub Bot commented on FLINK-1979:
---

Github user chiwanpark commented on the issue:

https://github.com/apache/flink/pull/1985
  
Okay, please ping me when the PR is updated.


> Implement Loss Functions
> 
>
> Key: FLINK-1979
> URL: https://issues.apache.org/jira/browse/FLINK-1979
> Project: Flink
>  Issue Type: Improvement
>  Components: Machine Learning Library
>Reporter: Johannes Günther
>Assignee: Johannes Günther
>Priority: Minor
>  Labels: ML
>
> For convex optimization problems, optimizer methods like SGD rely on a 
> pluggable implementation of a loss function and its first derivative.
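A minimal sketch of what such pluggable losses look like, each supplying its value and first derivative with respect to the prediction (the trait and object names here are illustrative, not the PR's actual interface):

```scala
// Sketch of pluggable loss functions exposing value and first derivative,
// as SGD-style optimizers require. Labels are assumed to be in {-1, +1}.
object LossSketch {
  trait LossFunction {
    def loss(prediction: Double, label: Double): Double
    def derivative(prediction: Double, label: Double): Double
  }

  // Logistic loss: log(1 + exp(-y * p)); derivative: -y / (1 + exp(y * p)).
  object LogisticLoss extends LossFunction {
    def loss(p: Double, y: Double): Double = math.log(1.0 + math.exp(-y * p))
    def derivative(p: Double, y: Double): Double = -y / (1.0 + math.exp(y * p))
  }

  // Hinge loss: max(0, 1 - y * p); subgradient -y inside the margin, else 0.
  object HingeLoss extends LossFunction {
    def loss(p: Double, y: Double): Double = math.max(0.0, 1.0 - y * p)
    def derivative(p: Double, y: Double): Double = if (y * p < 1.0) -y else 0.0
  }

  def main(args: Array[String]): Unit = {
    println(LogisticLoss.loss(0.0, 1.0))  // ln 2
    println(HingeLoss.loss(0.5, 1.0))     // inside the margin
  }
}
```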






[jira] [Closed] (FLINK-4010) Scala Shell tests may fail because of a locked STDIN

2016-06-02 Thread Maximilian Michels (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maximilian Michels closed FLINK-4010.
-
Resolution: Not A Problem

The Shell doesn't wait for input at STDIN in the tests.

> Scala Shell tests may fail because of a locked STDIN
> 
>
> Key: FLINK-4010
> URL: https://issues.apache.org/jira/browse/FLINK-4010
> Project: Flink
>  Issue Type: Test
>  Components: Scala Shell, Tests
>Affects Versions: 1.1.0
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
> Fix For: 1.1.0
>
>
> The Surefire plugin uses STDIN to communicate with forked processes. When the 
> Surefire plugin and the Scala Shell synchronize on the STDIN this may result 
> in a deadlock.





[jira] [Closed] (FLINK-3806) Revert use of DataSet.count() in Gelly

2016-06-02 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan closed FLINK-3806.
-
Resolution: Fixed

Fixed in 36ad78c0821fdae0a69371c67602dd2a7955e4a8

> Revert use of DataSet.count() in Gelly
> --
>
> Key: FLINK-3806
> URL: https://issues.apache.org/jira/browse/FLINK-3806
> Project: Flink
>  Issue Type: Improvement
>  Components: Gelly
>Affects Versions: 1.1.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Critical
> Fix For: 1.1.0
>
>
> FLINK-1632 replaced {{GraphUtils.count}} with {{DataSetUtils.count}}. The 
> former returns a {{DataSet}} while the latter executes a job to return a Java 
> value.
> {{DataSetUtils.count}} is called from {{Graph.numberOfVertices}} and 
> {{Graph.numberOfEdges}} which are called from {{GatherSumApplyIteration}} and 
> {{ScatterGatherIteration}} as well as the {{PageRank}} algorithms when the 
> user does not pass the number of vertices as a parameter.
> As noted in FLINK-1632, this does make the code simpler but if my 
> understanding is correct will materialize the Graph twice. The Graph will 
> need to be reread from input, regenerated, or recomputed by preceding 
> algorithms.
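The double-materialization concern can be seen in miniature: if the input is a recipe that is re-run on every execution, an eagerly executed count costs one extra pass over it on top of the pass done by the actual job (a plain-Scala sketch of the idea, not Flink's execution model):

```scala
// Sketch: an eager count() triggers a separate evaluation of a lazily
// defined input, in addition to the pass done by the actual job.
object CountSketch {
  var evaluations = 0

  // Stand-in for a DataSet: a recipe that is re-run on each execution.
  def dataset(): Seq[Int] = { evaluations += 1; (1 to 5).toSeq }

  def main(args: Array[String]): Unit = {
    val n = dataset().size              // eager count: first evaluation
    val doubled = dataset().map(_ * 2)  // the real job: second evaluation
    println((n, doubled.sum, evaluations))
  }
}
```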





[jira] [Closed] (FLINK-3945) Degree annotation for directed graphs

2016-06-02 Thread Greg Hogan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Hogan closed FLINK-3945.
-
Resolution: Implemented

Implemented in 65545c2ed46c17df6abd85299b91bad2529cd42c

> Degree annotation for directed graphs
> -
>
> Key: FLINK-3945
> URL: https://issues.apache.org/jira/browse/FLINK-3945
> Project: Flink
>  Issue Type: Improvement
>  Components: Gelly
>Affects Versions: 1.1.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
> Fix For: 1.1.0
>
>
> There is a third degree count for vertices in directed graphs which is the 
> distinct count of out- and in-neighbors. This also adds edge annotation of 
> the vertex degrees for directed graphs.
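The "third degree count" mentioned above is the number of distinct neighbors regardless of edge direction; for a vertex with a reciprocal neighbor it is smaller than out-degree plus in-degree. A plain-Scala sketch over an edge list (helper names are hypothetical, not Gelly's API):

```scala
// Sketch of the three degree counts of a vertex in a directed graph:
// out-degree, in-degree, and the distinct count of out- and in-neighbors.
object DegreeSketch {
  type Edge = (Long, Long)  // (source, target)

  def outDegree(v: Long, edges: Seq[Edge]): Int = edges.count(_._1 == v)
  def inDegree(v: Long, edges: Seq[Edge]): Int = edges.count(_._2 == v)

  // Distinct neighbors in either direction; a reciprocal neighbor
  // contributes once, so degree <= outDegree + inDegree.
  def degree(v: Long, edges: Seq[Edge]): Int =
    (edges.collect { case (`v`, t) => t } ++
     edges.collect { case (s, `v`) => s }).distinct.size

  def main(args: Array[String]): Unit = {
    val e = Seq((1L, 2L), (2L, 1L), (1L, 3L))
    println((outDegree(1L, e), inDegree(1L, e), degree(1L, e)))  // (2, 1, 2)
  }
}
```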





[jira] [Commented] (FLINK-3945) Degree annotation for directed graphs

2016-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312399#comment-15312399
 ] 

ASF GitHub Bot commented on FLINK-3945:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2021


> Degree annotation for directed graphs
> -
>
> Key: FLINK-3945
> URL: https://issues.apache.org/jira/browse/FLINK-3945
> Project: Flink
>  Issue Type: Improvement
>  Components: Gelly
>Affects Versions: 1.1.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
> Fix For: 1.1.0
>
>
> There is a third degree count for vertices in directed graphs which is the 
> distinct count of out- and in-neighbors. This also adds edge annotation of 
> the vertex degrees for directed graphs.





[jira] [Commented] (FLINK-3806) Revert use of DataSet.count() in Gelly

2016-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312398#comment-15312398
 ] 

ASF GitHub Bot commented on FLINK-3806:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/2036


> Revert use of DataSet.count() in Gelly
> --
>
> Key: FLINK-3806
> URL: https://issues.apache.org/jira/browse/FLINK-3806
> Project: Flink
>  Issue Type: Improvement
>  Components: Gelly
>Affects Versions: 1.1.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Critical
> Fix For: 1.1.0
>
>
> FLINK-1632 replaced {{GraphUtils.count}} with {{DataSetUtils.count}}. The 
> former returns a {{DataSet}} while the latter executes a job to return a Java 
> value.
> {{DataSetUtils.count}} is called from {{Graph.numberOfVertices}} and 
> {{Graph.numberOfEdges}} which are called from {{GatherSumApplyIteration}} and 
> {{ScatterGatherIteration}} as well as the {{PageRank}} algorithms when the 
> user does not pass the number of vertices as a parameter.
> As noted in FLINK-1632, this does make the code simpler but if my 
> understanding is correct will materialize the Graph twice. The Graph will 
> need to be reread from input, regenerated, or recomputed by preceding 
> algorithms.








[jira] [Created] (FLINK-4012) Docs: Links to "Iterations" are broken (404)

2016-06-02 Thread Bernd Louis (JIRA)
Bernd Louis created FLINK-4012:
--

 Summary: Docs: Links to "Iterations" are broken (404) 
 Key: FLINK-4012
 URL: https://issues.apache.org/jira/browse/FLINK-4012
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.0.2, 1.1.0
Reporter: Bernd Louis
Priority: Trivial
 Fix For: 1.1.0, 1.0.2


1. Browse: 
https://ci.apache.org/projects/flink/flink-docs-release-1.0/apis/common/index.html
 or 
https://ci.apache.org/projects/flink/flink-docs-master/apis/common/index.html
2. Find the text "information on iterations (see Iterations)."
3. Click "Iterations" 






[jira] [Commented] (FLINK-4011) Unable to access completed job in web frontend

2016-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312374#comment-15312374
 ] 

ASF GitHub Bot commented on FLINK-4011:
---

GitHub user rmetzger opened a pull request:

https://github.com/apache/flink/pull/2065

[FLINK-4011] Keep UserCodeClassLoader in archived ExecutionGraphs

Currently, completed jobs cannot be accessed in the web frontend, because 
the classloader passed to `SerializedValue` is always null.

There are different approaches to resolve this issue:
- Use the system classloader to deserialize the EC. This means that as soon as the EC contains user code, we cannot deserialize it. The web frontend will show less information.
- In `ExecutionGraph.prepareForArchiving()`, we deserialize the EC into a special field, then set the user-code classloader free for GC. This would be a hacky approach because we would have two ECs (serialized and regular instance) in the EG.
- Keep the user-code classloader in the EC. This means the classes of the job cannot be unloaded from the JobManager JVM until the job has been removed from the JM history.

I'm open to discussing more approaches or alternative solutions.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rmetzger/flink flink4011

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2065.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2065


commit 388e75c95cd4e63419af429102e8afcfcd11cb8f
Author: Robert Metzger 
Date:   2016-06-02T14:08:07Z

[FLINK-4011] Keep UserCodeClassLoader in archived ExecutionGraphs




> Unable to access completed job in web frontend
> --
>
> Key: FLINK-4011
> URL: https://issues.apache.org/jira/browse/FLINK-4011
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 1.1.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Critical
>
> In the current master, I'm not able to access a finished job's detail page.
> The JobManager logs shows the following exception:
> {code}
> 2016-06-02 15:23:08,581 WARN  
> org.apache.flink.runtime.webmonitor.RuntimeMonitorHandler - Error while 
> handling request
> java.lang.RuntimeException: Couldn't deserialize ExecutionConfig.
> at 
> org.apache.flink.runtime.webmonitor.handlers.JobConfigHandler.handleRequest(JobConfigHandler.java:52)
> at 
> org.apache.flink.runtime.webmonitor.handlers.AbstractExecutionGraphRequestHandler.handleRequest(AbstractExecutionGraphRequestHandler.java:61)
> at 
> org.apache.flink.runtime.webmonitor.RuntimeMonitorHandler.respondAsLeader(RuntimeMonitorHandler.java:88)
> at 
> org.apache.flink.runtime.webmonitor.RuntimeMonitorHandlerBase.channelRead0(RuntimeMonitorHandlerBase.java:84)
> at 
> org.apache.flink.runtime.webmonitor.RuntimeMonitorHandlerBase.channelRead0(RuntimeMonitorHandlerBase.java:44)
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> at io.netty.handler.codec.http.router.Handler.routed(Handler.java:62)
> at 
> io.netty.handler.codec.http.router.DualAbstractHandler.channelRead0(DualAbstractHandler.java:57)
> at 
> io.netty.handler.codec.http.router.DualAbstractHandler.channelRead0(DualAbstractHandler.java:20)
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> at 
> org.apache.flink.runtime.webmonitor.HttpRequestHandler.channelRead0(HttpRequestHandler.java:105)
> at 
> org.apache.flink.runtime.webmonitor.HttpRequestHandler.channelRead0(HttpRequestHandler.java:65)
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> at 
> 
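The mechanism behind the exception above is that Java deserialization resolves classes through a classloader, so user-code classes are only visible if the user-code classloader is supplied. A generic sketch of classloader-aware deserialization using plain JDK APIs (this is not Flink's `SerializedValue` implementation):

```scala
import java.io._

// Sketch: deserialize bytes while resolving classes against a given
// classloader, as archived user-code objects would require.
object DeserializeSketch {
  def deserialize(bytes: Array[Byte], cl: ClassLoader): AnyRef = {
    val in = new ObjectInputStream(new ByteArrayInputStream(bytes)) {
      // Route class resolution through the supplied classloader instead
      // of the default (caller's) one.
      override def resolveClass(desc: ObjectStreamClass): Class[_] =
        Class.forName(desc.getName, false, cl)
    }
    try in.readObject() finally in.close()
  }

  def main(args: Array[String]): Unit = {
    val bos = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(bos)
    out.writeObject("archived value"); out.close()
    println(deserialize(bos.toByteArray, Thread.currentThread.getContextClassLoader))
  }
}
```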

[jira] [Updated] (FLINK-4009) Scala Shell fails to find library for inclusion in test

2016-06-02 Thread Maximilian Michels (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maximilian Michels updated FLINK-4009:
--
Issue Type: Sub-task  (was: Test)
Parent: FLINK-3454

> Scala Shell fails to find library for inclusion in test
> ---
>
> Key: FLINK-4009
> URL: https://issues.apache.org/jira/browse/FLINK-4009
> Project: Flink
>  Issue Type: Sub-task
>  Components: Scala Shell, Tests
>Affects Versions: 1.1.0
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>  Labels: test-stability
> Fix For: 1.1.0
>
>
> The Scala Shell test fails to find the flink-ml library jar in the target 
> folder when executing with Intellij. This is due to its working directory 
> being expected in "flink-scala-shell/target" when it is in fact 
> "flink-scala-shell". When executed with Maven, this works fine because the 
> Shade plugin changes the basedir from the project root to the /target 
> folder*. 
> As per [~till.rohrmann] and [~greghogan] suggestions we could simply add 
> flink-ml as a test dependency and look for the jar path in the classpath.
> \* Because we have the dependencyReducedPomLocation set to /target/.
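The suggested fix, adding flink-ml as a test dependency and locating its jar on the classpath rather than in a hard-coded directory, could look roughly like this (a sketch; the "flink-ml" jar-name pattern is an assumption):

```scala
import java.io.File

// Sketch: find a dependency's jar by scanning the JVM classpath instead of
// relying on the test's working directory.
object ClasspathSketch {
  def findJar(namePart: String): Option[String] =
    sys.props("java.class.path")
      .split(File.pathSeparator)
      .find(p => p.contains(namePart) && p.endsWith(".jar"))

  def main(args: Array[String]): Unit =
    println(findJar("flink-ml").getOrElse("flink-ml jar not on classpath"))
}
```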





[jira] [Updated] (FLINK-4009) Scala Shell fails to find library for inclusion in test

2016-06-02 Thread Maximilian Michels (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maximilian Michels updated FLINK-4009:
--
Description: 
The Scala Shell test fails to find the flink-ml library jar in the target 
folder when executing with Intellij. This is due to its working directory being 
expected in "flink-scala-shell/target" when it is in fact "flink-scala-shell". 
When executed with Maven, this works fine because the Shade plugin changes the 
basedir from the project root to the /target folder*. 

As per [~till.rohrmann] and [~greghogan] suggestions we could simply add 
flink-ml as a test dependency and look for the jar path in the classpath.

\* Because we have the dependencyReducedPomLocation set to /target/.


  was:
The Scala Shell test fails to find the flink-ml library jar in the target 
folder when executing with Intellij. This is due to its working directory being 
expected in "flink-scala-shell/target" when it is in fact "flink-scala-shell". 
When executed with Maven, this works fine because the Shade plugin changes the 
basedir from the project root to the /target folder*. 

As per [~till.rohrmann] and [~greghogan] suggestions we could simply add 
flink-ml as a test dependency and look for the jar path in the classpath.

* Because we have the dependencyReducedPomLocation set to /target/.



> Scala Shell fails to find library for inclusion in test
> ---
>
> Key: FLINK-4009
> URL: https://issues.apache.org/jira/browse/FLINK-4009
> Project: Flink
>  Issue Type: Test
>  Components: Scala Shell, Tests
>Affects Versions: 1.1.0
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>  Labels: test-stability
> Fix For: 1.1.0
>
>
> The Scala Shell test fails to find the flink-ml library jar in the target 
> folder when executing with Intellij. This is due to its working directory 
> being expected in "flink-scala-shell/target" when it is in fact 
> "flink-scala-shell". When executed with Maven, this works fine because the 
> Shade plugin changes the basedir from the project root to the /target 
> folder*. 
> As per [~till.rohrmann] and [~greghogan] suggestions we could simply add 
> flink-ml as a test dependency and look for the jar path in the classpath.
> \* Because we have the dependencyReducedPomLocation set to /target/.





[jira] [Updated] (FLINK-4009) Scala Shell fails to find library for inclusion in test

2016-06-02 Thread Maximilian Michels (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maximilian Michels updated FLINK-4009:
--
Description: 
The Scala Shell test fails to find the flink-ml library jar in the target 
folder when executing with IntelliJ. This is because its working directory is 
expected to be "flink-scala-shell/target" when it is in fact "flink-scala-shell". 
When executed with Maven, this works fine because the Shade plugin changes the 
basedir from the project root to the /target folder*. 

As per [~till.rohrmann]'s and [~greghogan]'s suggestions, we could simply add 
flink-ml as a test dependency and look up the jar path on the classpath.

* Because we have the dependencyReducedPomLocation set to /target/.


  was:The Scala Shell test fails to find the flink-ml library jar in the target 
folder. This is due to its working directory being expected in 
"flink-scala-shell/target" when it is in fact "flink-scala-shell". I'm a bit 
puzzled why that could have changed recently. The last incident I recall where 
we had to change paths was when we introduced shading of all artifacts to 
produce effective poms (via the force-shading module). I'm assuming the change 
of paths has to do with switching from Failsafe to Surefire in FLINK-3909.


> Scala Shell fails to find library for inclusion in test
> ---
>
> Key: FLINK-4009
> URL: https://issues.apache.org/jira/browse/FLINK-4009
> Project: Flink
>  Issue Type: Test
>  Components: Scala Shell, Tests
>Affects Versions: 1.1.0
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>  Labels: test-stability
> Fix For: 1.1.0
>
>
> The Scala Shell test fails to find the flink-ml library jar in the target 
> folder when executing with Intellij. This is due to its working directory 
> being expected in "flink-scala-shell/target" when it is in fact 
> "flink-scala-shell". When executed with Maven, this works fine because the 
> Shade plugin changes the basedir from the project root to the /target 
> folder*. 
> As per [~till.rohrmann]'s and [~greghogan]'s suggestions, we could simply add 
> flink-ml as a test dependency and look up the jar path on the classpath.
> * Because we have the dependencyReducedPomLocation set to /target/.





[jira] [Commented] (FLINK-4009) Scala Shell fails to find library for inclusion in test

2016-06-02 Thread Maximilian Michels (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312296#comment-15312296
 ] 

Maximilian Michels commented on FLINK-4009:
---

Yes, this issue could be a sub task of FLINK-3454.

You added the flink-ml dependency in the pom, which ensures the library is added 
to the test's classpath. Searching that classpath for the jar is then what this 
issue could fix.

> Scala Shell fails to find library for inclusion in test
> ---
>
> Key: FLINK-4009
> URL: https://issues.apache.org/jira/browse/FLINK-4009
> Project: Flink
>  Issue Type: Test
>  Components: Scala Shell, Tests
>Affects Versions: 1.1.0
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>  Labels: test-stability
> Fix For: 1.1.0
>
>
> The Scala Shell test fails to find the flink-ml library jar in the target 
> folder. This is due to its working directory being expected in 
> "flink-scala-shell/target" when it is in fact "flink-scala-shell". I'm a bit 
> puzzled why that could have changed recently. The last incident I recall 
> where we had to change paths was when we introduced shading of all artifacts 
> to produce effective poms (via the force-shading module). I'm assuming the 
> change of paths has to do with switching from Failsafe to Surefire in 
> FLINK-3909.





[jira] [Reopened] (FLINK-4009) Scala Shell fails to find library for inclusion in test

2016-06-02 Thread Maximilian Michels (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maximilian Michels reopened FLINK-4009:
---

Reopening and converting to sub task of FLINK-3454

> Scala Shell fails to find library for inclusion in test
> ---
>
> Key: FLINK-4009
> URL: https://issues.apache.org/jira/browse/FLINK-4009
> Project: Flink
>  Issue Type: Test
>  Components: Scala Shell, Tests
>Affects Versions: 1.1.0
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>  Labels: test-stability
> Fix For: 1.1.0
>
>
> The Scala Shell test fails to find the flink-ml library jar in the target 
> folder. This is due to its working directory being expected in 
> "flink-scala-shell/target" when it is in fact "flink-scala-shell". I'm a bit 
> puzzled why that could have changed recently. The last incident I recall 
> where we had to change paths was when we introduced shading of all artifacts 
> to produce effective poms (via the force-shading module). I'm assuming the 
> change of paths has to do with switching from Failsafe to Surefire in 
> FLINK-3909.





[jira] [Created] (FLINK-4011) Unable to access completed job in web frontend

2016-06-02 Thread Robert Metzger (JIRA)
Robert Metzger created FLINK-4011:
-

 Summary: Unable to access completed job in web frontend
 Key: FLINK-4011
 URL: https://issues.apache.org/jira/browse/FLINK-4011
 Project: Flink
  Issue Type: Bug
  Components: Webfrontend
Affects Versions: 1.1.0
Reporter: Robert Metzger
Assignee: Robert Metzger
Priority: Critical


In the current master, I'm not able to access a finished job's detail page.

The JobManager logs shows the following exception:

{code}
2016-06-02 15:23:08,581 WARN  
org.apache.flink.runtime.webmonitor.RuntimeMonitorHandler - Error while 
handling request
java.lang.RuntimeException: Couldn't deserialize ExecutionConfig.
at 
org.apache.flink.runtime.webmonitor.handlers.JobConfigHandler.handleRequest(JobConfigHandler.java:52)
at 
org.apache.flink.runtime.webmonitor.handlers.AbstractExecutionGraphRequestHandler.handleRequest(AbstractExecutionGraphRequestHandler.java:61)
at 
org.apache.flink.runtime.webmonitor.RuntimeMonitorHandler.respondAsLeader(RuntimeMonitorHandler.java:88)
at 
org.apache.flink.runtime.webmonitor.RuntimeMonitorHandlerBase.channelRead0(RuntimeMonitorHandlerBase.java:84)
at 
org.apache.flink.runtime.webmonitor.RuntimeMonitorHandlerBase.channelRead0(RuntimeMonitorHandlerBase.java:44)
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at io.netty.handler.codec.http.router.Handler.routed(Handler.java:62)
at 
io.netty.handler.codec.http.router.DualAbstractHandler.channelRead0(DualAbstractHandler.java:57)
at 
io.netty.handler.codec.http.router.DualAbstractHandler.channelRead0(DualAbstractHandler.java:20)
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at 
org.apache.flink.runtime.webmonitor.HttpRequestHandler.channelRead0(HttpRequestHandler.java:105)
at 
org.apache.flink.runtime.webmonitor.HttpRequestHandler.channelRead0(HttpRequestHandler.java:65)
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at 
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242)
at 
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:147)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
at 
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
at 
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at 
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at 
org.apache.flink.util.SerializedValue.deserializeValue(SerializedValue.java:55)
at 
org.apache.flink.runtime.webmonitor.handlers.JobConfigHandler.handleRequest(JobConfigHandler.java:50)
... 31 more

{code}





[jira] [Commented] (FLINK-3806) Revert use of DataSet.count() in Gelly

2016-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312282#comment-15312282
 ] 

ASF GitHub Bot commented on FLINK-3806:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/2036
  
Will merge this ...


> Revert use of DataSet.count() in Gelly
> --
>
> Key: FLINK-3806
> URL: https://issues.apache.org/jira/browse/FLINK-3806
> Project: Flink
>  Issue Type: Improvement
>  Components: Gelly
>Affects Versions: 1.1.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Critical
> Fix For: 1.1.0
>
>
> FLINK-1632 replaced {{GraphUtils.count}} with {{DataSetUtils.count}}. The 
> former returns a {{DataSet}} while the latter executes a job to return a Java 
> value.
> {{DataSetUtils.count}} is called from {{Graph.numberOfVertices}} and 
> {{Graph.numberOfEdges}} which are called from {{GatherSumApplyIteration}} and 
> {{ScatterGatherIteration}} as well as the {{PageRank}} algorithms when the 
> user does not pass the number of vertices as a parameter.
> As noted in FLINK-1632, this does make the code simpler but if my 
> understanding is correct will materialize the Graph twice. The Graph will 
> need to be reread from input, regenerated, or recomputed by preceding 
> algorithms.





[GitHub] flink issue #2036: [FLINK-3806] [gelly] Revert use of DataSet.count()

2016-06-02 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/2036
  
Will merge this ...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4009) Scala Shell fails to find library for inclusion in test

2016-06-02 Thread Greg Hogan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312269#comment-15312269
 ] 

Greg Hogan commented on FLINK-4009:
---

I'm not seeing this as a duplicate, but rather that FLINK-3454 depends on this 
ticket per Till's solution. I don't have a fix to extract the jar location from 
the classpath.

> Scala Shell fails to find library for inclusion in test
> ---
>
> Key: FLINK-4009
> URL: https://issues.apache.org/jira/browse/FLINK-4009
> Project: Flink
>  Issue Type: Test
>  Components: Scala Shell, Tests
>Affects Versions: 1.1.0
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>  Labels: test-stability
> Fix For: 1.1.0
>
>
> The Scala Shell test fails to find the flink-ml library jar in the target 
> folder. This is due to its working directory being expected in 
> "flink-scala-shell/target" when it is in fact "flink-scala-shell". I'm a bit 
> puzzled why that could have changed recently. The last incident I recall 
> where we had to change paths was when we introduced shading of all artifacts 
> to produce effective poms (via the force-shading module). I'm assuming the 
> change of paths has to do with switching from Failsafe to Surefire in 
> FLINK-3909.





[jira] [Created] (FLINK-4010) Scala Shell tests may fail because of a locked STDIN

2016-06-02 Thread Maximilian Michels (JIRA)
Maximilian Michels created FLINK-4010:
-

 Summary: Scala Shell tests may fail because of a locked STDIN
 Key: FLINK-4010
 URL: https://issues.apache.org/jira/browse/FLINK-4010
 Project: Flink
  Issue Type: Test
  Components: Scala Shell, Tests
Affects Versions: 1.1.0
Reporter: Maximilian Michels
Assignee: Maximilian Michels
 Fix For: 1.1.0


The Surefire plugin uses STDIN to communicate with forked processes. When the 
Surefire plugin and the Scala Shell synchronize on the STDIN this may result in 
a deadlock.
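The contention is easy to picture: the forked JVM's harness and the shell both block reading the same stdin pipe, and whichever reads first starves the other. A minimal sketch of the mitigation pattern (illustrative names, not the actual Flink change) is to hand the shell-under-test its own in-memory input stream so the harness keeps exclusive use of the real stdin:

```python
import io

def run_shell(stdin_stream=None):
    """Toy REPL loop that reads from a caller-supplied stream rather than
    the process-wide sys.stdin, so a surrounding test harness (e.g. a
    Surefire-style fork controller) never contends for the real stdin."""
    stream = stdin_stream if stdin_stream is not None else io.StringIO()
    results = []
    for line in stream:
        line = line.strip()
        if line == ":quit":  # exit command, as in the Scala Shell
            break
        results.append("evaluated: " + line)
    return results
```

A test can then drive the shell with `run_shell(io.StringIO("1+1\n:quit\n"))` while the harness's own stdin remains untouched.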





[jira] [Commented] (FLINK-4009) Scala Shell fails to find library for inclusion in test

2016-06-02 Thread Maximilian Michels (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312256#comment-15312256
 ] 

Maximilian Michels commented on FLINK-4009:
---

By "using the classpath" I mean extracting the jar location from the classpath 
provided for the tests.

> Scala Shell fails to find library for inclusion in test
> ---
>
> Key: FLINK-4009
> URL: https://issues.apache.org/jira/browse/FLINK-4009
> Project: Flink
>  Issue Type: Test
>  Components: Scala Shell, Tests
>Affects Versions: 1.1.0
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>  Labels: test-stability
> Fix For: 1.1.0
>
>
> The Scala Shell test fails to find the flink-ml library jar in the target 
> folder. This is due to its working directory being expected in 
> "flink-scala-shell/target" when it is in fact "flink-scala-shell". I'm a bit 
> puzzled why that could have changed recently. The last incident I recall 
> where we had to change paths was when we introduced shading of all artifacts 
> to produce effective poms (via the force-shading module). I'm assuming the 
> change of paths has to do with switching from Failsafe to Surefire in 
> FLINK-3909.





[jira] [Closed] (FLINK-4009) Scala Shell fails to find library for inclusion in test

2016-06-02 Thread Maximilian Michels (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maximilian Michels closed FLINK-4009.
-
Resolution: Duplicate

> Scala Shell fails to find library for inclusion in test
> ---
>
> Key: FLINK-4009
> URL: https://issues.apache.org/jira/browse/FLINK-4009
> Project: Flink
>  Issue Type: Test
>  Components: Scala Shell, Tests
>Affects Versions: 1.1.0
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>  Labels: test-stability
> Fix For: 1.1.0
>
>
> The Scala Shell test fails to find the flink-ml library jar in the target 
> folder. This is due to its working directory being expected in 
> "flink-scala-shell/target" when it is in fact "flink-scala-shell". I'm a bit 
> puzzled why that could have changed recently. The last incident I recall 
> where we had to change paths was when we introduced shading of all artifacts 
> to produce effective poms (via the force-shading module). I'm assuming the 
> change of paths has to do with switching from Failsafe to Surefire in 
> FLINK-3909.





[jira] [Commented] (FLINK-4009) Scala Shell fails to find library for inclusion in test

2016-06-02 Thread Maximilian Michels (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312254#comment-15312254
 ] 

Maximilian Michels commented on FLINK-4009:
---

Yes, it is. I was running the test from IntelliJ. The baseDir is set to the 
project directory there. Running the test there didn't work. However, in the 
builds the Maven Shade plugin changes it to /target and then the path works 
fine.

I created a patch to look in both variants of the basedir. However, I think 
Till's solution to use the classpath is much better.

> Scala Shell fails to find library for inclusion in test
> ---
>
> Key: FLINK-4009
> URL: https://issues.apache.org/jira/browse/FLINK-4009
> Project: Flink
>  Issue Type: Test
>  Components: Scala Shell, Tests
>Affects Versions: 1.1.0
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>  Labels: test-stability
> Fix For: 1.1.0
>
>
> The Scala Shell test fails to find the flink-ml library jar in the target 
> folder. This is due to its working directory being expected in 
> "flink-scala-shell/target" when it is in fact "flink-scala-shell". I'm a bit 
> puzzled why that could have changed recently. The last incident I recall 
> where we had to change paths was when we introduced shading of all artifacts 
> to produce effective poms (via the force-shading module). I'm assuming the 
> change of paths has to do with switching from Failsafe to Surefire in 
> FLINK-3909.





[jira] [Commented] (FLINK-4009) Scala Shell fails to find library for inclusion in test

2016-06-02 Thread Greg Hogan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312234#comment-15312234
 ] 

Greg Hogan commented on FLINK-4009:
---

Is this fix related to [~till.rohrmann]'s suggested fix for FLINK-3454?

> What could help, though, is to find out the location of `flink-ml` and 
> `flink-dist` from the provided classpath.

> Scala Shell fails to find library for inclusion in test
> ---
>
> Key: FLINK-4009
> URL: https://issues.apache.org/jira/browse/FLINK-4009
> Project: Flink
>  Issue Type: Test
>  Components: Scala Shell, Tests
>Affects Versions: 1.1.0
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>  Labels: test-stability
> Fix For: 1.1.0
>
>
> The Scala Shell test fails to find the flink-ml library jar in the target 
> folder. This is due to its working directory being expected in 
> "flink-scala-shell/target" when it is in fact "flink-scala-shell". I'm a bit 
> puzzled why that could have changed recently. The last incident I recall 
> where we had to change paths was when we introduced shading of all artifacts 
> to produce effective poms (via the force-shading module). I'm assuming the 
> change of paths has to do with switching from Failsafe to Surefire in 
> FLINK-3909.





[jira] [Commented] (FLINK-3980) Remove ExecutionConfig.PARALLELISM_UNKNOWN

2016-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312216#comment-15312216
 ] 

ASF GitHub Bot commented on FLINK-3980:
---

GitHub user greghogan opened a pull request:

https://github.com/apache/flink/pull/2064

[FLINK-3980] [core] Remove ExecutionConfig.PARALLELISM_UNKNOWN



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greghogan/flink 
3980_remove_executionconfig_parallelism_unknown

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2064.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2064


commit 84f688d55469c0e61c4f16fef6a58e6c9fbce4ea
Author: Greg Hogan 
Date:   2016-05-26T19:34:29Z

[FLINK-3980] [core] Remove ExecutionConfig.PARALLELISM_UNKNOWN




> Remove ExecutionConfig.PARALLELISM_UNKNOWN
> --
>
> Key: FLINK-3980
> URL: https://issues.apache.org/jira/browse/FLINK-3980
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.1.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>Priority: Minor
> Fix For: 1.1.0
>
>
> FLINK-3589 added {{ExecutionConfig.PARALLELISM_DEFAULT}} and 
> {{ExecutionConfig.PARALLELISM_UNKNOWN}}. The former gave a name to the 
> constant {{-1}} and the latter was used as a default no-op when setting the 
> parallelism.
> It's nice to keep these intents separate, but given the current implementation 
> of Operator parallelism, users can get by using {{PARALLELISM_DEFAULT}}.





[GitHub] flink pull request #2064: [FLINK-3980] [core] Remove ExecutionConfig.PARALLE...

2016-06-02 Thread greghogan
GitHub user greghogan opened a pull request:

https://github.com/apache/flink/pull/2064

[FLINK-3980] [core] Remove ExecutionConfig.PARALLELISM_UNKNOWN



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/greghogan/flink 
3980_remove_executionconfig_parallelism_unknown

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2064.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2064


commit 84f688d55469c0e61c4f16fef6a58e6c9fbce4ea
Author: Greg Hogan 
Date:   2016-05-26T19:34:29Z

[FLINK-3980] [core] Remove ExecutionConfig.PARALLELISM_UNKNOWN






[jira] [Created] (FLINK-4009) Scala Shell fails to find library for inclusion in test

2016-06-02 Thread Maximilian Michels (JIRA)
Maximilian Michels created FLINK-4009:
-

 Summary: Scala Shell fails to find library for inclusion in test
 Key: FLINK-4009
 URL: https://issues.apache.org/jira/browse/FLINK-4009
 Project: Flink
  Issue Type: Test
  Components: Scala Shell, Tests
Affects Versions: 1.1.0
Reporter: Maximilian Michels
Assignee: Maximilian Michels
 Fix For: 1.1.0


The Scala Shell test fails to find the flink-ml library jar in the target 
folder. This is due to its working directory being expected in 
"flink-scala-shell/target" when it is in fact "flink-scala-shell". I'm a bit 
puzzled why that could have changed recently. The last incident I recall where 
we had to change paths was when we introduced shading of all artifacts to 
produce effective poms (via the force-shading module). I'm assuming the change 
of paths has to do with switching from Failsafe to Surefire in FLINK-3909.





[GitHub] flink issue #2063: [FLINK-4002] [py] Improve testing infraestructure

2016-06-02 Thread omaralvarez
Github user omaralvarez commented on the issue:

https://github.com/apache/flink/pull/2063
  
Ok, I will modify it and commit the corrected version.




[jira] [Commented] (FLINK-4002) [py] Improve testing infraestructure

2016-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312111#comment-15312111
 ] 

ASF GitHub Bot commented on FLINK-4002:
---

Github user omaralvarez commented on the issue:

https://github.com/apache/flink/pull/2063
  
Ok, I will modify it and commit the corrected version.


> [py] Improve testing infraestructure
> 
>
> Key: FLINK-4002
> URL: https://issues.apache.org/jira/browse/FLINK-4002
> Project: Flink
>  Issue Type: Bug
>  Components: Python API
>Affects Versions: 1.0.3
>Reporter: Omar Alvarez
>Priority: Minor
>  Labels: Python, Testing
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The Verify() test function errors out when array elements are missing:
> {code}
> env.generate_sequence(1, 5)\
>  .map(Id()).map_partition(Verify([1,2,3,4], "Sequence")).output()
> {code}
> {quote}
> IndexError: list index out of range
> {quote}
> There should also be more documentation in test functions.
> I am already working on a pull request to fix this.
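The reported IndexError comes from indexing the expected list by the incoming element's position. A bounds-checked comparison avoids it; this is a sketch of the general fix under assumed names, not the actual pyflink patch:

```python
class Verify:
    """Toy verifier that compares incoming values against an expected list
    without assuming both have the same length."""
    def __init__(self, expected, name):
        self.expected = list(expected)
        self.name = name

    def check(self, values):
        errors = []
        for i, value in enumerate(values):
            if i >= len(self.expected):
                # Extra element: report it instead of raising IndexError.
                errors.append("%s: unexpected extra value %r" % (self.name, value))
            elif value != self.expected[i]:
                errors.append("%s: got %r, expected %r"
                              % (self.name, value, self.expected[i]))
        for missing in self.expected[len(values):]:
            errors.append("%s: missing expected value %r" % (self.name, missing))
        return errors
```

With this shape, the `generate_sequence(1, 5)` example that expects `[1,2,3,4]` yields one "unexpected extra value" report rather than crashing.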





[GitHub] flink issue #2058: Not able to create flink-streaming-connectors jar

2016-06-02 Thread mrakshay
Github user mrakshay commented on the issue:

https://github.com/apache/flink/pull/2058
  
I am trying to put streaming data into Kinesis, so I am using the 
flink-streaming-kinesis connector jar, which requires the 
flink-streaming-connectors jar. The error is: Could not find artifact 
org.apache.flink:flink-streaming-connectors:pom:1.1-SNAPSHOT. So I need the 
flink-streaming-connectors jar. Do I need to build the jar myself, or can I 
resolve that dependency another way?




[jira] [Comment Edited] (FLINK-4008) Hello,I am not able to create jar of flink-streaming-connectors ...I am able to create jar of others like twitter,kafka,flume but I am not able to create jar of fl

2016-06-02 Thread Akshay Shingote (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312070#comment-15312070
 ] 

Akshay Shingote edited comment on FLINK-4008 at 6/2/16 10:03 AM:
-

I am trying to put streaming data into Kinesis, so I am using the 
flink-streaming-kinesis connector jar, which requires the 
flink-streaming-connectors jar. The error is: Could not find artifact 
org.apache.flink:flink-streaming-connectors:pom:1.1-SNAPSHOT. So I need the 
flink-streaming-connectors jar. Do I need to build the jar myself, or can I 
resolve that dependency another way?


was (Author: mrakki3110):
I am using flink-streaming-kinesis connector jar which requires 
flink-streaming-connector jar Error is  Could not find artifact 
org.apache.flink:flink-streaming-connectors:pom:1.1-SNAPSHOT ?? Hence I want 
flink-streaming-connector jar..I need to create the jar or I need to resolve 
that dependency...How can I do this ?? 

> Hello,I am not able to create jar of flink-streaming-connectors ...I am able 
> to create jar of others like twitter,kafka,flume but I am not able to create 
> jar of flink-streaming connectors ?? How can I create this jar ??
> ---
>
> Key: FLINK-4008
> URL: https://issues.apache.org/jira/browse/FLINK-4008
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming, Streaming Connectors
>Affects Versions: 1.1.0
>Reporter: Akshay Shingote
>
> I filed an issue here https://github.com/apache/flink/pull/2058 ... I want to 
> know how can we create jar of Flink-Streaming-Connectors ??





[jira] [Commented] (FLINK-4008) Hello,I am not able to create jar of flink-streaming-connectors ...I am able to create jar of others like twitter,kafka,flume but I am not able to create jar of flink-s

2016-06-02 Thread Akshay Shingote (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312070#comment-15312070
 ] 

Akshay Shingote commented on FLINK-4008:


I am using flink-streaming-kinesis connector jar which requires 
flink-streaming-connector jar Error is  Could not find artifact 
org.apache.flink:flink-streaming-connectors:pom:1.1-SNAPSHOT ?? Hence I want 
flink-streaming-connector jar..I need to create the jar or I need to resolve 
that dependency...How can I do this ?? 

> Hello,I am not able to create jar of flink-streaming-connectors ...I am able 
> to create jar of others like twitter,kafka,flume but I am not able to create 
> jar of flink-streaming connectors ?? How can I create this jar ??
> ---
>
> Key: FLINK-4008
> URL: https://issues.apache.org/jira/browse/FLINK-4008
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming, Streaming Connectors
>Affects Versions: 1.1.0
>Reporter: Akshay Shingote
>
> I filed an issue here https://github.com/apache/flink/pull/2058 ... I want to 
> know how can we create jar of Flink-Streaming-Connectors ??





[jira] [Commented] (FLINK-4002) [py] Improve testing infraestructure

2016-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312048#comment-15312048
 ] 

ASF GitHub Bot commented on FLINK-4002:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2063
  
You are correct, Verify2 should be modified.

There are two test scripts since most operations run at the same 
time, eating up a lot of memory due to the memory-mapped files; putting them 
all in one script can lead to OOM errors.


> [py] Improve testing infraestructure
> 
>
> Key: FLINK-4002
> URL: https://issues.apache.org/jira/browse/FLINK-4002
> Project: Flink
>  Issue Type: Bug
>  Components: Python API
>Affects Versions: 1.0.3
>Reporter: Omar Alvarez
>Priority: Minor
>  Labels: Python, Testing
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The Verify() test function errors out when array elements are missing:
> {code}
> env.generate_sequence(1, 5)\
>  .map(Id()).map_partition(Verify([1,2,3,4], "Sequence")).output()
> {code}
> {quote}
> IndexError: list index out of range
> {quote}
> There should also be more documentation in test functions.
> I am already working on a pull request to fix this.





[GitHub] flink issue #2063: [FLINK-4002] [py] Improve testing infraestructure

2016-06-02 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/2063
  
You are correct, Verify2 should be modified.

There are two test scripts since most operations run at the same 
time, eating up a lot of memory due to the memory-mapped files; putting them 
all in one script can lead to OOM errors.




[jira] [Commented] (FLINK-4002) [py] Improve testing infraestructure

2016-06-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15312006#comment-15312006
 ] 

ASF GitHub Bot commented on FLINK-4002:
---

GitHub user omaralvarez opened a pull request:

https://github.com/apache/flink/pull/2063

[FLINK-4002] [py] Improve testing infraestructure

The Verify() test function now does not error out when array elements are 
missing:

```python
env.generate_sequence(1, 5)\
 .map(Id()).map_partition(Verify([1,2,3,4], "Sequence")).output()
```

I have also documented test functions.

While documenting, two questions arose. First, the Verify2 function has no use 
as is; performing an `if value in self.expected:` check before:

```python
try:
self.expected.remove(value)
except Exception:
raise Exception()
```

Makes this function useless, since it will never raise and exception, if I 
am not mistaken. 
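As a quick illustration of why the guard makes the except branch unreachable, here is a standalone sketch (a hypothetical `consume` helper, not Flink's actual Verify2):

```python
def consume(value, expected):
    # The membership test guarantees remove() succeeds, so the except
    # branch below is dead code and a mismatch is never reported.
    if value in expected:
        try:
            expected.remove(value)
        except Exception:
            raise Exception("value %r not expected" % value)


remaining = [1, 2, 3, 4]
consume(2, remaining)   # removes 2
consume(99, remaining)  # unexpected value, but silently ignored
print(remaining)        # [1, 3, 4]
```

Without the `if value in expected:` guard, `remove()` would raise `ValueError` for an unexpected value and the except branch would actually fire.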

Also, I am not sure why there are two test scripts, `main_test.py` and 
`main_test2.py`. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/omaralvarez/flink py_testing

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/2063.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2063


commit 784a602167f396cdcd1201509d3c122a5a85248f
Author: omaralvarez 
Date:   2016-06-02T09:09:08Z

[FLINK-4002] [py] Improve testing infraestructure




> [py] Improve testing infraestructure
> 
>
> Key: FLINK-4002
> URL: https://issues.apache.org/jira/browse/FLINK-4002
> Project: Flink
>  Issue Type: Bug
>  Components: Python API
>Affects Versions: 1.0.3
>Reporter: Omar Alvarez
>Priority: Minor
>  Labels: Python, Testing
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The Verify() test function errors out when array elements are missing:
> {code}
> env.generate_sequence(1, 5)\
>  .map(Id()).map_partition(Verify([1,2,3,4], "Sequence")).output()
> {code}
> {quote}
> IndexError: list index out of range
> {quote}
> There should also be more documentation in test functions.
> I am already working on a pull request to fix this.







[jira] [Commented] (FLINK-4008) Hello,I am not able to create jar of flink-streaming-connectors ...I am able to create jar of others like twitter,kafka,flume but I am not able to create jar of flink-s

2016-06-02 Thread Tzu-Li (Gordon) Tai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15311890#comment-15311890
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-4008:


Hi Akshay,

flink-streaming-connectors was never intended to produce a Maven jar; it is 
an aggregator project that holds the various connectors.
You can't build jars from Maven aggregator projects, and doing so would 
require non-trivial changes to the Maven build infrastructure for the 
streaming connectors component.
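For background, an aggregator module in Maven declares `pom` packaging: Maven builds the listed child modules but produces no jar artifact for the aggregator itself. A minimal sketch with illustrative module names (not Flink's actual pom):

```xml
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.example</groupId>
  <artifactId>streaming-connectors-parent</artifactId>
  <version>1.0</version>
  <!-- "pom" packaging marks an aggregator: no jar is built for it -->
  <packaging>pom</packaging>
  <modules>
    <module>connector-kafka</module>
    <module>connector-twitter</module>
  </modules>
</project>
```

Running `mvn package` here builds `connector-kafka` and `connector-twitter` jars, but the parent itself only installs its pom.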

May I ask your use case and why you intend to do so? Also, questions on Flink 
are usually asked via the dev / user mailing lists, and not on JIRA / PRs.
Are you okay with closing the PR and JIRA?

Thanks,
Gordon


> Hello,I am not able to create jar of flink-streaming-connectors ...I am able 
> to create jar of others like twitter,kafka,flume but I am not able to create 
> jar of flink-streaming connectors ?? How can I create this jar ??
> ---
>
> Key: FLINK-4008
> URL: https://issues.apache.org/jira/browse/FLINK-4008
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming, Streaming Connectors
>Affects Versions: 1.1.0
>Reporter: Akshay Shingote
>
> I filed an issue here https://github.com/apache/flink/pull/2058 ... I want to 
> know how can we create jar of Flink-Streaming-Connectors ??





[GitHub] flink issue #2058: Not able to create flink-streaming-connectors jar

2016-06-02 Thread mrakshay
Github user mrakshay commented on the issue:

https://github.com/apache/flink/pull/2058
  
I have filed an issue on the Flink ASF JIRA: 
https://issues.apache.org/jira/issues/?filter=-1




[jira] [Created] (FLINK-4008) Hello,I am not able to create jar of flink-streaming-connectors ...I am able to create jar of others like twitter,kafka,flume but I am not able to create jar of flink-str

2016-06-02 Thread Akshay Shingote (JIRA)
Akshay Shingote created FLINK-4008:
--

 Summary: Hello,I am not able to create jar of 
flink-streaming-connectors ...I am able to create jar of others like 
twitter,kafka,flume but I am not able to create jar of flink-streaming 
connectors ?? How can I create this jar ??
 Key: FLINK-4008
 URL: https://issues.apache.org/jira/browse/FLINK-4008
 Project: Flink
  Issue Type: Bug
  Components: Streaming, Streaming Connectors
Affects Versions: 1.1.0
Reporter: Akshay Shingote


I filed an issue here https://github.com/apache/flink/pull/2058 ... I want to 
know how can we create jar of Flink-Streaming-Connectors ??





[jira] [Created] (FLINK-4007) Flink Kafka Consumer throws Null Pointer Exception when using DataStream keyBy

2016-06-02 Thread Akshay Shingote (JIRA)
Akshay Shingote created FLINK-4007:
--

 Summary: Flink Kafka Consumer throws Null Pointer Exception when 
using DataStream keyBy
 Key: FLINK-4007
 URL: https://issues.apache.org/jira/browse/FLINK-4007
 Project: Flink
  Issue Type: Bug
  Components: CEP, DataStream API
Affects Versions: 1.0.1
Reporter: Akshay Shingote


Hello, yesterday I filed an issue here: 
http://stackoverflow.com/questions/37568822/flink-kafka-consumer-throws-null-pointer-exception-when-using-datastream-key-by
I want to know how to resolve this issue. I am not finding any other example 
of the Flink Kafka Consumer. Thank you.


