[GitHub] flink pull request:

2015-11-18 Thread mxm
Github user mxm commented on the pull request:


https://github.com/apache/flink/commit/bf29de981c2bcd5cb5d33c68b158c95c8820f43d#commitcomment-14475824
  
Seems like the 1.0-SNAPSHOT binaries are missing. Looking into the problem 
right now.




[jira] [Commented] (FLINK-2233) InputFormat nextRecord Javadocs out of sync

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011290#comment-15011290
 ] 

ASF GitHub Bot commented on FLINK-2233:
---

Github user zentol closed the pull request at:

https://github.com/apache/flink/pull/1374


> InputFormat nextRecord Javadocs out of sync
> ---
>
> Key: FLINK-2233
> URL: https://issues.apache.org/jira/browse/FLINK-2233
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 0.9, 0.10.0
>Reporter: Ufuk Celebi
>Assignee: Chesnay Schepler
>
> {code}
> /**
>* Tries to read the next pair from the input. By using the return 
> value invalid records in the
>* input can be skipped.
>* 
>* When this method is called, the input format it guaranteed to be 
> opened.
>* 
>* @param reuse Object that may be reused.
>* @return Indicates whether the record could be successfully read. A 
> return value of true
>* indicates that the read was successful, a return value of 
> false indicates that the
>* current record was not read successfully and should be 
> skipped.
>* 
>* @throws IOException Thrown, if an I/O error occurred.
>*/
>   OT nextRecord(OT reuse) throws IOException;
> {code}
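
For reference, the mismatch is that the text above describes a boolean-style return while the signature returns {{OT}}. A hedged sketch of wording that would match the actual signature (an assumption about the intended contract, not the committed fix, and it reads {{null}} as "record could not be read and should be skipped"):

{code}
/**
 * Reads the next record from the input.
 *
 * When this method is called, the input format is guaranteed to be opened.
 *
 * @param reuse Object that may be reused.
 * @return The read record, or null if the current record could not be read
 *         and should be skipped.
 * @throws IOException Thrown, if an I/O error occurred.
 */
OT nextRecord(OT reuse) throws IOException;
{code}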





[GitHub] flink pull request: Fix JavaDoc of ElasticsearchSink

2015-11-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1367




[GitHub] flink pull request: [FLINK-1945][py] Python Tests less verbose

2015-11-18 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/1376#discussion_r45221988
  
--- Diff: flink-libraries/flink-python/pom.xml ---
@@ -48,6 +48,14 @@ under the License.
 
 
 
+
+org.apache.maven.plugins
+maven-surefire-plugin
+2.17
+
+1
--- End diff --

uhh...i think something broke otherwise. Let me try it real quick and see 
what happens.




[GitHub] flink pull request: [FLINK-1945][py] Python Tests less verbose

2015-11-18 Thread tillrohrmann
Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/1376#discussion_r45224052
  
--- Diff: 
flink-libraries/flink-python/src/test/python/org/apache/flink/python/api/test_type_deduction.py
 ---
@@ -58,5 +58,5 @@
 
 msg.output()
--- End diff --

Do we need this output here? Is it important to decide whether the test 
passed?




[jira] [Commented] (FLINK-1945) Make python tests less verbose

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011360#comment-15011360
 ] 

ASF GitHub Bot commented on FLINK-1945:
---

Github user tillrohrmann commented on the pull request:

https://github.com/apache/flink/pull/1376#issuecomment-157772432
  
Changes look good. I had only a minor comment. Besides that +1 for merging.


> Make python tests less verbose
> --
>
> Key: FLINK-1945
> URL: https://issues.apache.org/jira/browse/FLINK-1945
> Project: Flink
>  Issue Type: Improvement
>  Components: Python API, Tests
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Minor
>
> Currently, the python tests print a lot of log messages to stdout. 
> Furthermore there seems to be some println statements which clutter the 
> console output. I think that these log messages are not required for the 
> tests and thus should be suppressed. 





[GitHub] flink pull request: [FLINK-1945][py] Python Tests less verbose

2015-11-18 Thread tillrohrmann
Github user tillrohrmann commented on the pull request:

https://github.com/apache/flink/pull/1376#issuecomment-157772432
  
Changes look good. I had only a minor comment. Besides that +1 for merging.




[jira] [Created] (FLINK-3039) Trigger KeyValueState cannot be Scala Int

2015-11-18 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-3039:


 Summary: Trigger KeyValueState cannot be Scala Int
 Key: FLINK-3039
 URL: https://issues.apache.org/jira/browse/FLINK-3039
 Project: Flink
  Issue Type: Improvement
Affects Versions: 0.10.0
Reporter: Fabian Hueske
 Fix For: 1.0.0


It is not possible to use a Scala Int as a KeyValueState in a {{Trigger}} 
function because {{TriggerContext.getKeyValueState}} requires that the state is 
Serializable.

{code}
override def onElement(e: (Int, Short), t: Long, w: TimeWindow, ctx: 
TriggerContext): TriggerResult = {
  val cnt = ctx.getKeyValueState[Int]("cnt", 0)
  ...
}
{code}

will fail with 
{code}
Error:(116, 44) type arguments [Int] do not conform to method 
getKeyValueState's type parameter bounds [S <: java.io.Serializable]
  val cnt = ctx.getKeyValueState[Int]("cnt", 0)
   ^
{code}

If I change {{Int}} to {{Integer}}, the code compiles.
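
A minimal sketch of that workaround, using {{java.lang.Integer}} (which implements {{java.io.Serializable}}) in place of Scala's {{Int}}; everything else mirrors the snippet above:

{code}
override def onElement(e: (Int, Short), t: Long, w: TimeWindow, ctx: TriggerContext): TriggerResult = {
  // java.lang.Integer satisfies the bound [S <: java.io.Serializable],
  // so this compiles where the Scala Int version does not.
  val cnt = ctx.getKeyValueState[java.lang.Integer]("cnt", 0)
  ...
}
{code}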





[GitHub] flink pull request: [FLINK-2989] job cancel button doesn't work on...

2015-11-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1344




[jira] [Commented] (FLINK-1945) Make python tests less verbose

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011407#comment-15011407
 ] 

ASF GitHub Bot commented on FLINK-1945:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/1376#discussion_r45227781
  
--- Diff: 
flink-libraries/flink-python/src/test/python/org/apache/flink/python/api/test_type_deduction.py
 ---
@@ -58,5 +58,5 @@
 
 msg.output()
--- End diff --

it's not needed, but the JavaProgramTestBase expects a job to be run.


> Make python tests less verbose
> --
>
> Key: FLINK-1945
> URL: https://issues.apache.org/jira/browse/FLINK-1945
> Project: Flink
>  Issue Type: Improvement
>  Components: Python API, Tests
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Minor
>
> Currently, the python tests print a lot of log messages to stdout. 
> Furthermore there seems to be some println statements which clutter the 
> console output. I think that these log messages are not required for the 
> tests and thus should be suppressed. 





[jira] [Commented] (FLINK-3002) Add an EitherType to the Java API

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011428#comment-15011428
 ] 

ASF GitHub Bot commented on FLINK-3002:
---

Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1371#issuecomment-157784970
  
Integrating it with the TypeExtractor means that the APIs recognize the 
type and choose the type info properly.

I think @twalthr is probably the best to give pointers on how to integrate 
this with the TypeExtractor.

BTW: Would be nice if we could make custom type integration easier by 
defining an interface/static method that classes can implement to create their 
own type information. That gives users an easy extension point.


> Add an EitherType to the Java API
> -
>
> Key: FLINK-3002
> URL: https://issues.apache.org/jira/browse/FLINK-3002
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API
>Affects Versions: 0.10.0
>Reporter: Stephan Ewen
>Assignee: Vasia Kalavri
> Fix For: 1.0.0
>
>
> Either types are recurring patterns and should be serialized efficiently, so 
> it makes sense to add them to the core Java API.
> Since Java does not have such a type as of Java 8, we would need to add our 
> own version.
> The Scala API handles the Scala Either Type already efficiently. I would not 
> use the Scala Either Type in the Java API, since we are trying to get the 
> {{flink-java}} project "Scala free" for people that don't use Scala and o not 
> want to worry about Scala version matches and mismatches.
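
To illustrate the kind of type under discussion (a sketch only, not Flink's eventual implementation; names and factory methods are assumptions), a minimal Java Either could look like this:

{code}
public abstract class Either<L, R> {

    public static <L, R> Either<L, R> left(L value) { return new Left<L, R>(value); }

    public static <L, R> Either<L, R> right(R value) { return new Right<L, R>(value); }

    public abstract boolean isLeft();

    static final class Left<L, R> extends Either<L, R> {
        final L value;
        Left(L value) { this.value = value; }
        @Override
        public boolean isLeft() { return true; }
    }

    static final class Right<L, R> extends Either<L, R> {
        final R value;
        Right(R value) { this.value = value; }
        @Override
        public boolean isLeft() { return false; }
    }
}
{code}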





[GitHub] flink pull request: [FLINK-1945][py] Python Tests less verbose

2015-11-18 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/1376#discussion_r45227781
  
--- Diff: 
flink-libraries/flink-python/src/test/python/org/apache/flink/python/api/test_type_deduction.py
 ---
@@ -58,5 +58,5 @@
 
 msg.output()
--- End diff --

it's not needed, but the JavaProgramTestBase expects a job to be run.




[GitHub] flink pull request: [FLINK-2897] [runtime] Use distinct initial in...

2015-11-18 Thread tillrohrmann
Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/1292#discussion_r45231684
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/operators/shipping/OutputEmitter.java
 ---
@@ -143,16 +148,24 @@ public OutputEmitter(ShipStrategyType strategy, 
TypeComparator comparator, Pa

// 

 
+   private int[] forward() {
+   return this.channels;
+   }
+
private int[] robin(int numberOfChannels) {
-   if (this.channels == null || this.channels.length != 1) {
-   this.channels = new int[1];
+   int nextChannel = this.nextChannelToSendTo;
+
+   if (nextChannel >= numberOfChannels) {
+   if (nextChannel == numberOfChannels) {
+   nextChannel = 0;
--- End diff --

Shouldn't this case be also covered by `nextChannel %= numberOfChannels`?
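
For illustration, a hedged sketch of the simplification the comment asks about; the surrounding bookkeeping (writing into `this.channels` and advancing the counter) is assumed from the diff, not copied from the actual class:

{code}
private int[] robin(int numberOfChannels) {
    // One modulo covers both the "== numberOfChannels" reset and any larger
    // value, replacing the nested if from the diff above.
    int nextChannel = this.nextChannelToSendTo % numberOfChannels;

    this.channels[0] = nextChannel;
    this.nextChannelToSendTo = nextChannel + 1;
    return this.channels;
}
{code}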




[jira] [Created] (FLINK-3041) Twitter Streaming Description section of Streaming Programming guide refers to an incorrect example 'TwitterLocal'

2015-11-18 Thread Suneel Marthi (JIRA)
Suneel Marthi created FLINK-3041:


 Summary: Twitter Streaming Description section of Streaming 
Programming guide refers to an incorrect example 'TwitterLocal'
 Key: FLINK-3041
 URL: https://issues.apache.org/jira/browse/FLINK-3041
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Examples, Streaming
Affects Versions: 0.10.0
Reporter: Suneel Marthi
Assignee: Suneel Marthi
Priority: Minor
 Fix For: 0.10.1


Twitter Streaming Description section of Streaming Programming guide refers to 
an incorrect example 'TwitterLocal'; it should be 'TwitterStream'. Fix other 
typos in the Twitter streaming description and clean up the code.





[GitHub] flink pull request: [FLINK-2989] job cancel button doesn't work on...

2015-11-18 Thread tillrohrmann
Github user tillrohrmann commented on the pull request:

https://github.com/apache/flink/pull/1344#issuecomment-157775857
  
LGTM




[jira] [Commented] (FLINK-2989) Job Cancel button doesn't work on Yarn

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011381#comment-15011381
 ] 

ASF GitHub Bot commented on FLINK-2989:
---

Github user tillrohrmann commented on the pull request:

https://github.com/apache/flink/pull/1344#issuecomment-157775857
  
LGTM


> Job Cancel button doesn't work on Yarn
> --
>
> Key: FLINK-2989
> URL: https://issues.apache.org/jira/browse/FLINK-2989
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Reporter: Sachin Goel
>Assignee: Maximilian Michels
>
> The newly added Cancel button doesn't work on Yarn, when accessing via Yarn 
> Web Application proxy. It works fine when we're directly on the web monitor.
> The reason is Yarn doesn't allow DELETE requests to the AM yet. It should be 
> enabled in 2.8.0. [YARN-2031, YARN-2084.]
> A workaround for now can be to use a {{GET}} method instead of {{DELETE}}, 
> but that breaks the conventions of REST.





[GitHub] flink pull request: fix pos assignment of instance

2015-11-18 Thread tillrohrmann
Github user tillrohrmann commented on the pull request:

https://github.com/apache/flink/pull/1345#issuecomment-157783778
  
Thanks for spotting the error @fangmu. 

I assume that this error never occurred because we rarely start multiple 
instances on the same host. It is a crucial fix imho and should be part of 
`0.10.1`. 




[jira] [Commented] (FLINK-2897) Use distinct initial indices for OutputEmitter round-robin

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011491#comment-15011491
 ] 

ASF GitHub Bot commented on FLINK-2897:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/1292#discussion_r45231684
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/operators/shipping/OutputEmitter.java
 ---
@@ -143,16 +148,24 @@ public OutputEmitter(ShipStrategyType strategy, 
TypeComparator comparator, Pa

// 

 
+   private int[] forward() {
+   return this.channels;
+   }
+
private int[] robin(int numberOfChannels) {
-   if (this.channels == null || this.channels.length != 1) {
-   this.channels = new int[1];
+   int nextChannel = this.nextChannelToSendTo;
+
+   if (nextChannel >= numberOfChannels) {
+   if (nextChannel == numberOfChannels) {
+   nextChannel = 0;
--- End diff --

Shouldn't this case be also covered by `nextChannel %= numberOfChannels`?


> Use distinct initial indices for OutputEmitter round-robin
> --
>
> Key: FLINK-2897
> URL: https://issues.apache.org/jira/browse/FLINK-2897
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Runtime
>Affects Versions: 0.10.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>
> Currently, when performing a round-robin partitioning, each task will 
> sequentially partition starting with partition "1". This is fine in the usual 
> case where the number of partitioned objects greatly exceeds the number of 
> channels. However, in the case where the number of objects is relatively few 
> (each, perhaps, requiring a large computation or access to an external 
> system) it would be much better to begin partitioning at distinct indices 
> (the task index).





[jira] [Created] (FLINK-3040) Add docs describing how to configure State Backends

2015-11-18 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-3040:
---

 Summary: Add docs describing how to configure State Backends
 Key: FLINK-3040
 URL: https://issues.apache.org/jira/browse/FLINK-3040
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 0.10.0
Reporter: Stephan Ewen
Assignee: Stephan Ewen
 Fix For: 1.0.0, 0.10.1








[jira] [Commented] (FLINK-2989) Job Cancel button doesn't work on Yarn

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011505#comment-15011505
 ] 

ASF GitHub Bot commented on FLINK-2989:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1344


> Job Cancel button doesn't work on Yarn
> --
>
> Key: FLINK-2989
> URL: https://issues.apache.org/jira/browse/FLINK-2989
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Reporter: Sachin Goel
>Assignee: Maximilian Michels
>
> The newly added Cancel button doesn't work on Yarn, when accessing via Yarn 
> Web Application proxy. It works fine when we're directly on the web monitor.
> The reason is Yarn doesn't allow DELETE requests to the AM yet. It should be 
> enabled in 2.8.0. [YARN-2031, YARN-2084.]
> A workaround for now can be to use a {{GET}} method instead of {{DELETE}}, 
> but that breaks the conventions of REST.





[jira] [Commented] (FLINK-3041) Twitter Streaming Description section of Streaming Programming guide refers to an incorrect example 'TwitterLocal'

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011526#comment-15011526
 ] 

ASF GitHub Bot commented on FLINK-3041:
---

GitHub user smarthi opened a pull request:

https://github.com/apache/flink/pull/1379

FLINK-3041: Twitter Streaming Description section of Streaming Programming 
guide refers to an incorrect example 'TwitterLocal'



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/smarthi/flink master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1379.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1379


commit a3ce1557080d716aa415615466cc64c0797a3ea5
Author: smarthi 
Date:   2015-11-18T17:48:38Z

FLINK-3041: Twitter Streaming Description section of Streaming Programming 
guide refers to an incorrect example 'TwitterLocal'




> Twitter Streaming Description section of Streaming Programming guide refers 
> to an incorrect example 'TwitterLocal'
> --
>
> Key: FLINK-3041
> URL: https://issues.apache.org/jira/browse/FLINK-3041
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Examples, Streaming
>Affects Versions: 0.10.0
>Reporter: Suneel Marthi
>Assignee: Suneel Marthi
>Priority: Minor
> Fix For: 0.10.1
>
>
> Twitter Streaming Description section of Streaming Programming guide refers 
> to an incorrect example 'TwitterLocal'; it should be 'TwitterStream'. Fix 
> other typos in the Twitter streaming description and clean up the code.





[jira] [Commented] (FLINK-3021) Job submission times out due to classloading issue on JobManager

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011219#comment-15011219
 ] 

ASF GitHub Bot commented on FLINK-3021:
---

Github user mxm commented on the pull request:

https://github.com/apache/flink/pull/1368#issuecomment-157752423
  
Very neat how you created a test JAR file with the custom InputFormat.

+1 for merging.


> Job submission times out due to classloading issue on JobManager
> 
>
> Key: FLINK-3021
> URL: https://issues.apache.org/jira/browse/FLINK-3021
> Project: Flink
>  Issue Type: Bug
>  Components: JobManager
>Affects Versions: 0.10.0
>Reporter: Robert Metzger
>Assignee: Till Rohrmann
>Priority: Critical
>
> A user reported the following issue when submitting a very simple job using 
> the {{DataStream}} API:
> {code}
> Caused by: org.apache.flink.runtime.client.JobExecutionException: 
> Communication with JobManager failed: Job submission to the JobManager timed 
> out.
>   at 
> org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:141)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:368)
>   ... 13 more
> Caused by: 
> org.apache.flink.runtime.client.JobClientActorSubmissionTimeoutException: Job 
> submission to the JobManager timed out.
>   at 
> org.apache.flink.runtime.client.JobClientActor.handleMessage(JobClientActor.java:255)
>   at 
> org.apache.flink.runtime.akka.FlinkUntypedActor.handleLeaderSessionID(FlinkUntypedActor.java:88)
>   at 
> org.apache.flink.runtime.akka.FlinkUntypedActor.onReceive(FlinkUntypedActor.java:68)
>   at 
> akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:167)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
> {code}
> The problem is that akka cannot deserialize the job submit message on the 
> JobManager. From the logs, the issue becomes apparent:
> {code}
> 22:14:12,964 DEBUG akka.serialization.Serialization(akka://flink) 
>- Using serializer[akka.serialization.JavaSerializer] for message 
> [akka.actor.ActorIdentity]
> 22:14:12,995 DEBUG akka.serialization.Serialization(akka://flink) 
>- Using serializer[akka.serialization.JavaSerializer] for message 
> [java.lang.Integer]
> 22:14:13,007 DEBUG org.apache.flink.runtime.blob.BlobServerConnection 
>- Received PUT request for content addressable BLOB
> 22:14:13,134 ERROR akka.remote.EndpointWriter 
>- AssociationError [akka.tcp://flink@127.0.0.1:6123] <- 
> [akka.tcp://flink@127.0.0.1:58424]: Error [com.dataartisans.SimpleEntity] [
> java.lang.ClassNotFoundException: com.dataartisans.SimpleEntity
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:274)
>   at java.io.ObjectInputStream.resolveClass(ObjectInputStream.java:625)
>   at 
> akka.util.ClassLoaderObjectInputStream.resolveClass(ClassLoaderObjectInputStream.scala:19)
>   at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1612)
>   at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
>   at java.io.ObjectInputStream.readClass(ObjectInputStream.java:1483)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1333)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
>   at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
>   at java.util.HashMap.readObject(HashMap.java:1180)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> 

[GitHub] flink pull request: [FLINK-2488][FLINK-2496] Expose Task Manager c...

2015-11-18 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1026#issuecomment-157753581
  
Sorry for dropping this. The feature is still interesting. I have the 
feeling it can probably be implemented a bit simpler...




[jira] [Commented] (FLINK-2937) Typo in Quickstart->Scala API->Alternative Build Tools: SBT

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011284#comment-15011284
 ] 

ASF GitHub Bot commented on FLINK-2937:
---

Github user zentol commented on the pull request:

https://github.com/apache/flink/pull/1373#issuecomment-157763931
  
merging this.


> Typo in Quickstart->Scala API->Alternative Build Tools: SBT
> ---
>
> Key: FLINK-2937
> URL: https://issues.apache.org/jira/browse/FLINK-2937
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 0.9.0, 0.10.0
>Reporter: Theodore Vasiloudis
>Assignee: Chesnay Schepler
>Priority: Trivial
>  Labels: docs, spelling
> Fix For: 1.0.0
>
>
> Link to text: 
> https://ci.apache.org/projects/flink/flink-docs-master/quickstart/scala_api_quickstart.html#alternative-build-tools-sbt
> Second paragraph reads:
> {quote}
> Now the application can be executed by sbt run. By default SBT runs an 
> application in the same JVM itself is running in. This can lead to *lass 
> loading* issues with Flink. To avoid these, append the following line to 
> build.sbt:
> {quote}
> Not sure what the author intended to write here.
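
The build.sbt line itself is not quoted above. The standard sbt setting for running an application in a forked JVM (presumably what the guide appends, though that is an assumption here) is:

{code}
fork in run := true
{code}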





[jira] [Resolved] (FLINK-2249) ExecutionEnvironment: Ignore calls to execute() if no data sinks defined

2015-11-18 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler resolved FLINK-2249.
-
Resolution: Not A Problem

> ExecutionEnvironment: Ignore calls to execute() if no data sinks defined
> 
>
> Key: FLINK-2249
> URL: https://issues.apache.org/jira/browse/FLINK-2249
> Project: Flink
>  Issue Type: Improvement
>  Components: Java API, Scala API
>Affects Versions: 0.9
>Reporter: Maximilian Michels
>Assignee: Chesnay Schepler
>
> The basic skeleton of a Flink program looks like this: 
> {code}
> ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
> // bootstrap DataSet
> DataSet<..> ds = env.fromElements(1,2,3,4);
> // perform transformations
> ..
> // define sinks, e.g.
> ds.writeToTextFile("/some/path");
> // execute
> env.execute()
> {code}
> First thing users do is to change {{ds.writeToTextFile("/some/path");}} into 
> {{ds.print();}}. But that fails with an Exception ("No new data sinks 
> defined...").
> In FLINK-2026 we made this exception message easier to understand. However, 
> users still don't understand what is happening, especially because they see 
> Flink executing and then failing.
> I propose to ignore calls to execute() when no sinks are defined. Instead, we 
> should just print a warning: "Detected call to execute without any data 
> sinks. Not executing."
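
A minimal sketch of the proposed behaviour; the field and method names are assumptions, not Flink's actual internals:

{code}
public void execute(String jobName) throws Exception {
    // Proposed: warn and return instead of throwing when no sinks were defined.
    if (this.sinks.isEmpty()) {
        LOG.warn("Detected call to execute without any data sinks. Not executing.");
        return;
    }
    executePlan(createProgramPlan(jobName));
}
{code}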





[jira] [Commented] (FLINK-3005) Commons-collections object deserialization remote command execution vulnerability

2015-11-18 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011226#comment-15011226
 ] 

Stephan Ewen commented on FLINK-3005:
-

Important issue - we need to banish that from our classpath in any way.

Right now, I think only one class makes use of that: {{JoinHashMap}} 
(https://github.com/apache/flink/blob/6e0e67d2e0d5180d6fba492e8ab9cc8fb18fdf68/flink-core/src/main/java/org/apache/flink/api/common/operators/util/JoinHashMap.java)

We need to migrate that (to a custom implementation) and then add a maven 
plugin to enforce that this is not packaged into the distribution.

[~tedyu] Would you be interested in this?
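
A hedged sketch of such an enforcement with the maven-enforcer-plugin's {{bannedDependencies}} rule (exact placement and coordinates in the Flink build are assumptions):

{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>ban-commons-collections</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <bannedDependencies>
            <excludes>
              <exclude>commons-collections:commons-collections</exclude>
            </excludes>
          </bannedDependencies>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}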

> Commons-collections object deserialization remote command execution 
> vulnerability
> -
>
> Key: FLINK-3005
> URL: https://issues.apache.org/jira/browse/FLINK-3005
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>
> http://foxglovesecurity.com/2015/11/06/what-do-weblogic-websphere-jboss-jenkins-opennms-and-your-application-have-in-common-this-vulnerability/
> TL;DR: If you have commons-collections on your classpath and accept and 
> process Java object serialization data, then you may have an exploitable 
> remote command execution vulnerability.
> Brief search in code base for ObjectInputStream reveals several places where 
> the vulnerability exists.





[jira] [Commented] (FLINK-3005) Commons-collections object deserialization remote command execution vulnerability

2015-11-18 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011251#comment-15011251
 ] 

Ted Yu commented on FLINK-3005:
---

Would updating to Commons Collections 3.2.2 alleviate the remote code execution 
vulnerability?

See https://issues.apache.org/jira/browse/COLLECTIONS-580
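
If the upgrade route is taken, a hedged sketch of forcing the patched version via dependencyManagement (assuming commons-collections is only pulled in transitively):

{code}
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>commons-collections</groupId>
      <artifactId>commons-collections</artifactId>
      <version>3.2.2</version>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}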

> Commons-collections object deserialization remote command execution 
> vulnerability
> -
>
> Key: FLINK-3005
> URL: https://issues.apache.org/jira/browse/FLINK-3005
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>
> http://foxglovesecurity.com/2015/11/06/what-do-weblogic-websphere-jboss-jenkins-opennms-and-your-application-have-in-common-this-vulnerability/
> TL;DR: If you have commons-collections on your classpath and accept and 
> process Java object serialization data, then you may have an exploitable 
> remote command execution vulnerability.
> Brief search in code base for ObjectInputStream reveals several places where 
> the vulnerability exists.





[jira] [Commented] (FLINK-2233) InputFormat nextRecord Javadocs out of sync

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011283#comment-15011283
 ] 

ASF GitHub Bot commented on FLINK-2233:
---

Github user zentol commented on the pull request:

https://github.com/apache/flink/pull/1374#issuecomment-157763866
  
merging this.


> InputFormat nextRecord Javadocs out of sync
> ---
>
> Key: FLINK-2233
> URL: https://issues.apache.org/jira/browse/FLINK-2233
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 0.9, 0.10.0
>Reporter: Ufuk Celebi
>Assignee: Chesnay Schepler
>
> {code}
> /**
>* Tries to read the next pair from the input. By using the return 
> value invalid records in the
>* input can be skipped.
>* 
>* When this method is called, the input format it guaranteed to be 
> opened.
>* 
>* @param reuse Object that may be reused.
>* @return Indicates whether the record could be successfully read. A 
> return value of true
>* indicates that the read was successful, a return value of 
> false indicates that the
>* current record was not read successfully and should be 
> skipped.
>* 
>* @throws IOException Thrown, if an I/O error occurred.
>*/
>   OT nextRecord(OT reuse) throws IOException;
> {code}





[GitHub] flink pull request: [FLINK-2937] Fix Typo in Scala Quickstart Guid...

2015-11-18 Thread zentol
Github user zentol commented on the pull request:

https://github.com/apache/flink/pull/1373#issuecomment-157763931
  
merging this.




[jira] [Commented] (FLINK-3005) Commons-collections object deserialization remote command execution vulnerability

2015-11-18 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011282#comment-15011282
 ] 

Stephan Ewen commented on FLINK-3005:
-

Possibly, I am not an expert on this ;-)

> Commons-collections object deserialization remote command execution 
> vulnerability
> -
>
> Key: FLINK-3005
> URL: https://issues.apache.org/jira/browse/FLINK-3005
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>
> http://foxglovesecurity.com/2015/11/06/what-do-weblogic-websphere-jboss-jenkins-opennms-and-your-application-have-in-common-this-vulnerability/
> TL;DR: If you have commons-collections on your classpath and accept and 
> process Java object serialization data, then you may have an exploitable 
> remote command execution vulnerability.
> Brief search in code base for ObjectInputStream reveals several places where 
> the vulnerability exists.





[GitHub] flink pull request: [FLINK-2233] Update InputFormat nextRecord jav...

2015-11-18 Thread zentol
Github user zentol commented on the pull request:

https://github.com/apache/flink/pull/1374#issuecomment-157763866
  
merging this.




[jira] [Commented] (FLINK-1945) Make python tests less verbose

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011324#comment-15011324
 ] 

ASF GitHub Bot commented on FLINK-1945:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/1376#discussion_r45222496
  
--- Diff: flink-libraries/flink-python/pom.xml ---
@@ -48,6 +48,14 @@ under the License.
 
 
 
+
+org.apache.maven.plugins
+maven-surefire-plugin
+2.17
+
+1
--- End diff --

scratch that, works without it as well. gonna remove it.


> Make python tests less verbose
> --
>
> Key: FLINK-1945
> URL: https://issues.apache.org/jira/browse/FLINK-1945
> Project: Flink
>  Issue Type: Improvement
>  Components: Python API, Tests
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Minor
>
> Currently, the python tests print a lot of log messages to stdout. 
> Furthermore there seems to be some println statements which clutter the 
> console output. I think that these log messages are not required for the 
> tests and thus should be suppressed. 





[jira] [Commented] (FLINK-2488) Expose attemptNumber in RuntimeContext

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011227#comment-15011227
 ] 

ASF GitHub Bot commented on FLINK-2488:
---

Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1026#issuecomment-157753581
  
Sorry for dropping this. The feature is still interesting. I have the 
feeling it can probably be implemented a bit simpler...


> Expose attemptNumber in RuntimeContext
> --
>
> Key: FLINK-2488
> URL: https://issues.apache.org/jira/browse/FLINK-2488
> Project: Flink
>  Issue Type: Improvement
>  Components: JobManager, TaskManager
>Affects Versions: 0.10.0
>Reporter: Robert Metzger
>Assignee: Sachin Goel
>Priority: Minor
>
> It would be nice to expose the attemptNumber of a task in the 
> {{RuntimeContext}}. 
> This would allow user code to behave differently in restart scenarios.
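
A minimal sketch of how user code might use such a value once exposed; the accessor name {{getAttemptNumber()}} is an assumption, since no such API existed at the time of this thread:

{code}
import org.apache.flink.api.common.functions.RichMapFunction;

public class RetryAwareMapper extends RichMapFunction<String, String> {

    @Override
    public String map(String value) throws Exception {
        // Hypothetical accessor: react differently when the task was restarted.
        if (getRuntimeContext().getAttemptNumber() > 0) {
            return "retried: " + value;
        }
        return value;
    }
}
{code}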





[jira] [Commented] (FLINK-2233) InputFormat nextRecord Javadocs out of sync

2015-11-18 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011292#comment-15011292
 ] 

Chesnay Schepler commented on FLINK-2233:
-

Fixed in 7beb0110a5c7da5b17d03e3095cc12df28fbd8f9.

> InputFormat nextRecord Javadocs out of sync
> ---
>
> Key: FLINK-2233
> URL: https://issues.apache.org/jira/browse/FLINK-2233
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 0.9, 0.10.0
>Reporter: Ufuk Celebi
>Assignee: Chesnay Schepler
>
> {code}
> /**
>* Tries to read the next pair from the input. By using the return 
> value invalid records in the
>* input can be skipped.
>* 
>* When this method is called, the input format it guaranteed to be 
> opened.
>* 
>* @param reuse Object that may be reused.
>* @return Indicates whether the record could be successfully read. A 
> return value of true
>* indicates that the read was successful, a return value of 
> false indicates that the
>* current record was not read successfully and should be 
> skipped.
>* 
>* @throws IOException Thrown, if an I/O error occurred.
>*/
>   OT nextRecord(OT reuse) throws IOException;
> {code}





[jira] [Closed] (FLINK-2937) Typo in Quickstart->Scala API->Alternative Build Tools: SBT

2015-11-18 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-2937.
---
Resolution: Fixed

> Typo in Quickstart->Scala API->Alternative Build Tools: SBT
> ---
>
> Key: FLINK-2937
> URL: https://issues.apache.org/jira/browse/FLINK-2937
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 0.9.0, 0.10.0
>Reporter: Theodore Vasiloudis
>Assignee: Chesnay Schepler
>Priority: Trivial
>  Labels: docs, spelling
> Fix For: 1.0.0
>
>
> Link to text: 
> https://ci.apache.org/projects/flink/flink-docs-master/quickstart/scala_api_quickstart.html#alternative-build-tools-sbt
> Second paragraph reads:
> {quote}
> Now the application can be executed by sbt run. By default SBT runs an 
> application in the same JVM itself is running in. This can lead to *lass 
> loading* issues with Flink. To avoid these, append the following line to 
> build.sbt:
> {quote}
> Not sure what the author intended to write here.





[GitHub] flink pull request: [FLINK-3020][streaming] set number of task slo...

2015-11-18 Thread tillrohrmann
Github user tillrohrmann commented on the pull request:

https://github.com/apache/flink/pull/1360#issuecomment-157766781
  
I think the maximum parallelism of the job should be taken if the user has 
not specified a different parallelism than the default one (`-1`). I think 
that's also how the batch part does it (here the `LocalExecutor` has a field 
`taskManagerNumSlots` which can be set). Besides that, looks good to merge.




[jira] [Commented] (FLINK-3020) Local streaming execution: set number of task manager slots to the maximum parallelism

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011311#comment-15011311
 ] 

ASF GitHub Bot commented on FLINK-3020:
---

Github user tillrohrmann commented on the pull request:

https://github.com/apache/flink/pull/1360#issuecomment-157766781
  
I think the maximum parallelism of the job should be taken if the user has 
not specified a different parallelism than the default one (`-1`). I think 
that's also how the batch part does it (here the `LocalExecutor` has a field 
`taskManagerNumSlots` which can be set). Besides that, looks good to merge.


> Local streaming execution: set number of task manager slots to the maximum 
> parallelism
> --
>
> Key: FLINK-3020
> URL: https://issues.apache.org/jira/browse/FLINK-3020
> Project: Flink
>  Issue Type: Bug
>  Components: Local Runtime
>Affects Versions: 0.10.0
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>Priority: Minor
> Fix For: 1.0.0, 0.10.1
>
>
> Quite an inconvenience is the local execution configuration behavior. It sets 
> the number of task slots of the mini cluster to the default parallelism. This 
> causes problems if you use {{setParallelism(parallelism)}} on an operator and 
> set a parallelism larger than the default parallelism.
> {noformat}
> Caused by: 
> org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: 
> Not enough free slots available to run the job. You can decrease the operator 
> parallelism or increase the number of slots per TaskManager in the 
> configuration. Task to schedule: < Attempt #0 (Flat Map (9/100)) @ 
> (unassigned) - [SCHEDULED] > with groupID < fa7240ee1fed08bd7e6278899db3e838 
> > in sharing group < SlotSharingGroup [f3d578e9819be9c39ceee86cf5eb8c08, 
> 8fa330746efa1d034558146e4604d0b4, fa7240ee1fed08bd7e6278899db3e838] >. 
> Resources available to scheduler: Number of instances=1, total number of 
> slots=8, available slots=0
>   at 
> org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleTask(Scheduler.java:256)
>   at 
> org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleImmediately(Scheduler.java:131)
>   at 
> org.apache.flink.runtime.executiongraph.Execution.scheduleForExecution(Execution.java:298)
>   at 
> org.apache.flink.runtime.executiongraph.ExecutionVertex.scheduleForExecution(ExecutionVertex.java:458)
>   at 
> org.apache.flink.runtime.executiongraph.ExecutionJobVertex.scheduleAll(ExecutionJobVertex.java:322)
>   at 
> org.apache.flink.runtime.executiongraph.ExecutionGraph.scheduleForExecution(ExecutionGraph.java:686)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply$mcV$sp(JobManager.scala:982)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:962)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:962)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   ... 2 more
> {noformat}
> I propose to change this behavior to setting the number of task slots to the 
> maximum parallelism present in the user program.
> What do you think?
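
Until such a change lands, a hedged workaround sketch: create the local environment with a parallelism at least as large as the largest operator parallelism, so the mini cluster receives enough slots. This assumes the slot count follows the environment's parallelism as described above; the identity map is just a stand-in operator:

{code}
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LocalSlotsExample {
    public static void main(String[] args) throws Exception {
        // Ask for a local environment whose parallelism (and, per the issue
        // above, the mini cluster's slot count) covers the largest operator
        // parallelism used below.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironment(100);

        env.fromElements("a", "b", "c")
           .map(new MapFunction<String, String>() {
               @Override
               public String map(String value) {
                   return value;
               }
           })
           .setParallelism(100)
           .print();

        env.execute("local-slots-example");
    }
}
{code}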





[GitHub] flink pull request: [FLINK-1945][py] Python Tests less verbose

2015-11-18 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/1376#discussion_r45222496
  
--- Diff: flink-libraries/flink-python/pom.xml ---
@@ -48,6 +48,14 @@ under the License.
 
 
 
+
+org.apache.maven.plugins
+maven-surefire-plugin
+2.17
+
+1
--- End diff --

scratch that, works without it as well. gonna remove it.




[jira] [Closed] (FLINK-3036) Gelly's Graph.fromCsvReader method returns wrongly parameterized Graph

2015-11-18 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-3036.

Resolution: Fixed

Fixed via b1cf626d8ef569479d8bbc5edd5d3331193f5867

> Gelly's Graph.fromCsvReader method returns wrongly parameterized Graph
> --
>
> Key: FLINK-3036
> URL: https://issues.apache.org/jira/browse/FLINK-3036
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly
>Affects Versions: 0.10.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>
> The Scala method {{Graph.fromCsvReader}} of Gelly returns a wrongly typed 
> {{Graph}} instance. The problem is that no return type has been explicitly 
> defined for the method. Additionally, the method returns fundamentally 
> incompatible types depending on the given parameters. So for example, the 
> method can return a {{Graph[Long, Long, Long]}} if a vertex and edge file is 
> specified (in this case with value type {{Long}}). If the vertex file is not 
> specified and no vertex value initializer is given, then the return type is 
> {{Graph[Long, NullValue, Long]}}. Since {{NullValue}} and {{Long}} have 
> nothing in common, Scala's type inference infers that the {{fromCsvReader}} 
> method must have a return type {{Graph[Long, t  >: Long with NullValue, 
> Long]}} with {{t}} being a supertype of {{Long with NullValue}}. This type is 
> not useful at all, since there is no such type. As a consequence, the user 
> has to cast the resulting {{Graph}} to have either the type {{Graph[Long, 
> NullValue, Long]}} or {{Graph[Long, Long, Long]}} if he wants to do something 
> more elaborate than just collecting the edges for example. 
> This can be especially confusing because one usually writes something like
> {code}
> val graph = Graph.fromCsvReader[Long, Double, Double](...)
> graph.run(new PageRank(...))
> {code}
> and does not see that the type of {{graph}} is {{Graph[Long, t >: Double with 
> NullValue, u >: Double with NullValue]}}.
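
A hedged sketch of the cast mentioned above as the current workaround (arguments elided just as in the snippet above):

{code}
// Pin the intended type explicitly until fromCsvReader declares its return type.
val graph = Graph.fromCsvReader[Long, Double, Double](...)
  .asInstanceOf[Graph[Long, Double, Double]]

graph.run(new PageRank(...))
{code}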





[GitHub] flink pull request: [FLINK-3021] Fix class loading issue for strea...

2015-11-18 Thread mxm
Github user mxm commented on the pull request:

https://github.com/apache/flink/pull/1368#issuecomment-157752423
  
Very neat how you created a test JAR file with the custom InputFormat.

+1 for merging.




[GitHub] flink pull request: [FLINK-2860] [ml] [docs] The mlr object from t...

2015-11-18 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1310#issuecomment-157756816
  
Looks correct, +1 to merge




[GitHub] flink pull request: Fix JavaDoc of ElasticsearchSink

2015-11-18 Thread tillrohrmann
Github user tillrohrmann commented on the pull request:

https://github.com/apache/flink/pull/1367#issuecomment-157763022
  
CI failure is unrelated. Will merge this PR.




[jira] [Closed] (FLINK-2233) InputFormat nextRecord Javadocs out of sync

2015-11-18 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-2233.
---
Resolution: Fixed

> InputFormat nextRecord Javadocs out of sync
> ---
>
> Key: FLINK-2233
> URL: https://issues.apache.org/jira/browse/FLINK-2233
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 0.9, 0.10.0
>Reporter: Ufuk Celebi
>Assignee: Chesnay Schepler
>
> {code}
> /**
>* Tries to read the next pair from the input. By using the return 
> value invalid records in the
>* input can be skipped.
>* 
>* When this method is called, the input format it guaranteed to be 
> opened.
>* 
>* @param reuse Object that may be reused.
>* @return Indicates whether the record could be successfully read. A 
> return value of true
>* indicates that the read was successful, a return value of 
> false indicates that the
>* current record was not read successfully and should be 
> skipped.
>* 
>* @throws IOException Thrown, if an I/O error occurred.
>*/
>   OT nextRecord(OT reuse) throws IOException;
> {code}





[GitHub] flink pull request: [FLINK-2233] Update InputFormat nextRecord jav...

2015-11-18 Thread zentol
Github user zentol closed the pull request at:

https://github.com/apache/flink/pull/1374




[GitHub] flink pull request: [FLINK-3036] [gelly] Fix Graph.fromCsvReader m...

2015-11-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1370




[jira] [Resolved] (FLINK-3013) Incorrect package declaration in GellyScalaAPICompletenessTest.scala

2015-11-18 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann resolved FLINK-3013.
--
Resolution: Fixed

Fixed via ee64892591de67a60023fd202704370cd18d1b26

> Incorrect package declaration in GellyScalaAPICompletenessTest.scala
> 
>
> Key: FLINK-3013
> URL: https://issues.apache.org/jira/browse/FLINK-3013
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly, Scala API
>Affects Versions: 0.10.0
>Reporter: Suneel Marthi
>Assignee: Suneel Marthi
> Fix For: 1.0.0
>
>
> Incorrect package declaration in GellyScalaAPICompletenessTest.scala, 
> presently reads as:
> org.apache.flink.streaming.api.scala => org.apache.flink.graph.scala.test
> Also fix failing Gelly-Scala tests.





[jira] [Commented] (FLINK-3013) Incorrect package declaration in GellyScalaAPICompletenessTest.scala

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011148#comment-15011148
 ] 

ASF GitHub Bot commented on FLINK-3013:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1356


> Incorrect package declaration in GellyScalaAPICompletenessTest.scala
> 
>
> Key: FLINK-3013
> URL: https://issues.apache.org/jira/browse/FLINK-3013
> Project: Flink
>  Issue Type: Bug
>  Components: Gelly, Scala API
>Affects Versions: 0.10.0
>Reporter: Suneel Marthi
>Assignee: Suneel Marthi
> Fix For: 1.0.0
>
>
> Incorrect package declaration in GellyScalaAPICompletenessTest.scala, 
> presently reads as:
> org.apache.flink.streaming.api.scala => org.apache.flink.graph.scala.test
> Also fix failing Gelly-Scala tests.





[GitHub] flink pull request: [FLINK-2249] Ignore calls to execute without s...

2015-11-18 Thread zentol
Github user zentol closed the pull request at:

https://github.com/apache/flink/pull/1375




[jira] [Commented] (FLINK-2860) The mlr object from the FlinkML Getting Started code example uses an undefined argument

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011245#comment-15011245
 ] 

ASF GitHub Bot commented on FLINK-2860:
---

Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1310#issuecomment-157756816
  
Looks correct, +1 to merge


> The mlr object from the FlinkML Getting Started code example uses an 
> undefined argument
> ---
>
> Key: FLINK-2860
> URL: https://issues.apache.org/jira/browse/FLINK-2860
> Project: Flink
>  Issue Type: Improvement
>  Components: Machine Learning Library
>Affects Versions: 0.9.1
>Reporter: Theodore Vasiloudis
>Assignee: Chiwan Park
>Priority: Trivial
>  Labels: ML
>
> The [getting started guide code 
> example|https://ci.apache.org/projects/flink/flink-docs-master/libs/ml/#getting-started]
>  uses the following code:
> {code}
> val trainingData: DataSet[LabeledVector] = ...
> val testingData: DataSet[Vector] = ...
> val mlr = MultipleLinearRegression()
>   .setStepsize(1.0)
>   .setIterations(100)
>   .setConvergenceThreshold(0.001)
> mlr.fit(trainingData, parameters)
> {code}
> The call to {{mlr.fit()}} uses a {{parameters}} argument that is unnecessary; 
> we should remove it.
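
For clarity, the corrected call in the guide would simply drop that argument:

{code}
mlr.fit(trainingData)
{code}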





[GitHub] flink pull request:

2015-11-18 Thread mxm
Github user mxm commented on the pull request:


https://github.com/apache/flink/commit/bf29de981c2bcd5cb5d33c68b158c95c8820f43d#commitcomment-14475546
  
@tillrohrmann @alexeyegorov Actually there are 1.0-SNAPSHOT binaries 
available. See 
http://flink.apache.org/contribute-code.html#snapshots-nightly-builds

Please keep in mind that these are targeted at developers and not 
officially released.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: [FLINK-1945][py] Python Tests less verbose

2015-11-18 Thread tillrohrmann
Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/1376#discussion_r45221471
  
--- Diff: flink-libraries/flink-python/pom.xml ---
@@ -48,6 +48,14 @@ under the License.
 
 
 
+
+org.apache.maven.plugins
+maven-surefire-plugin
+2.17
+
+1
--- End diff --

Why do we execute the Python tests sequentially?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: [FLINK-2937] Fix Typo in Scala Quickstart Guid...

2015-11-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1373


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-3002) Add an EitherType to the Java API

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011315#comment-15011315
 ] 

ASF GitHub Bot commented on FLINK-3002:
---

Github user vasia commented on the pull request:

https://github.com/apache/flink/pull/1371#issuecomment-157767153
  
Thanks a lot for the review. I've updated the PR.
@gyfora @StephanEwen what do you mean by adding this to the type extractor? 
What do I need to do?
Also, any comment on how to document this (see the PR description)?

Thanks!


> Add an EitherType to the Java API
> -
>
> Key: FLINK-3002
> URL: https://issues.apache.org/jira/browse/FLINK-3002
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API
>Affects Versions: 0.10.0
>Reporter: Stephan Ewen
>Assignee: Vasia Kalavri
> Fix For: 1.0.0
>
>
> Either types are recurring patterns and should be serialized efficiently, so 
> it makes sense to add them to the core Java API.
> Since Java does not have such a type as of Java 8, we would need to add our 
> own version.
> The Scala API handles the Scala Either Type already efficiently. I would not 
> use the Scala Either Type in the Java API, since we are trying to get the 
> {{flink-java}} project "Scala free" for people that don't use Scala and o not 
> want to worry about Scala version matches and mismatches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-3002] Add Either type to the Java API

2015-11-18 Thread vasia
Github user vasia commented on the pull request:

https://github.com/apache/flink/pull/1371#issuecomment-157767153
  
Thanks a lot for the review. I've updated the PR.
@gyfora @StephanEwen what do you mean by adding this to the type extractor? 
What do I need to do?
Also, any comment on how to document this (see the PR description)?

Thanks!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-1945) Make python tests less verbose

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011313#comment-15011313
 ] 

ASF GitHub Bot commented on FLINK-1945:
---

Github user tillrohrmann commented on a diff in the pull request:

https://github.com/apache/flink/pull/1376#discussion_r45221471
  
--- Diff: flink-libraries/flink-python/pom.xml ---
@@ -48,6 +48,14 @@ under the License.
 
 
 
+
+org.apache.maven.plugins
+maven-surefire-plugin
+2.17
+
+1
--- End diff --

Why do we execute the Python tests sequentially?


> Make python tests less verbose
> --
>
> Key: FLINK-1945
> URL: https://issues.apache.org/jira/browse/FLINK-1945
> Project: Flink
>  Issue Type: Improvement
>  Components: Python API, Tests
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Minor
>
> Currently, the python tests print a lot of log messages to stdout. 
> Furthermore, there seem to be some println statements which clutter the 
> console output. I think that these log messages are not required for the 
> tests and thus should be suppressed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-1945) Make python tests less verbose

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011321#comment-15011321
 ] 

ASF GitHub Bot commented on FLINK-1945:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/1376#discussion_r45221988
  
--- Diff: flink-libraries/flink-python/pom.xml ---
@@ -48,6 +48,14 @@ under the License.
 
 
 
+
+org.apache.maven.plugins
+maven-surefire-plugin
+2.17
+
+1
--- End diff --

uhh...i think something broke otherwise. Let me try it real quick and see 
what happens.


> Make python tests less verbose
> --
>
> Key: FLINK-1945
> URL: https://issues.apache.org/jira/browse/FLINK-1945
> Project: Flink
>  Issue Type: Improvement
>  Components: Python API, Tests
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Minor
>
> Currently, the python tests print a lot of log messages to stdout. 
> Furthermore, there seem to be some println statements which clutter the 
> console output. I think that these log messages are not required for the 
> tests and thus should be suppressed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-3042) Define a way to let types create their own TypeInformation

2015-11-18 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-3042:
---

 Summary: Define a way to let types create their own TypeInformation
 Key: FLINK-3042
 URL: https://issues.apache.org/jira/browse/FLINK-3042
 Project: Flink
  Issue Type: Improvement
  Components: Core
Affects Versions: 0.10.0
Reporter: Stephan Ewen
 Fix For: 1.0.0


Currently, introducing new Types that should have specific TypeInformation 
requires
  - Either integration with the TypeExtractor
  - Or manually constructing the TypeInformation (potentially at every place) 
and using type hints everywhere.

I propose to add a way to allow classes to create their own TypeInformation 
(like a static method "createTypeInfo()").

To support generic nested types (like Optional / Either), the type extractor 
would provide a Map of what generic variables map to what types (deduced from 
the input). The class can use that to create the correct nested TypeInformation 
(possibly by calling the TypeExtractor again, passing the Map of generic 
bindings).
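
A rough sketch of how such a hook could look (hypothetical, not an existing Flink API): the type exposes a static factory that returns its own TypeInformation, and the extractor passes in the resolved generic bindings. The class and field names below are made up for illustration; a non-generic type can simply ignore the bindings.

{code}
import java.util.Map;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.TypeExtractor;

// Hypothetical user type; "createTypeInfo" follows the method name proposed above.
public class TimestampedWord {

    public String word;
    public long timestamp;

    // The extractor would call this instead of analysing the fields itself.
    // A generic type (Optional / Either) would use the bindings to build the
    // nested TypeInformation, possibly by calling the TypeExtractor again.
    public static TypeInformation<TimestampedWord> createTypeInfo(
            Map<String, TypeInformation<?>> genericBindings) {
        return TypeExtractor.getForClass(TimestampedWord.class);
    }
}
{code}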



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2897) Use distinct initial indices for OutputEmitter round-robin

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011578#comment-15011578
 ] 

ASF GitHub Bot commented on FLINK-2897:
---

Github user tillrohrmann commented on the pull request:

https://github.com/apache/flink/pull/1292#issuecomment-157807317
  
Yes, +1 for merging. Thanks for your contribution @greghogan.


> Use distinct initial indices for OutputEmitter round-robin
> --
>
> Key: FLINK-2897
> URL: https://issues.apache.org/jira/browse/FLINK-2897
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Runtime
>Affects Versions: 0.10.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>
> Currently, when performing a round-robin partitioning each task will 
> sequentially partition starting with partition "1". This is fine in the usual 
> case where the number of partitioned objects greatly exceeds the number of 
> channels. However, in the case where the number of objects is relatively few 
> (each, perhaps, requiring a large computation or access to an external 
> system) it would be much better to begin partitioning at distinct indices 
> (the task index).
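
A minimal sketch of the idea (illustrative names, not the actual OutputEmitter code): each sender keeps a plain round-robin counter but seeds it with its own task index instead of always starting at the first channel, so even a handful of records spreads across channels.

{code}
// Hypothetical helper, assuming taskIndex is the sending subtask's index.
public class RoundRobinSelector {

    private final int numberOfChannels;
    private int nextChannel;

    public RoundRobinSelector(int taskIndex, int numberOfChannels) {
        this.numberOfChannels = numberOfChannels;
        // distinct initial index per sending task, as proposed above
        this.nextChannel = taskIndex % numberOfChannels;
    }

    public int selectChannel() {
        int channel = nextChannel;
        nextChannel = (nextChannel + 1) % numberOfChannels;
        return channel;
    }
}
{code}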



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-3006) TypeExtractor fails on custom type

2015-11-18 Thread Gyula Fora (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011630#comment-15011630
 ] 

Gyula Fora commented on FLINK-3006:
---

This only occurs when the udf is a java 8 lambda method. Could you please try 
that?
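
For reference, a minimal sketch of the difference being discussed (a made-up job, not the reporter's code): with an anonymous MapFunction the extractor can read the generic signature, while with a Java 8 lambda or method reference the generics are erased and a returns(...) hint is usually needed.

{code}
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LambdaTypeExtractionExample {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<Long> input = env.fromElements(1L, 2L, 3L);

        // anonymous class: the generic signature is visible to the TypeExtractor
        DataStream<String> viaClass = input.map(new MapFunction<Long, String>() {
            @Override
            public String map(Long value) {
                return String.valueOf(value);
            }
        });

        // Java 8 lambda: generics are erased, so an explicit type hint is given
        DataStream<String> viaLambda = input
                .map(value -> String.valueOf(value))
                .returns(String.class);

        viaClass.print();
        viaLambda.print();
        env.execute("lambda type extraction example");
    }
}
{code}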

> TypeExtractor fails on custom type
> --
>
> Key: FLINK-3006
> URL: https://issues.apache.org/jira/browse/FLINK-3006
> Project: Flink
>  Issue Type: Bug
>  Components: Type Serialization System
>Affects Versions: 0.10.0, 1.0.0
>Reporter: Gyula Fora
>
> I get a weird error when I try to execute my job on the cluster. Locally this 
> works fine but running it from the command line fails during type extraction:
> input1.union(input2, input3).map(Either:: 
> Left).returns(eventOrLongType);
> The UserEvent type is a subclass of Tuple4 with 
> no extra fields. And the Either type is a regular pojo with 2 public nullable 
> fields and a default constructor.
> This fails when trying to extract the output type from the mapper, but I 
> wouldn't actually care about that because I am providing my custom type 
> implementation for this Either type.
> The error:
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error.
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:512)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:395)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:250)
>   at 
> org.apache.flink.client.CliFrontend.executeProgramBlocking(CliFrontend.java:669)
>   at org.apache.flink.client.CliFrontend.run(CliFrontend.java:320)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:971)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1021)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: -1
>   at java.util.ArrayList.elementData(ArrayList.java:418)
>   at java.util.ArrayList.get(ArrayList.java:431)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoFromInputs(TypeExtractor.java:599)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoWithTypeHierarchy(TypeExtractor.java:493)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.analyzePojo(TypeExtractor.java:1392)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.privateGetForClass(TypeExtractor.java:1273)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoWithTypeHierarchy(TypeExtractor.java:560)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.privateCreateTypeInfo(TypeExtractor.java:389)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.getUnaryOperatorReturnType(TypeExtractor.java:273)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.getMapReturnTypes(TypeExtractor.java:110)
>   at 
> org.apache.flink.streaming.api.datastream.DataStream.map(DataStream.java:550)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2978][web-dashboard][webclient] Integra...

2015-11-18 Thread sachingoel0101
Github user sachingoel0101 commented on a diff in the pull request:

https://github.com/apache/flink/pull/1338#discussion_r45240771
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/HttpRequestHandler.java
 ---
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.webmonitor;
+

+/*
+ * This code is based on the "HttpUploadServerHandler" from the
+ * Netty project's HTTP server example.
+ *
+ * See http://netty.io and
+ * 
https://github.com/netty/netty/blob/netty-4.0.31.Final/example/src/main/java/io/netty/example/http/upload/HttpUploadServerHandler.java
+ 
*/
+
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.channel.SimpleChannelInboundHandler;
+import io.netty.handler.codec.http.HttpContent;
+import io.netty.handler.codec.http.HttpHeaders;
+import io.netty.handler.codec.http.HttpMethod;
+import io.netty.handler.codec.http.HttpObject;
+import io.netty.handler.codec.http.HttpRequest;
+import io.netty.handler.codec.http.LastHttpContent;
+import io.netty.handler.codec.http.QueryStringDecoder;
+import io.netty.handler.codec.http.QueryStringEncoder;
+import io.netty.handler.codec.http.multipart.DefaultHttpDataFactory;
+import io.netty.handler.codec.http.multipart.DiskFileUpload;
+import io.netty.handler.codec.http.multipart.HttpDataFactory;
+import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder;
+import 
io.netty.handler.codec.http.multipart.HttpPostRequestDecoder.EndOfDataDecoderException;
+import io.netty.handler.codec.http.multipart.InterfaceHttpData;
+import 
io.netty.handler.codec.http.multipart.InterfaceHttpData.HttpDataType;
+
+import java.io.File;
+import java.util.UUID;
+
+/**
+ * Simple code which handles all HTTP requests from the user, and passes 
them to the Router
+ * handler directly if they do not involve file upload requests.
+ * If a file is required to be uploaded, it handles the upload, and in the 
http request to the
+ * next handler, passes the name of the file to the next handler.
+ */
+public class HttpRequestHandler extends 
SimpleChannelInboundHandler {
+
+   private HttpRequest request;
+
+   private boolean readingChunks;
+
+   private static final HttpDataFactory factory = new 
DefaultHttpDataFactory(true); // use disk
+
+   private String requestPath;
+
+   private HttpPostRequestDecoder decoder;
+
+   private final File uploadDir;
+
+   /**
+* The directory where files should be uploaded.
+*/
+   public HttpRequestHandler(File uploadDir) {
+   this.uploadDir = uploadDir;
+   }
+
+   @Override
+   public void channelUnregistered(ChannelHandlerContext ctx) throws 
Exception {
+   if (decoder != null) {
+   decoder.cleanFiles();
+   }
+   }
+
+   @Override
+   public void channelRead0(ChannelHandlerContext ctx, HttpObject msg) 
throws Exception {
+   if (msg instanceof HttpRequest) {
+   request = (HttpRequest) msg;
+   requestPath = new 
QueryStringDecoder(request.getUri()).path();
+   if (request.getMethod() != HttpMethod.POST) {
--- End diff --

I'm not very sure about the conventions, but only `PUT` and `POST` methods 
have a payload associated with them, since they're intuitively *unsafe* 
methods, which change some state on the server. 
Further, as for `DELETE`, the HTTP specification states that there is no 
defined semantics for associating bodies. 
https://tools.ietf.org/html/rfc7231#section-4.3.5
Since the netty server currently only processes GET, DELETE and POST, I 
think we can safely assume no other requests will arrive, and if they do, it'll 
return a 404 anyway.
   

[jira] [Commented] (FLINK-3006) TypeExtractor fails on custom type

2015-11-18 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011682#comment-15011682
 ] 

Timo Walther commented on FLINK-3006:
-

The bottom "map" is a lambda method. I just used the top one for proper type 
information for the "returns". Could you try to change my example such that the 
error occurs again?

> TypeExtractor fails on custom type
> --
>
> Key: FLINK-3006
> URL: https://issues.apache.org/jira/browse/FLINK-3006
> Project: Flink
>  Issue Type: Bug
>  Components: Type Serialization System
>Affects Versions: 0.10.0, 1.0.0
>Reporter: Gyula Fora
>
> I get a weird error when I try to execute my job on the cluster. Locally this 
> works fine but running it from the command line fails during type extraction:
> input1.union(input2, input3).map(Either:: 
> Left).returns(eventOrLongType);
> The UserEvent type is a subclass of Tuple4 with 
> no extra fields. And the Either type is a regular pojo with 2 public nullable 
> fields and a default constructor.
> This fails when trying to extract the output type from the mapper, but I 
> wouldn't actually care about that because I am providing my custom type 
> implementation for this Either type.
> The error:
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error.
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:512)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:395)
>   at org.apache.flink.client.program.Client.runBlocking(Client.java:250)
>   at 
> org.apache.flink.client.CliFrontend.executeProgramBlocking(CliFrontend.java:669)
>   at org.apache.flink.client.CliFrontend.run(CliFrontend.java:320)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:971)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1021)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: -1
>   at java.util.ArrayList.elementData(ArrayList.java:418)
>   at java.util.ArrayList.get(ArrayList.java:431)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoFromInputs(TypeExtractor.java:599)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoWithTypeHierarchy(TypeExtractor.java:493)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.analyzePojo(TypeExtractor.java:1392)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.privateGetForClass(TypeExtractor.java:1273)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.createTypeInfoWithTypeHierarchy(TypeExtractor.java:560)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.privateCreateTypeInfo(TypeExtractor.java:389)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.getUnaryOperatorReturnType(TypeExtractor.java:273)
>   at 
> org.apache.flink.api.java.typeutils.TypeExtractor.getMapReturnTypes(TypeExtractor.java:110)
>   at 
> org.apache.flink.streaming.api.datastream.DataStream.map(DataStream.java:550)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-2989) Job Cancel button doesn't work on Yarn

2015-11-18 Thread Maximilian Michels (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maximilian Michels resolved FLINK-2989.
---
   Resolution: Fixed
Fix Version/s: 0.10.1
   1.0.0

master: 08318e1a7649b6f78b2c2d4fba7004c547d05961
release-0.10: a7e799b9f728d2ea9854fd971bb0967e052b9b11

> Job Cancel button doesn't work on Yarn
> --
>
> Key: FLINK-2989
> URL: https://issues.apache.org/jira/browse/FLINK-2989
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Reporter: Sachin Goel
>Assignee: Maximilian Michels
> Fix For: 1.0.0, 0.10.1
>
>
> The newly added Cancel button doesn't work on Yarn, when accessing via Yarn 
> Web Application proxy. It works fine when we're directly on the web monitor.
> The reason is Yarn doesn't allow DELETE requests to the AM yet. It should be 
> enabled in 2.8.0. [YARN-2031, YARN-2084.]
> A workaround for now can be to use a {{GET}} method instead of {{DELETE}}, 
> but that breaks the conventions of REST.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-1945][py] Python Tests less verbose

2015-11-18 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/1376#discussion_r45240517
  
--- Diff: 
flink-libraries/flink-python/src/test/python/org/apache/flink/python/api/test_type_deduction.py
 ---
@@ -58,5 +58,5 @@
 
 msg.output()
--- End diff --

actually, given how it's currently run, this could be removed


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: [FLINK-2111] Add "stop" signal to cleanly shut...

2015-11-18 Thread sachingoel0101
Github user sachingoel0101 commented on the pull request:

https://github.com/apache/flink/pull/750#issuecomment-157817216
  
Hi @mjsax , your changes to coffeescript aren't ported to the actual 
javascript. It appears you forgot to run `gulp`.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: [FLINK-2488][FLINK-2496] Expose Task Manager c...

2015-11-18 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1026#issuecomment-157824468
  
Yes, a TaskInfo is a good idea.

It can also be handed to the `RuntimeEnvironment` (and from there to the 
`RuntimeContext`), and is created once in the Task constructor from the TDD 
(that way we spare the JobManager from changes).


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: FLINK-3041: Twitter Streaming Description sect...

2015-11-18 Thread smarthi
GitHub user smarthi opened a pull request:

https://github.com/apache/flink/pull/1379

FLINK-3041: Twitter Streaming Description section of Streaming Programming 
guide refers to an incorrect example 'TwitterLocal'



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/smarthi/flink master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/1379.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1379


commit a3ce1557080d716aa415615466cc64c0797a3ea5
Author: smarthi 
Date:   2015-11-18T17:48:38Z

FLINK-3041: Twitter Streaming Description section of Streaming Programming 
guide refers to an incorrect example 'TwitterLocal'




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request:

2015-11-18 Thread StephanEwen
Github user StephanEwen commented on the pull request:


https://github.com/apache/flink/commit/08318e1a7649b6f78b2c2d4fba7004c547d05961#commitcomment-14478619
  
BTW: I think the usual style for git commit messages is to write what the 
commit does, not what the original problem was. So something like "Fix Cancel 
Button for YARN" would be better...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: [FLINK-2897] [runtime] Use distinct initial in...

2015-11-18 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1292#issuecomment-157803640
  
Aside from till's comment (simplification of the "robin()" case), this 
looks good.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2897) Use distinct initial indices for OutputEmitter round-robin

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011552#comment-15011552
 ] 

ASF GitHub Bot commented on FLINK-2897:
---

Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1292#issuecomment-157803640
  
Aside from till's comment (simplification of the "robin()" case), this 
looks good.


> Use distinct initial indices for OutputEmitter round-robin
> --
>
> Key: FLINK-2897
> URL: https://issues.apache.org/jira/browse/FLINK-2897
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Runtime
>Affects Versions: 0.10.0
>Reporter: Greg Hogan
>Assignee: Greg Hogan
>
> Currently, when performing a round-robin partitioning each task will 
> sequentially partition starting with partition "1". This is fine in the usual 
> case where the number of partitioned objects greatly exceeds the number of 
> channels. However, in the case where the number of objects is relatively few 
> (each, perhaps, requiring a large computation or access to an external 
> system) it would be much better to begin partitioning at distinct indices 
> (the task index).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: Fixed FLINK-2942

2015-11-18 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1346


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2942) Dangling operators in web UI's program visualization (non-deterministic)

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011572#comment-15011572
 ] 

ASF GitHub Bot commented on FLINK-2942:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/1346


> Dangling operators in web UI's program visualization (non-deterministic)
> 
>
> Key: FLINK-2942
> URL: https://issues.apache.org/jira/browse/FLINK-2942
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 0.10.0
> Environment: OSX, Firefox and Chrome
>Reporter: Fabian Hueske
>Priority: Critical
> Fix For: 1.0.0
>
> Attachments: Screen Shot 2015-10-29 at 17.11.19.png, Screen Shot 
> 2015-10-29 at 20.51.46.png, Screen Shot 2015-10-29 at 20.52.13.png, Screen 
> Shot 2015-11-09 at 14.48.03.png
>
>
> When visualizing a program with three {{MapPartition}} operators that branch 
> off from an {{OuterJoin}} operator, two of the three {{MapPartition}} 
> operators are not connected to the {{OuterJoin}} operator and appear to have 
> no input.
> The problem is present in Firefox as well as in Chrome. I'll attach a 
> screenshot.
> The problem can be reproduced by executing the "Cascading for the impatient" 
> [TFIDF example 
> program|https://github.com/Cascading/Impatient/tree/master/part5] using the 
> [Cascading Flink Connector|https://github.com/dataArtisans/cascading-flink].
> Update: It appears that the problem is non-deterministic. I ran the same job 
> again (same setup) and the previously missing connections were visualized. 
> However, the UI showed only one input for a binary operator (OuterJoin). 
> Running the job a third time resulted in a graph layout which was again 
> different from both runs before. However, two of the {{MapPartition}} 
> operators had no inputs just as in the first run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2978) Integrate web submission interface into the new dashboard

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011712#comment-15011712
 ] 

ASF GitHub Bot commented on FLINK-2978:
---

Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/1338#discussion_r45244344
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/HttpRequestHandler.java
 ---
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.webmonitor;
+

+/*
+ * This code is based on the "HttpUploadServerHandler" from the
+ * Netty project's HTTP server example.
+ *
+ * See http://netty.io and
+ * 
https://github.com/netty/netty/blob/netty-4.0.31.Final/example/src/main/java/io/netty/example/http/upload/HttpUploadServerHandler.java
+ 
*/
+
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.channel.SimpleChannelInboundHandler;
+import io.netty.handler.codec.http.HttpContent;
+import io.netty.handler.codec.http.HttpHeaders;
+import io.netty.handler.codec.http.HttpMethod;
+import io.netty.handler.codec.http.HttpObject;
+import io.netty.handler.codec.http.HttpRequest;
+import io.netty.handler.codec.http.LastHttpContent;
+import io.netty.handler.codec.http.QueryStringDecoder;
+import io.netty.handler.codec.http.QueryStringEncoder;
+import io.netty.handler.codec.http.multipart.DefaultHttpDataFactory;
+import io.netty.handler.codec.http.multipart.DiskFileUpload;
+import io.netty.handler.codec.http.multipart.HttpDataFactory;
+import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder;
+import 
io.netty.handler.codec.http.multipart.HttpPostRequestDecoder.EndOfDataDecoderException;
+import io.netty.handler.codec.http.multipart.InterfaceHttpData;
+import 
io.netty.handler.codec.http.multipart.InterfaceHttpData.HttpDataType;
+
+import java.io.File;
+import java.util.UUID;
+
+/**
+ * Simple code which handles all HTTP requests from the user, and passes 
them to the Router
+ * handler directly if they do not involve file upload requests.
+ * If a file is required to be uploaded, it handles the upload, and in the 
http request to the
+ * next handler, passes the name of the file to the next handler.
+ */
+public class HttpRequestHandler extends 
SimpleChannelInboundHandler {
+
+   private HttpRequest request;
+
+   private boolean readingChunks;
+
+   private static final HttpDataFactory factory = new 
DefaultHttpDataFactory(true); // use disk
+
+   private String requestPath;
+
+   private HttpPostRequestDecoder decoder;
+
+   private final File uploadDir;
+
+   /**
+* The directory where files should be uploaded.
+*/
+   public HttpRequestHandler(File uploadDir) {
+   this.uploadDir = uploadDir;
+   }
+
+   @Override
+   public void channelUnregistered(ChannelHandlerContext ctx) throws 
Exception {
+   if (decoder != null) {
+   decoder.cleanFiles();
+   }
+   }
+
+   @Override
+   public void channelRead0(ChannelHandlerContext ctx, HttpObject msg) 
throws Exception {
+   if (msg instanceof HttpRequest) {
+   request = (HttpRequest) msg;
+   requestPath = new 
QueryStringDecoder(request.getUri()).path();
+   if (request.getMethod() != HttpMethod.POST) {
--- End diff --

Okay, thanks for clarifying. 

According to this 
(http://restful-api-design.readthedocs.org/en/latest/methods.html), upload is 
POST and update is PUT...


> Integrate web submission interface into the new dashboard
> -
>
> Key: 

[jira] [Closed] (FLINK-2942) Dangling operators in web UI's program visualization (non-deterministic)

2015-11-18 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-2942.
---

> Dangling operators in web UI's program visualization (non-deterministic)
> 
>
> Key: FLINK-2942
> URL: https://issues.apache.org/jira/browse/FLINK-2942
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 0.10.0
> Environment: OSX, Firefox and Chrome
>Reporter: Fabian Hueske
>Assignee: Piotr Godek
>Priority: Critical
> Fix For: 1.0.0, 0.10.1
>
> Attachments: Screen Shot 2015-10-29 at 17.11.19.png, Screen Shot 
> 2015-10-29 at 20.51.46.png, Screen Shot 2015-10-29 at 20.52.13.png, Screen 
> Shot 2015-11-09 at 14.48.03.png
>
>
> When visualizing a program with three {{MapPartition}} operators that branch 
> off from an {{OuterJoin}} operator, two of the three {{MapPartition}} 
> operators are not connected to the {{OuterJoin}} operator and appear to have 
> no input.
> The problem is present in Firefox as well as in Chrome. I'll attach a 
> screenshot.
> The problem can be reproduced by executing the "Cascading for the impatient" 
> [TFIDF example 
> program|https://github.com/Cascading/Impatient/tree/master/part5] using the 
> [Cascading Flink Connector|https://github.com/dataArtisans/cascading-flink].
> Update: It appears that the problem is non-deterministic. I ran the same job 
> again (same setup) and the previously missing connections were visualized. 
> However, the UI showed only one input for a binary operator (OuterJoin). 
> Running the job a third time resulted in a graph layout which was again 
> different from both runs before. However, two of the {{MapPartition}} 
> operators had no inputs just as in the first run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (FLINK-2942) Dangling operators in web UI's program visualization (non-deterministic)

2015-11-18 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-2942.
-
   Resolution: Fixed
 Assignee: Piotr Godek
Fix Version/s: 0.10.1

Fixed in
  - 0.10.1 via 9895e3e43823be11f058a00b9c4eb0c049dd91a8
  - 1.0.0 via fc6fec78685ea4add6dec15b72e08d5daac5bea4

> Dangling operators in web UI's program visualization (non-deterministic)
> 
>
> Key: FLINK-2942
> URL: https://issues.apache.org/jira/browse/FLINK-2942
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend
>Affects Versions: 0.10.0
> Environment: OSX, Firefox and Chrome
>Reporter: Fabian Hueske
>Assignee: Piotr Godek
>Priority: Critical
> Fix For: 1.0.0, 0.10.1
>
> Attachments: Screen Shot 2015-10-29 at 17.11.19.png, Screen Shot 
> 2015-10-29 at 20.51.46.png, Screen Shot 2015-10-29 at 20.52.13.png, Screen 
> Shot 2015-11-09 at 14.48.03.png
>
>
> When visualizing a program with three {{MapPartition}} operators that branch 
> off from an {{OuterJoin}} operator, two of the three {{MapPartition}} 
> operators are not connected to the {{OuterJoin}} operator and appear to have 
> no input.
> The problem is present in Firefox as well as in Chrome. I'll attach a 
> screenshot.
> The problem can be reproduced by executing the "Cascading for the impatient" 
> [TFIDF example 
> program|https://github.com/Cascading/Impatient/tree/master/part5] using the 
> [Cascading Flink Connector|https://github.com/dataArtisans/cascading-flink].
> Update: It appears that the problem is non-deterministic. I ran the same job 
> again (same setup) and the previously missing connections were visualized. 
> However, the UI showed only one input for a binary operator (OuterJoin). 
> Running the job a third time resulted in a graph layout which was again 
> different from both runs before. However, two of the {{MapPartition}} 
> operators had no inputs just as in the first run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2897] [runtime] Use distinct initial in...

2015-11-18 Thread tillrohrmann
Github user tillrohrmann commented on the pull request:

https://github.com/apache/flink/pull/1292#issuecomment-157807317
  
Yes, +1 for merging. Thanks for your contribution @greghogan.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2488) Expose attemptNumber in RuntimeContext

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011652#comment-15011652
 ] 

ASF GitHub Bot commented on FLINK-2488:
---

Github user sachingoel0101 commented on the pull request:

https://github.com/apache/flink/pull/1026#issuecomment-157818292
  
Yes. That's why I closed it. :')
I'm thinking of having a construct named `TaskInfo`, which contains 
information like name, index, parallel tasks, attempt number, etc. which will 
be passed all the way down from the `TDD` to the `RuntimeContext`. Let me know 
if that's a good idea.
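
For illustration only, a rough sketch of what such a construct might look like (field and method names are guesses, not a final class): a small immutable holder created once per task attempt and handed from the TDD down to the RuntimeContext.

{code}
// Hypothetical sketch of the proposed TaskInfo holder.
public class TaskInfo {

    private final String taskName;
    private final int indexOfSubtask;
    private final int numberOfParallelSubtasks;
    private final int attemptNumber;

    public TaskInfo(String taskName, int indexOfSubtask,
            int numberOfParallelSubtasks, int attemptNumber) {
        this.taskName = taskName;
        this.indexOfSubtask = indexOfSubtask;
        this.numberOfParallelSubtasks = numberOfParallelSubtasks;
        this.attemptNumber = attemptNumber;
    }

    public String getTaskName() {
        return taskName;
    }

    public int getIndexOfThisSubtask() {
        return indexOfSubtask;
    }

    public int getNumberOfParallelSubtasks() {
        return numberOfParallelSubtasks;
    }

    public int getAttemptNumber() {
        return attemptNumber;
    }
}
{code}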


> Expose attemptNumber in RuntimeContext
> --
>
> Key: FLINK-2488
> URL: https://issues.apache.org/jira/browse/FLINK-2488
> Project: Flink
>  Issue Type: Improvement
>  Components: JobManager, TaskManager
>Affects Versions: 0.10.0
>Reporter: Robert Metzger
>Assignee: Sachin Goel
>Priority: Minor
>
> It would be nice to expose the attemptNumber of a task in the 
> {{RuntimeContext}}. 
> This would allow user code to behave differently in restart scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2488][FLINK-2496] Expose Task Manager c...

2015-11-18 Thread sachingoel0101
Github user sachingoel0101 commented on the pull request:

https://github.com/apache/flink/pull/1026#issuecomment-157818292
  
Yes. That's why I closed it. :')
I'm thinking of having a construct named `TaskInfo`, which contains 
information like name, index, parallel tasks, attempt number, etc. which will 
be passed all the way down from the `TDD` to the `RuntimeContext`. Let me know 
if that's a good idea.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2488) Expose attemptNumber in RuntimeContext

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011704#comment-15011704
 ] 

ASF GitHub Bot commented on FLINK-2488:
---

Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1026#issuecomment-157824468
  
Yes, a TaskInfo is a good idea.

It can also be handed to the `RuntimeEnvironment` (and from there to the 
`RuntimeContext`), and is created once in the Task constructor from the TDD 
(that way we spare the JobManager from changes).


> Expose attemptNumber in RuntimeContext
> --
>
> Key: FLINK-2488
> URL: https://issues.apache.org/jira/browse/FLINK-2488
> Project: Flink
>  Issue Type: Improvement
>  Components: JobManager, TaskManager
>Affects Versions: 0.10.0
>Reporter: Robert Metzger
>Assignee: Sachin Goel
>Priority: Minor
>
> It would be nice to expose the attemptNumber of a task in the 
> {{RuntimeContext}}. 
> This would allow user code to behave differently in restart scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2978][web-dashboard][webclient] Integra...

2015-11-18 Thread StephanEwen
Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/1338#discussion_r45244344
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/HttpRequestHandler.java
 ---
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.webmonitor;
+

+/*
+ * This code is based on the "HttpUploadServerHandler" from the
+ * Netty project's HTTP server example.
+ *
+ * See http://netty.io and
+ * 
https://github.com/netty/netty/blob/netty-4.0.31.Final/example/src/main/java/io/netty/example/http/upload/HttpUploadServerHandler.java
+ 
*/
+
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.channel.SimpleChannelInboundHandler;
+import io.netty.handler.codec.http.HttpContent;
+import io.netty.handler.codec.http.HttpHeaders;
+import io.netty.handler.codec.http.HttpMethod;
+import io.netty.handler.codec.http.HttpObject;
+import io.netty.handler.codec.http.HttpRequest;
+import io.netty.handler.codec.http.LastHttpContent;
+import io.netty.handler.codec.http.QueryStringDecoder;
+import io.netty.handler.codec.http.QueryStringEncoder;
+import io.netty.handler.codec.http.multipart.DefaultHttpDataFactory;
+import io.netty.handler.codec.http.multipart.DiskFileUpload;
+import io.netty.handler.codec.http.multipart.HttpDataFactory;
+import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder;
+import 
io.netty.handler.codec.http.multipart.HttpPostRequestDecoder.EndOfDataDecoderException;
+import io.netty.handler.codec.http.multipart.InterfaceHttpData;
+import 
io.netty.handler.codec.http.multipart.InterfaceHttpData.HttpDataType;
+
+import java.io.File;
+import java.util.UUID;
+
+/**
+ * Simple code which handles all HTTP requests from the user, and passes 
them to the Router
+ * handler directly if they do not involve file upload requests.
+ * If a file is required to be uploaded, it handles the upload, and in the 
http request to the
+ * next handler, passes the name of the file to the next handler.
+ */
+public class HttpRequestHandler extends 
SimpleChannelInboundHandler {
+
+   private HttpRequest request;
+
+   private boolean readingChunks;
+
+   private static final HttpDataFactory factory = new 
DefaultHttpDataFactory(true); // use disk
+
+   private String requestPath;
+
+   private HttpPostRequestDecoder decoder;
+
+   private final File uploadDir;
+
+   /**
+* The directory where files should be uploaded.
+*/
+   public HttpRequestHandler(File uploadDir) {
+   this.uploadDir = uploadDir;
+   }
+
+   @Override
+   public void channelUnregistered(ChannelHandlerContext ctx) throws 
Exception {
+   if (decoder != null) {
+   decoder.cleanFiles();
+   }
+   }
+
+   @Override
+   public void channelRead0(ChannelHandlerContext ctx, HttpObject msg) 
throws Exception {
+   if (msg instanceof HttpRequest) {
+   request = (HttpRequest) msg;
+   requestPath = new 
QueryStringDecoder(request.getUri()).path();
+   if (request.getMethod() != HttpMethod.POST) {
--- End diff --

Okay, thanks for clarifying. 

According to this 
(http://restful-api-design.readthedocs.org/en/latest/methods.html), upload is 
POST and update is PUT...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2488) Expose attemptNumber in RuntimeContext

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011720#comment-15011720
 ] 

ASF GitHub Bot commented on FLINK-2488:
---

Github user sachingoel0101 commented on the pull request:

https://github.com/apache/flink/pull/1026#issuecomment-157826454
  
Agreed. 
I also would like to clean up the several getter functions for these fields. 
We can just add a `getTaskInfo` in the `TDD`, `Task` and `Environment`. The 
fields can then be accessed from this object.
Since this isn't the user-facing API, it should be fine IMO. 
`RuntimeContext` will still provide separate access to every field.


> Expose attemptNumber in RuntimeContext
> --
>
> Key: FLINK-2488
> URL: https://issues.apache.org/jira/browse/FLINK-2488
> Project: Flink
>  Issue Type: Improvement
>  Components: JobManager, TaskManager
>Affects Versions: 0.10.0
>Reporter: Robert Metzger
>Assignee: Sachin Goel
>Priority: Minor
>
> It would be nice to expose the attemptNumber of a task in the 
> {{RuntimeContext}}. 
> This would allow user code to behave differently in restart scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2488][FLINK-2496] Expose Task Manager c...

2015-11-18 Thread sachingoel0101
Github user sachingoel0101 commented on the pull request:

https://github.com/apache/flink/pull/1026#issuecomment-157826454
  
Agreed. 
I also would like to clean up the several getter functions for these fields. 
We can just add a `getTaskInfo` in the `TDD`, `Task` and `Environment`. The 
fields can then be accessed from this object.
Since this isn't the user-facing API, it should be fine IMO. 
`RuntimeContext` will still provide separate access to every field.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request: [FLINK-3002] Add Either type to the Java API

2015-11-18 Thread StephanEwen
Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/1371#discussion_r45248963
  
--- Diff: 
flink-java/src/main/java/org/apache/flink/api/java/typeutils/runtime/EitherSerializer.java
 ---
@@ -0,0 +1,194 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils.runtime;
+
+import java.io.IOException;
+
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.java.typeutils.Either;
+import org.apache.flink.core.memory.DataInputView;
+import org.apache.flink.core.memory.DataOutputView;
+
+/**
+ * A {@link TypeSerializer} for the {@link Either} type of the Java class.
+ *
+ * @param  the Left value type
+ * @param  the Right value type
+ */
+public class EitherSerializer extends TypeSerializer> {
+
+   private static final long serialVersionUID = 1L;
+
+   private final Class> typeClass;
--- End diff --

You can probably drop this class as well, I don't see it needed anywhere.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-3002) Add an EitherType to the Java API

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011782#comment-15011782
 ] 

ASF GitHub Bot commented on FLINK-3002:
---

Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/1371#discussion_r45248963
  
--- Diff: 
flink-java/src/main/java/org/apache/flink/api/java/typeutils/runtime/EitherSerializer.java
 ---
@@ -0,0 +1,194 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils.runtime;
+
+import java.io.IOException;
+
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.java.typeutils.Either;
+import org.apache.flink.core.memory.DataInputView;
+import org.apache.flink.core.memory.DataOutputView;
+
+/**
+ * A {@link TypeSerializer} for the {@link Either} type of the Java class.
+ *
+ * @param  the Left value type
+ * @param  the Right value type
+ */
+public class EitherSerializer extends TypeSerializer> {
+
+   private static final long serialVersionUID = 1L;
+
+   private final Class> typeClass;
--- End diff --

You can probably drop this class as well, I don't see it needed anywhere.


> Add an EitherType to the Java API
> -
>
> Key: FLINK-3002
> URL: https://issues.apache.org/jira/browse/FLINK-3002
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API
>Affects Versions: 0.10.0
>Reporter: Stephan Ewen
>Assignee: Vasia Kalavri
> Fix For: 1.0.0
>
>
> Either types are recurring patterns and should be serialized efficiently, so 
> it makes sense to add them to the core Java API.
> Since Java does not have such a type as of Java 8, we would need to add our 
> own version.
> The Scala API handles the Scala Either Type already efficiently. I would not 
> use the Scala Either Type in the Java API, since we are trying to get the 
> {{flink-java}} project "Scala free" for people that don't use Scala and o not 
> want to worry about Scala version matches and mismatches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-3002) Add an EitherType to the Java API

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011787#comment-15011787
 ] 

ASF GitHub Bot commented on FLINK-3002:
---

Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/1371#discussion_r45249202
  
--- Diff: 
flink-java/src/main/java/org/apache/flink/api/java/typeutils/Either.java ---
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils;
+
+/**
+ * This type represents a value of one of two possible types, Left or Right
+ * (a disjoint union), inspired by Scala's Either type.
+ *
+ * @param  the type of Left
+ * @param  the type of Right
+ */
+public abstract class Either {
+
+   /**
+* Create a Left value of Either
+*/
+   public static  Either left(L value) {
+   return new Left(value);
+   }
+
+   /**
+* Create a Right value of Either
+*/
+   public static  Either right(R value) {
+   return new Right(value);
+   }
+
+   /**
+* Retrieve the Left value of Either.
+* @return the Left value
+* @throws IllegalStateException if called on a Right
+*/
+   public abstract L left() throws IllegalStateException;
+
+   /**
+* Retrieve the Right value of Either.
+* @return the Right value
+* @throws IllegalStateException if called on a Left
+*/
+   public abstract R right() throws IllegalStateException;
+
+   /**
+* 
+* @return true if this is a Left value, false if this is a Right value
+*/
+   public final boolean isLeft() {
+   return getClass() == Left.class;
+   }
+
+   /**
+* 
+* @return true if this is a Right value, false if this is a Left value
+*/
+   public final boolean isRight() {
+   return getClass() == Right.class;
+   }
+
+   private static class Left<L, R> extends Either<L, R> {
+   final L value;
--- End diff --

Field can be private.


> Add an EitherType to the Java API
> -
>
> Key: FLINK-3002
> URL: https://issues.apache.org/jira/browse/FLINK-3002
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API
>Affects Versions: 0.10.0
>Reporter: Stephan Ewen
>Assignee: Vasia Kalavri
> Fix For: 1.0.0
>
>
> Either types are recurring patterns and should be serialized efficiently, so 
> it makes sense to add them to the core Java API.
> Since Java does not have such a type as of Java 8, we would need to add our 
> own version.
> The Scala API handles the Scala Either Type already efficiently. I would not 
> use the Scala Either Type in the Java API, since we are trying to get the 
> {{flink-java}} project "Scala free" for people that don't use Scala and do not 
> want to worry about Scala version matches and mismatches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
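
As a reading aid for this thread, here is a minimal usage sketch based only on the API visible in the diff above (the left/right factories and the isLeft()/isRight()/left()/right() accessors). The surrounding class and the parse example are made up for illustration and are not part of the pull request:

    import org.apache.flink.api.java.typeutils.Either;

    public class EitherUsageSketch {

        // Parse a line into either an Integer (Right) or an error message (Left).
        static Either<String, Integer> parse(String line) {
            try {
                return Either.right(Integer.parseInt(line.trim()));
            } catch (NumberFormatException e) {
                return Either.left("not a number: " + line);
            }
        }

        public static void main(String[] args) {
            Either<String, Integer> result = parse("42");
            if (result.isRight()) {
                System.out.println("value = " + result.right());
            } else {
                System.out.println("error = " + result.left());
            }
        }
    }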


[jira] [Commented] (FLINK-3002) Add an EitherType to the Java API

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011810#comment-15011810
 ] 

ASF GitHub Bot commented on FLINK-3002:
---

Github user vasia commented on a diff in the pull request:

https://github.com/apache/flink/pull/1371#discussion_r45250469
  
--- Diff: 
flink-java/src/main/java/org/apache/flink/api/java/typeutils/Either.java ---
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils;
+
+/**
+ * This type represents a value of one of two possible types, Left or Right
+ * (a disjoint union), inspired by Scala's Either type.
+ *
+ * @param <L> the type of Left
+ * @param <R> the type of Right
+ */
+public abstract class Either<L, R> {
+
+   /**
+* Create a Left value of Either
+*/
+   public static <L, R> Either<L, R> left(L value) {
--- End diff --

hmm I was thinking that `Left` and `Right` make no sense on their own, 
that's why I have declared the classes private, i.e. I don't see anyone writing 
`Left<L, R> left = Either.left(...)`. And it would be awkward to have to define 
2 types, no?


> Add an EitherType to the Java API
> -
>
> Key: FLINK-3002
> URL: https://issues.apache.org/jira/browse/FLINK-3002
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API
>Affects Versions: 0.10.0
>Reporter: Stephan Ewen
>Assignee: Vasia Kalavri
> Fix For: 1.0.0
>
>
> Either types are recurring patterns and should be serialized efficiently, so 
> it makes sense to add them to the core Java API.
> Since Java does not have such a type as of Java 8, we would need to add our 
> own version.
> The Scala API handles the Scala Either Type already efficiently. I would not 
> use the Scala Either Type in the Java API, since we are trying to get the 
> {{flink-java}} project "Scala free" for people that don't use Scala and do not 
> want to worry about Scala version matches and mismatches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-3002) Add an EitherType to the Java API

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011815#comment-15011815
 ] 

ASF GitHub Bot commented on FLINK-3002:
---

Github user vasia commented on a diff in the pull request:

https://github.com/apache/flink/pull/1371#discussion_r45250671
  
--- Diff: 
flink-java/src/main/java/org/apache/flink/api/java/typeutils/runtime/EitherSerializer.java
 ---
@@ -0,0 +1,194 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils.runtime;
+
+import java.io.IOException;
+
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.java.typeutils.Either;
+import org.apache.flink.core.memory.DataInputView;
+import org.apache.flink.core.memory.DataOutputView;
+
+/**
+ * A {@link TypeSerializer} for the {@link Either} type of the Java class.
+ *
+ * @param <L> the Left value type
+ * @param <R> the Right value type
+ */
+public class EitherSerializer<L, R> extends TypeSerializer<Either<L, R>> {
+
+   private static final long serialVersionUID = 1L;
+
+   private final Class<Either<L, R>> typeClass;
--- End diff --

Right, thanks!


> Add an EitherType to the Java API
> -
>
> Key: FLINK-3002
> URL: https://issues.apache.org/jira/browse/FLINK-3002
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API
>Affects Versions: 0.10.0
>Reporter: Stephan Ewen
>Assignee: Vasia Kalavri
> Fix For: 1.0.0
>
>
> Either types are recurring patterns and should be serialized efficiently, so 
> it makes sense to add them to the core Java API.
> Since Java does not have such a type as of Java 8, we would need to add our 
> own version.
> The Scala API handles the Scala Either Type already efficiently. I would not 
> use the Scala Either Type in the Java API, since we are trying to get the 
> {{flink-java}} project "Scala free" for people that don't use Scala and do not 
> want to worry about Scala version matches and mismatches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-3002] Add Either type to the Java API

2015-11-18 Thread vasia
Github user vasia commented on a diff in the pull request:

https://github.com/apache/flink/pull/1371#discussion_r45250671
  
--- Diff: 
flink-java/src/main/java/org/apache/flink/api/java/typeutils/runtime/EitherSerializer.java
 ---
@@ -0,0 +1,194 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils.runtime;
+
+import java.io.IOException;
+
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.java.typeutils.Either;
+import org.apache.flink.core.memory.DataInputView;
+import org.apache.flink.core.memory.DataOutputView;
+
+/**
+ * A {@link TypeSerializer} for the {@link Either} type of the Java class.
+ *
+ * @param <L> the Left value type
+ * @param <R> the Right value type
+ */
+public class EitherSerializer<L, R> extends TypeSerializer<Either<L, R>> {
+
+   private static final long serialVersionUID = 1L;
+
+   private final Class<Either<L, R>> typeClass;
--- End diff --

Right, thanks!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---
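
For context on the serializer under review: a disjoint union like Either is typically serialized by writing a boolean tag and then delegating to the serializer of the active side. The class below is only a sketch of that tag-plus-delegate pattern against the types quoted above; the helper name is invented and this is not the code from the pull request:

    import java.io.IOException;

    import org.apache.flink.api.common.typeutils.TypeSerializer;
    import org.apache.flink.api.java.typeutils.Either;
    import org.apache.flink.core.memory.DataInputView;
    import org.apache.flink.core.memory.DataOutputView;

    // Illustrative helper only; the actual EitherSerializer extends TypeSerializer<Either<L, R>>.
    class EitherSerializationSketch<L, R> {

        private final TypeSerializer<L> leftSerializer;
        private final TypeSerializer<R> rightSerializer;

        EitherSerializationSketch(TypeSerializer<L> leftSerializer, TypeSerializer<R> rightSerializer) {
            this.leftSerializer = leftSerializer;
            this.rightSerializer = rightSerializer;
        }

        void serialize(Either<L, R> record, DataOutputView target) throws IOException {
            if (record.isLeft()) {
                target.writeBoolean(true);              // tag: Left
                leftSerializer.serialize(record.left(), target);
            } else {
                target.writeBoolean(false);             // tag: Right
                rightSerializer.serialize(record.right(), target);
            }
        }

        Either<L, R> deserialize(DataInputView source) throws IOException {
            boolean isLeft = source.readBoolean();      // read the tag first
            return isLeft
                ? Either.<L, R>left(leftSerializer.deserialize(source))
                : Either.<L, R>right(rightSerializer.deserialize(source));
        }
    }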


[GitHub] flink pull request: [FLINK-3002] Add Either type to the Java API

2015-11-18 Thread StephanEwen
Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/1371#discussion_r45251332
  
--- Diff: 
flink-java/src/main/java/org/apache/flink/api/java/typeutils/Either.java ---
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils;
+
+/**
+ * This type represents a value of one of two possible types, Left or Right
+ * (a disjoint union), inspired by Scala's Either type.
+ *
+ * @param <L> the type of Left
+ * @param <R> the type of Right
+ */
+public abstract class Either<L, R> {
+
+   /**
+* Create a Left value of Either
+*/
+   public static <L, R> Either<L, R> left(L value) {
--- End diff --

Depends on whether you expect people to declare Left and Right variables 
directly. I was simply thinking: why forbid this, maybe someone finds a good 
case to do that...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-3002) Add an EitherType to the Java API

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011838#comment-15011838
 ] 

ASF GitHub Bot commented on FLINK-3002:
---

Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/1371#discussion_r45251332
  
--- Diff: 
flink-java/src/main/java/org/apache/flink/api/java/typeutils/Either.java ---
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils;
+
+/**
+ * This type represents a value of one of two possible types, Left or Right
+ * (a disjoint union), inspired by Scala's Either type.
+ *
+ * @param <L> the type of Left
+ * @param <R> the type of Right
+ */
+public abstract class Either<L, R> {
+
+   /**
+* Create a Left value of Either
+*/
+   public static <L, R> Either<L, R> left(L value) {
--- End diff --

Depends on whether you expect people to declare Left and Right variables 
directly. I was simply thinking: why forbid this, maybe someone finds a good 
case to do that...


> Add an EitherType to the Java API
> -
>
> Key: FLINK-3002
> URL: https://issues.apache.org/jira/browse/FLINK-3002
> Project: Flink
>  Issue Type: New Feature
>  Components: Java API
>Affects Versions: 0.10.0
>Reporter: Stephan Ewen
>Assignee: Vasia Kalavri
> Fix For: 1.0.0
>
>
> Either types are recurring patterns and should be serialized efficiently, so 
> it makes sense to add them to the core Java API.
> Since Java does not have such a type as of Java 8, we would need to add our 
> own version.
> The Scala API handles the Scala Either Type already efficiently. I would not 
> use the Scala Either Type in the Java API, since we are trying to get the 
> {{flink-java}} project "Scala free" for people that don't use Scala and do not 
> want to worry about Scala version matches and mismatches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-3002] Add Either type to the Java API

2015-11-18 Thread StephanEwen
Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/1371#discussion_r45251503
  
--- Diff: 
flink-java/src/main/java/org/apache/flink/api/java/typeutils/Either.java ---
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils;
+
+/**
+ * This type represents a value of one of two possible types, Left or Right
+ * (a disjoint union), inspired by Scala's Either type.
+ *
+ * @param <L> the type of Left
+ * @param <R> the type of Right
+ */
+public abstract class Either<L, R> {
+
+   /**
+* Create a Left value of Either
+*/
+   public static <L, R> Either<L, R> left(L value) {
--- End diff --

But yeah, the Left and Right classes would need to be public then. Feel 
free to ignore this, up to you ;-)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2978) Integrate web submission interface into the new dashboard

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011725#comment-15011725
 ] 

ASF GitHub Bot commented on FLINK-2978:
---

Github user sachingoel0101 commented on a diff in the pull request:

https://github.com/apache/flink/pull/1338#discussion_r45245432
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/HttpRequestHandler.java
 ---
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.webmonitor;
+

+/*
+ * This code is based on the "HttpUploadServerHandler" from the
+ * Netty project's HTTP server example.
+ *
+ * See http://netty.io and
+ * 
https://github.com/netty/netty/blob/netty-4.0.31.Final/example/src/main/java/io/netty/example/http/upload/HttpUploadServerHandler.java
+ 
*/
+
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.channel.SimpleChannelInboundHandler;
+import io.netty.handler.codec.http.HttpContent;
+import io.netty.handler.codec.http.HttpHeaders;
+import io.netty.handler.codec.http.HttpMethod;
+import io.netty.handler.codec.http.HttpObject;
+import io.netty.handler.codec.http.HttpRequest;
+import io.netty.handler.codec.http.LastHttpContent;
+import io.netty.handler.codec.http.QueryStringDecoder;
+import io.netty.handler.codec.http.QueryStringEncoder;
+import io.netty.handler.codec.http.multipart.DefaultHttpDataFactory;
+import io.netty.handler.codec.http.multipart.DiskFileUpload;
+import io.netty.handler.codec.http.multipart.HttpDataFactory;
+import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder;
+import 
io.netty.handler.codec.http.multipart.HttpPostRequestDecoder.EndOfDataDecoderException;
+import io.netty.handler.codec.http.multipart.InterfaceHttpData;
+import 
io.netty.handler.codec.http.multipart.InterfaceHttpData.HttpDataType;
+
+import java.io.File;
+import java.util.UUID;
+
+/**
+ * Simple code which handles all HTTP requests from the user, and passes 
them to the Router
+ * handler directly if they do not involve file upload requests.
+ * If a file is required to be uploaded, it handles the upload, and in the 
http request to the
+ * next handler, passes the name of the file to the next handler.
+ */
+public class HttpRequestHandler extends SimpleChannelInboundHandler<HttpObject> {
+
+   private HttpRequest request;
+
+   private boolean readingChunks;
+
+   private static final HttpDataFactory factory = new 
DefaultHttpDataFactory(true); // use disk
+
+   private String requestPath;
+
+   private HttpPostRequestDecoder decoder;
+
+   private final File uploadDir;
+
+   /**
+* The directory where files should be uploaded.
+*/
+   public HttpRequestHandler(File uploadDir) {
+   this.uploadDir = uploadDir;
+   }
+
+   @Override
+   public void channelUnregistered(ChannelHandlerContext ctx) throws 
Exception {
+   if (decoder != null) {
+   decoder.cleanFiles();
+   }
+   }
+
+   @Override
+   public void channelRead0(ChannelHandlerContext ctx, HttpObject msg) 
throws Exception {
+   if (msg instanceof HttpRequest) {
+   request = (HttpRequest) msg;
+   requestPath = new 
QueryStringDecoder(request.getUri()).path();
+   if (request.getMethod() != HttpMethod.POST) {
--- End diff --

Ah yes. It appears I made a mistake. You're right. `PUT` modifies the state 
of an existing resource. I just had a more careful look at the RFC too.


> Integrate web submission interface into the new dashboard
> -
>
> Key: FLINK-2978
>  
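
A condensed sketch of the dispatch logic being discussed above, showing only the POST-versus-other-methods branch; this is not the pull request code, and the class name is invented:

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.SimpleChannelInboundHandler;
    import io.netty.handler.codec.http.HttpMethod;
    import io.netty.handler.codec.http.HttpObject;
    import io.netty.handler.codec.http.HttpRequest;
    import io.netty.handler.codec.http.multipart.DefaultHttpDataFactory;
    import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder;
    import io.netty.util.ReferenceCountUtil;

    // Illustrative only: pass non-upload requests to the router, decode multipart POST uploads.
    public class UploadDispatchSketch extends SimpleChannelInboundHandler<HttpObject> {

        private HttpPostRequestDecoder decoder;

        @Override
        protected void channelRead0(ChannelHandlerContext ctx, HttpObject msg) throws Exception {
            if (msg instanceof HttpRequest) {
                HttpRequest request = (HttpRequest) msg;
                if (request.getMethod() != HttpMethod.POST) {
                    // Not a file upload: retain the message and hand it to the next handler (the router).
                    ctx.fireChannelRead(ReferenceCountUtil.retain(msg));
                } else {
                    // File upload: start decoding the multipart POST body; subsequent HttpContent
                    // chunks would be fed into this decoder and spilled to disk by the data factory.
                    decoder = new HttpPostRequestDecoder(new DefaultHttpDataFactory(true), request);
                }
            }
        }
    }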

[GitHub] flink pull request: [FLINK-2111] Add "stop" signal to cleanly shut...

2015-11-18 Thread mjsax
Github user mjsax commented on the pull request:

https://github.com/apache/flink/pull/750#issuecomment-157832022
  
Thanks. Great. It works now. I was not aware that I have to build the UI 
manually. Would it be possible to integrate it into the Maven build? At least we 
should provide a README on how to build it (or better, a script to run). What do 
you think about it?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2111) Add "stop" signal to cleanly shutdown streaming jobs

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011750#comment-15011750
 ] 

ASF GitHub Bot commented on FLINK-2111:
---

Github user mjsax commented on the pull request:

https://github.com/apache/flink/pull/750#issuecomment-157832022
  
Thanks. Great. It works now. I was not aware that I have to build the UI 
manually. Would it be possible to integrate it into the Maven build? At least we 
should provide a README on how to build it (or better, a script to run). What do 
you think about it?


> Add "stop" signal to cleanly shutdown streaming jobs
> 
>
> Key: FLINK-2111
> URL: https://issues.apache.org/jira/browse/FLINK-2111
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Runtime, JobManager, Local Runtime, 
> Streaming, TaskManager, Webfrontend
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Minor
>
> Currently, streaming jobs can only be stopped using the "cancel" command, which is 
> a "hard" stop with no clean shutdown.
> The newly introduced "stop" signal will only affect streaming source tasks 
> such that the sources can stop emitting data and shut down cleanly, resulting 
> in a clean shutdown of the whole streaming job.
> This feature is a prerequisite for 
> https://issues.apache.org/jira/browse/FLINK-1929



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-2111) Add "stop" signal to cleanly shutdown streaming jobs

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011761#comment-15011761
 ] 

ASF GitHub Bot commented on FLINK-2111:
---

Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/750#issuecomment-157833952
  
There is a readme: 
https://github.com/apache/flink/tree/master/flink-runtime-web


> Add "stop" signal to cleanly shutdown streaming jobs
> 
>
> Key: FLINK-2111
> URL: https://issues.apache.org/jira/browse/FLINK-2111
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Runtime, JobManager, Local Runtime, 
> Streaming, TaskManager, Webfrontend
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Minor
>
> Currently, streaming jobs can only be stopped using the "cancel" command, which is 
> a "hard" stop with no clean shutdown.
> The newly introduced "stop" signal will only affect streaming source tasks 
> such that the sources can stop emitting data and shut down cleanly, resulting 
> in a clean shutdown of the whole streaming job.
> This feature is a prerequisite for 
> https://issues.apache.org/jira/browse/FLINK-1929



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2111] Add "stop" signal to cleanly shut...

2015-11-18 Thread rmetzger
Github user rmetzger commented on the pull request:

https://github.com/apache/flink/pull/750#issuecomment-157833952
  
There is a readme: 
https://github.com/apache/flink/tree/master/flink-runtime-web


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-2111) Add "stop" signal to cleanly shutdown streaming jobs

2015-11-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011769#comment-15011769
 ] 

ASF GitHub Bot commented on FLINK-2111:
---

Github user mjsax commented on the pull request:

https://github.com/apache/flink/pull/750#issuecomment-157834943
  
How could I miss this...?? Sorry. My own fault...


> Add "stop" signal to cleanly shutdown streaming jobs
> 
>
> Key: FLINK-2111
> URL: https://issues.apache.org/jira/browse/FLINK-2111
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Runtime, JobManager, Local Runtime, 
> Streaming, TaskManager, Webfrontend
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Minor
>
> Currently, streaming jobs can only be stopped using the "cancel" command, which is 
> a "hard" stop with no clean shutdown.
> The newly introduced "stop" signal will only affect streaming source tasks 
> such that the sources can stop emitting data and shut down cleanly, resulting 
> in a clean shutdown of the whole streaming job.
> This feature is a prerequisite for 
> https://issues.apache.org/jira/browse/FLINK-1929



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-2111] Add "stop" signal to cleanly shut...

2015-11-18 Thread mjsax
Github user mjsax commented on the pull request:

https://github.com/apache/flink/pull/750#issuecomment-157834943
  
How could I miss this...?? Sorry. My own fault...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-3043) Kafka Connector description in Streaming API guide is wrong/outdated

2015-11-18 Thread Robert Metzger (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011772#comment-15011772
 ] 

Robert Metzger commented on FLINK-3043:
---

I'm addressing some of the issues here: 
https://github.com/apache/flink/pull/1341

> Kafka Connector description in Streaming API guide is wrong/outdated
> 
>
> Key: FLINK-3043
> URL: https://issues.apache.org/jira/browse/FLINK-3043
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 0.10.0
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
> Fix For: 1.0.0, 0.10.1
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] flink pull request: [FLINK-3002] Add Either type to the Java API

2015-11-18 Thread StephanEwen
Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/1371#discussion_r45248610
  
--- Diff: 
flink-java/src/main/java/org/apache/flink/api/java/typeutils/Either.java ---
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.api.java.typeutils;
+
+/**
+ * This type represents a value of one of two possible types, Left or Right
+ * (a disjoint union), inspired by Scala's Either type.
+ *
+ * @param <L> the type of Left
+ * @param <R> the type of Right
+ */
+public abstract class Either<L, R> {
+
+   /**
+* Create a Left value of Either
+*/
+   public static <L, R> Either<L, R> left(L value) {
--- End diff --

I think it is better to return the specific type here: `public static <L, R> 
Left<L, R> left(L value)`. More specific types allow people to assign them to 
more specific variables.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---
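
To make the trade-off in this sub-thread concrete, here is a purely illustrative comparison of the two factory signatures being discussed (neither snippet is taken from the pull request):

    import org.apache.flink.api.java.typeutils.Either;

    public class FactoryReturnTypeSketch {

        public static void main(String[] args) {
            // With the signature in the patch (public static <L, R> Either<L, R> left(L value)),
            // callers can only hold a reference of the abstract type:
            Either<String, Integer> e = Either.left("parse error");

            // With the signature suggested in the review (public static <L, R> Left<L, R> left(L value),
            // which would require the Left/Right subclasses to be public), this more specific
            // declaration would also compile:
            // Left<String, Integer> l = Either.left("parse error");

            System.out.println(e.isLeft());
        }
    }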


[GitHub] flink pull request: [FLINK-3002] Add Either type to the Java API

2015-11-18 Thread StephanEwen
Github user StephanEwen commented on the pull request:

https://github.com/apache/flink/pull/1371#issuecomment-157838535
  
Some small comments, otherwise +1 to merge.

Integration into the TypeExctractor can be a separate issue, in my opinion.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

