[jira] [Updated] (FLINK-5826) UDF/UDTF should support variable types and variable arguments

2017-02-16 Thread Zhuoluo Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhuoluo Yang updated FLINK-5826:

Description: 
In some cases, a UDF/UDTF should support variable argument types and a 
variable number of arguments. Many UDF/UDTF developers want to keep the number 
and types of arguments flexible for their users.

Thus, we should support the following styles of UDF/UDTFs.

For example 1, in Java:
{code:java}
public class SimpleUDF extends ScalarFunction {
    public int eval(Object... args) {
        // do something
        return 0; // placeholder so the sample compiles
    }
}
{code}

For example 2, in Scala:
{code}
class SimpleUDF extends ScalarFunction {
  def eval(args: Any*): Int = {
    // do something
    0 // placeholder so the sample compiles
  }
}
{code}

If we modify the code in UserDefinedFunctionUtils.getSignature() so that both 
signatures pass, the first example works normally. However, the second example 
raises an exception.

{noformat}
Caused by: org.codehaus.commons.compiler.CompileException: Line 58, Column 0: 
No applicable constructor/method found for actual parameters 
"java.lang.String"; candidates are: "public java.lang.Object 
test.SimpleUDF.eval(scala.collection.Seq)"
  at org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:11523) 
~[janino-3.0.6.jar:?]
  at 
org.codehaus.janino.UnitCompiler.findMostSpecificIInvocable(UnitCompiler.java:8679)
 ~[janino-3.0.6.jar:?]
  at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:8539) 
~[janino-3.0.6.jar:?]
{noformat} 

The reason is that Scala applies a *sugary* rewrite to the method signature: 
the method {code}def eval(args: Any*){code} becomes 
{code}def eval(args: scala.collection.Seq){code} in the class file.

The code generation emits Java code. If the generated Java uses the Java-style 
call {code}eval(Object... args){code} to invoke the Scala method, it raises 
the above exception.

However, we can't restrict users to writing UDF/UDTFs in Java. Any ideas on 
supporting variable types and variable arguments in Scala UDF/UDTFs without 
causing this compilation failure?

  was:
In some cases, a UDF/UDTF should support variable argument types and a 
variable number of arguments. Many UDF/UDTF developers want to keep the number 
and types of arguments flexible for their users.

Thus, we should support the following styles of UDF/UDTFs.

For example 1, in Java:
{code:java}
public class SimpleUDF extends ScalarFunction {
    public int eval(Object... args) {
        // do something
        return 0; // placeholder so the sample compiles
    }
}
{code}

For example 2, in Scala:
{code}
class SimpleUDF extends ScalarFunction {
  def eval(args: Any*): Int = {
    // do something
    0 // placeholder so the sample compiles
  }
}
{code}

If we modify the code in UserDefinedFunctionUtils.getSignature() so that both 
signatures pass, the first example works normally. However, the second example 
raises an exception.

{noformat}
Caused by: org.codehaus.commons.compiler.CompileException: Line 58, Column 0: 
No applicable constructor/method found for actual parameters 
"java.lang.String"; candidates are: "public java.lang.Object 
test.SimpleUDF.eval(scala.collection.Seq)"
  at org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:11523) 
~[janino-3.0.6.jar:?]
  at 
org.codehaus.janino.UnitCompiler.findMostSpecificIInvocable(UnitCompiler.java:8679)
 ~[janino-3.0.6.jar:?]
  at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:8539) 
~[janino-3.0.6.jar:?]
{noformat} 

The reason is that Scala applies a sugary rewrite to the method signature: 
the method {code}def eval(args: Any*){code} becomes 
{code}def eval(args: scala.collection.Seq){code} in the class file.

The code generation emits Java code. If the generated Java uses the Java-style 
call {code}eval(Object... args){code} to invoke the Scala method, it raises 
the above exception.

However, we can't restrict users to writing UDF/UDTFs in Java. Any ideas on 
supporting variable types and variable arguments in Scala UDF/UDTFs without 
causing this compilation failure?


> UDF/UDTF should support variable types and variable arguments
> -
>
> Key: FLINK-5826
> URL: https://issues.apache.org/jira/browse/FLINK-5826
> Project: Flink
>  Issue Type: Improvement
>Reporter: Zhuoluo Yang
>Assignee: Zhuoluo Yang
>
> In some cases, a UDF/UDTF should support variable argument types and a 
> variable number of arguments. Many UDF/UDTF developers want to keep the 
> number and types of arguments flexible for their users.
> Thus, we should support the following styles of UDF/UDTFs.
> For example 1, in Java:
> {code:java}
> public class SimpleUDF extends ScalarFunction {
>   public int eval(Object... args) {
>       // do something
>       return 0; // placeholder so the sample compiles
>   }
> }
> {code}
> For example 2, in Scala:
> {code}
> class SimpleUDF extends ScalarFunction {
>   def eval(args: Any*): Int = {
>

[jira] [Comment Edited] (FLINK-5826) UDF/UDTF should support variable types and variable arguments

2017-02-16 Thread Zhuoluo Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871338#comment-15871338
 ] 

Zhuoluo Yang edited comment on FLINK-5826 at 2/17/17 7:33 AM:
--

I have already made a small modification in 
{code}UserDefinedFunctionUtils.getSignature(){code}. The basic idea is to let 
both "Object..." and "Any*" pass and return the corresponding signature. This 
modification works for Java only; the Scala case still fails. Since the code 
generation produces Java code, there will be problems calling 
{code}eval(Seq){code} in the generated Java. However, there is no problem at 
all in calling {code}eval(Object... args){code} in the generated Java.
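For illustration only, a hedged sketch of a varargs-aware signature check for the Java {code}eval(Object... args){code} case (the helper class and method names are invented, not the actual Flink code):

{code:java}
import java.lang.reflect.Method;

public final class VarargsSignatures {
    // Accept a candidate eval() method if its trailing varargs parameter
    // can absorb the remaining actual argument types.
    public static boolean matchesVarargs(Method eval, Class<?>[] actuals) {
        if (!eval.isVarArgs()) {
            // Note: Scala's Any* does NOT compile to a Java varargs method,
            // which is exactly why the Scala case needs separate handling.
            return false;
        }
        Class<?>[] formals = eval.getParameterTypes();
        int fixed = formals.length - 1; // parameters before the trailing T...
        if (actuals.length < fixed) {
            return false;
        }
        Class<?> component = formals[fixed].getComponentType();
        for (int i = 0; i < actuals.length; i++) {
            Class<?> formal = i < fixed ? formals[i] : component;
            if (!formal.isAssignableFrom(actuals[i])) {
                return false;
            }
        }
        return true;
    }
}
{code}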


was (Author: clarkyzl):
I have already made a small modification in 
{code}UserDefinedFunctionUtils.getSignature(){code}. The basic idea is to let 
both "Object..." and "Any*" pass and return the corresponding signature. This 
modification works for Java only; the Scala case still fails. Since the code 
generation produces Java code, there will be problems calling 
{code}eval(Seq){code} in the generated Java. However, there is no problem at 
all in calling {code}eval(Object... args){code} in the generated Java.

> UDF/UDTF should support variable types and variable arguments
> -
>
> Key: FLINK-5826
> URL: https://issues.apache.org/jira/browse/FLINK-5826
> Project: Flink
>  Issue Type: Improvement
>Reporter: Zhuoluo Yang
>Assignee: Zhuoluo Yang
>
> In some cases, a UDF/UDTF should support variable argument types and a 
> variable number of arguments. Many UDF/UDTF developers want to keep the 
> number and types of arguments flexible for their users.
> Thus, we should support the following styles of UDF/UDTFs.
> For example 1, in Java:
> {code:java}
> public class SimpleUDF extends ScalarFunction {
>   public int eval(Object... args) {
>       // do something
>       return 0; // placeholder so the sample compiles
>   }
> }
> {code}
> For example 2, in Scala:
> {code}
> class SimpleUDF extends ScalarFunction {
>   def eval(args: Any*): Int = {
>     // do something
>     0 // placeholder so the sample compiles
>   }
> }
> {code}
> If we modify the code in UserDefinedFunctionUtils.getSignature() so that 
> both signatures pass, the first example works normally. However, the 
> second example raises an exception.
> {noformat}
> Caused by: org.codehaus.commons.compiler.CompileException: Line 58, Column 0: 
> No applicable constructor/method found for actual parameters 
> "java.lang.String"; candidates are: "public java.lang.Object 
> test.SimpleUDF.eval(scala.collection.Seq)"
>   at org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:11523) 
> ~[janino-3.0.6.jar:?]
>   at 
> org.codehaus.janino.UnitCompiler.findMostSpecificIInvocable(UnitCompiler.java:8679)
>  ~[janino-3.0.6.jar:?]
>   at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:8539) 
> ~[janino-3.0.6.jar:?]
> {noformat} 
> The reason is that Scala applies a sugary rewrite to the method signature: 
> the method {code}def eval(args: Any*){code} becomes 
> {code}def eval(args: scala.collection.Seq){code} in the class file.
> The code generation emits Java code. If the generated Java uses the 
> Java-style call {code}eval(Object... args){code} to invoke the Scala method, 
> it raises the above exception.
> However, we can't restrict users to writing UDF/UDTFs in Java. Any ideas on 
> supporting variable types and variable arguments in Scala UDF/UDTFs without 
> causing this compilation failure?





[jira] [Commented] (FLINK-5826) UDF/UDTF should support variable types and variable arguments

2017-02-16 Thread Zhuoluo Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871338#comment-15871338
 ] 

Zhuoluo Yang commented on FLINK-5826:
-

I have already made a small modification in 
{code}UserDefinedFunctionUtils.getSignature(){code}. The basic idea is to let 
both "Object..." and "Any*" pass and return the corresponding signature. This 
modification works for Java only; the Scala case still fails. Since the code 
generation produces Java code, there will be problems calling 
{code}eval(Seq){code} in the generated Java. However, there is no problem at 
all in calling {code}eval(Object... args){code} in the generated Java.

> UDF/UDTF should support variable types and variable arguments
> -
>
> Key: FLINK-5826
> URL: https://issues.apache.org/jira/browse/FLINK-5826
> Project: Flink
>  Issue Type: Improvement
>Reporter: Zhuoluo Yang
>Assignee: Zhuoluo Yang
>
> In some cases, a UDF/UDTF should support variable argument types and a 
> variable number of arguments. Many UDF/UDTF developers want to keep the 
> number and types of arguments flexible for their users.
> Thus, we should support the following styles of UDF/UDTFs.
> For example 1, in Java:
> {code:java}
> public class SimpleUDF extends ScalarFunction {
>   public int eval(Object... args) {
>       // do something
>       return 0; // placeholder so the sample compiles
>   }
> }
> {code}
> For example 2, in Scala:
> {code}
> class SimpleUDF extends ScalarFunction {
>   def eval(args: Any*): Int = {
>     // do something
>     0 // placeholder so the sample compiles
>   }
> }
> {code}
> If we modify the code in UserDefinedFunctionUtils.getSignature() so that 
> both signatures pass, the first example works normally. However, the 
> second example raises an exception.
> {noformat}
> Caused by: org.codehaus.commons.compiler.CompileException: Line 58, Column 0: 
> No applicable constructor/method found for actual parameters 
> "java.lang.String"; candidates are: "public java.lang.Object 
> test.SimpleUDF.eval(scala.collection.Seq)"
>   at org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:11523) 
> ~[janino-3.0.6.jar:?]
>   at 
> org.codehaus.janino.UnitCompiler.findMostSpecificIInvocable(UnitCompiler.java:8679)
>  ~[janino-3.0.6.jar:?]
>   at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:8539) 
> ~[janino-3.0.6.jar:?]
> {noformat} 
> The reason is that Scala applies a sugary rewrite to the method signature: 
> the method {code}def eval(args: Any*){code} becomes 
> {code}def eval(args: scala.collection.Seq){code} in the class file.
> The code generation emits Java code. If the generated Java uses the 
> Java-style call {code}eval(Object... args){code} to invoke the Scala method, 
> it raises the above exception.
> However, we can't restrict users to writing UDF/UDTFs in Java. Any ideas on 
> supporting variable types and variable arguments in Scala UDF/UDTFs without 
> causing this compilation failure?





[jira] [Created] (FLINK-5826) UDF/UDTF should support variable types and variable arguments

2017-02-16 Thread Zhuoluo Yang (JIRA)
Zhuoluo Yang created FLINK-5826:
---

 Summary: UDF/UDTF should support variable types and variable 
arguments
 Key: FLINK-5826
 URL: https://issues.apache.org/jira/browse/FLINK-5826
 Project: Flink
  Issue Type: Improvement
Reporter: Zhuoluo Yang
Assignee: Zhuoluo Yang


In some cases, a UDF/UDTF should support variable argument types and a 
variable number of arguments. Many UDF/UDTF developers want to keep the number 
and types of arguments flexible for their users.

Thus, we should support the following styles of UDF/UDTFs.

For example 1, in Java:
{code:java}
public class SimpleUDF extends ScalarFunction {
    public int eval(Object... args) {
        // do something
        return 0; // placeholder so the sample compiles
    }
}
{code}

For example 2, in Scala:
{code}
class SimpleUDF extends ScalarFunction {
  def eval(args: Any*): Int = {
    // do something
    0 // placeholder so the sample compiles
  }
}
{code}

If we modify the code in UserDefinedFunctionUtils.getSignature() so that both 
signatures pass, the first example works normally. However, the second example 
raises an exception.

{noformat}
Caused by: org.codehaus.commons.compiler.CompileException: Line 58, Column 0: 
No applicable constructor/method found for actual parameters 
"java.lang.String"; candidates are: "public java.lang.Object 
test.SimpleUDF.eval(scala.collection.Seq)"
  at org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:11523) 
~[janino-3.0.6.jar:?]
  at 
org.codehaus.janino.UnitCompiler.findMostSpecificIInvocable(UnitCompiler.java:8679)
 ~[janino-3.0.6.jar:?]
  at org.codehaus.janino.UnitCompiler.findIMethod(UnitCompiler.java:8539) 
~[janino-3.0.6.jar:?]
{noformat} 

The reason is that Scala applies a sugary rewrite to the method signature: 
the method {code}def eval(args: Any*){code} becomes 
{code}def eval(args: scala.collection.Seq){code} in the class file.

The code generation emits Java code. If the generated Java uses the Java-style 
call {code}eval(Object... args){code} to invoke the Scala method, it raises 
the above exception.

However, we can't restrict users to writing UDF/UDTFs in Java. Any ideas on 
supporting variable types and variable arguments in Scala UDF/UDTFs without 
causing this compilation failure?





[jira] [Commented] (FLINK-5825) In yarn mode, a small pic can not be loaded

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871299#comment-15871299
 ] 

ASF GitHub Bot commented on FLINK-5825:
---

GitHub user WangTaoTheTonic opened a pull request:

https://github.com/apache/flink/pull/3337

[FLINK-5825][UI]make the cite path relative to show it correctly

In yarn mode, the web frontend url is accessed through YARN's proxy, in a format like 
"http://spark-91-206:8088/proxy/application_1487122678902_0015/", and the 
running job page's url is 
"http://spark-91-206:8088/proxy/application_1487122678902_0015/#/jobs/9440a129ea5899c16e7c1a7e8f2897b3".
One very small .png file, "horizontal.png", cannot be loaded in that mode, 
because "index.styl" references it by an absolute path.
We should make the path relative.

- [ ] Tests & Build
Later, I will paste the difference between before and after.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/WangTaoTheTonic/flink FLINK-5825

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3337.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3337


commit c2e2711b2f815a4aa1b9c8be2478c963c41da245
Author: WangTaoTheTonic 
Date:   2017-02-17T06:48:14Z

make the cite path relative to show it correctly




> In yarn mode, a small pic can not be loaded
> ---
>
> Key: FLINK-5825
> URL: https://issues.apache.org/jira/browse/FLINK-5825
> Project: Flink
>  Issue Type: Bug
>  Components: Webfrontend, YARN
>Reporter: Tao Wang
>Priority: Minor
>
> In yarn mode, the web frontend url is accessed through YARN's proxy, in a format like 
> "http://spark-91-206:8088/proxy/application_1487122678902_0015/", and the 
> running job page's url is 
> "http://spark-91-206:8088/proxy/application_1487122678902_0015/#/jobs/9440a129ea5899c16e7c1a7e8f2897b3".
> One very small .png file, "horizontal.png", cannot be loaded in that mode, 
> because "index.styl" references it by an absolute path.
> We should make the path relative.





[GitHub] flink pull request #3337: [FLINK-5825][UI]make the cite path relative to sho...

2017-02-16 Thread WangTaoTheTonic
GitHub user WangTaoTheTonic opened a pull request:

https://github.com/apache/flink/pull/3337

[FLINK-5825][UI]make the cite path relative to show it correctly

In yarn mode, the web frontend url is accessed through YARN's proxy, in a format like 
"http://spark-91-206:8088/proxy/application_1487122678902_0015/", and the 
running job page's url is 
"http://spark-91-206:8088/proxy/application_1487122678902_0015/#/jobs/9440a129ea5899c16e7c1a7e8f2897b3".
One very small .png file, "horizontal.png", cannot be loaded in that mode, 
because "index.styl" references it by an absolute path.
We should make the path relative.

- [ ] Tests & Build
Later, I will paste the difference between before and after.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/WangTaoTheTonic/flink FLINK-5825

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3337.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3337


commit c2e2711b2f815a4aa1b9c8be2478c963c41da245
Author: WangTaoTheTonic 
Date:   2017-02-17T06:48:14Z

make the cite path relative to show it correctly






[jira] [Created] (FLINK-5825) In yarn mode, a small pic can not be loaded

2017-02-16 Thread Tao Wang (JIRA)
Tao Wang created FLINK-5825:
---

 Summary: In yarn mode, a small pic can not be loaded
 Key: FLINK-5825
 URL: https://issues.apache.org/jira/browse/FLINK-5825
 Project: Flink
  Issue Type: Bug
  Components: Webfrontend, YARN
Reporter: Tao Wang
Priority: Minor


In yarn mode, the web frontend url is accessed through YARN's proxy, in a format like 
"http://spark-91-206:8088/proxy/application_1487122678902_0015/", and the 
running job page's url is 
"http://spark-91-206:8088/proxy/application_1487122678902_0015/#/jobs/9440a129ea5899c16e7c1a7e8f2897b3".

One very small .png file, "horizontal.png", cannot be loaded in that mode, 
because "index.styl" references it by an absolute path.

We should make the path relative.





[jira] [Commented] (FLINK-5792) Improve “UDF/UDTF" to support constructor with parameter.

2017-02-16 Thread Zhuoluo Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871230#comment-15871230
 ] 

Zhuoluo Yang commented on FLINK-5792:
-

Hi [~sunjincheng121], I think this feature is very important to FLINK-5802, 
because we need to pass something (e.g., a Hive UDF) to Flink's UDF/UDTF. 
Serialization would be a good approach, IMHO.
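As a hedged sketch of what this would enable, assuming {{UserDefinedFunction}} inherits {{Serializable}} as proposed (the class name, the prefix parameter, and the import path are illustrative):

{code:java}
import org.apache.flink.table.functions.ScalarFunction;

// Illustrative only: a UDF that carries constructor state. This works once
// instances are serialized into the generated code instead of being
// re-created through a no-argument constructor.
public class PrefixUDF extends ScalarFunction {
    private final String prefix; // state captured at construction time

    public PrefixUDF(String prefix) {
        this.prefix = prefix;
    }

    public String eval(String value) {
        return prefix + value;
    }
}
{code}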

> Improve “UDF/UDTF" to support constructor with parameter.
> -
>
> Key: FLINK-5792
> URL: https://issues.apache.org/jira/browse/FLINK-5792
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> Currently, in the codegen phase, UDF/UDTF instances are created with a 
> no-argument constructor, so users cannot include state values in a UDF/UDTF. 
> The UDF/UDTF codegen phase could use a serialization mechanism instead, so 
> that a UDF/UDTF can contain state values.
> 1. Make UserDefinedFunction inherit Serializable.
> 2. Modify the UDF/UDTF part of the CodeGenerator.
> 3. Modify the Table API's UDF/UDTF handling.
> 4. Add tests.





[jira] [Commented] (FLINK-5414) Bump up Calcite version to 1.11

2017-02-16 Thread Jark Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871154#comment-15871154
 ] 

Jark Wu commented on FLINK-5414:


Hi [~wheat9], sorry for the delay. Actually, I ran into some problems while 
working on this issue. 

I will try to solve them today and update my progress here. 

> Bump up Calcite version to 1.11
> ---
>
> Key: FLINK-5414
> URL: https://issues.apache.org/jira/browse/FLINK-5414
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Jark Wu
>
> The upcoming Calcite release 1.11 has a lot of stability fixes and new 
> features. We should update it for the Table API.
> E.g. we can hopefully merge FLINK-4864





[GitHub] flink pull request #3336: [FLINK-4856][state] Add MapState in KeyedState

2017-02-16 Thread shixiaogang
GitHub user shixiaogang opened a pull request:

https://github.com/apache/flink/pull/3336

[FLINK-4856][state] Add MapState in KeyedState

1. Add `MapState` and `MapStateDescriptor`.
2. Implement `MapState` in `HeapKeyedStateBackend` and 
`RocksDBKeyedStateBackend`.
3. Add accessors for `MapState` to `RuntimeContext` (usage sketch below).
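For illustration, a hedged usage sketch of the accessors described above (the descriptor and accessor names follow `MapState`'s eventual public API and may differ from this PR; the word-count logic is invented):

```java
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

// Counts occurrences per word; each update touches a single map entry
// instead of (de)serializing a whole map stored in a ValueState.
public class WordCounter extends RichFlatMapFunction<String, Long> {
    private transient MapState<String, Long> counts;

    @Override
    public void open(Configuration parameters) throws Exception {
        counts = getRuntimeContext().getMapState(
            new MapStateDescriptor<>("counts", String.class, Long.class));
    }

    @Override
    public void flatMap(String word, Collector<Long> out) throws Exception {
        Long previous = counts.get(word);
        long updated = previous == null ? 1L : previous + 1L;
        counts.put(word, updated);
        out.collect(updated);
    }
}
```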

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alibaba/flink flink-4856

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3336.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3336


commit 430b4f596acbff0a9dfdc20fbb2430a8fad819f9
Author: xiaogang.sxg 
Date:   2017-02-17T03:19:18Z

Add MapState in KeyedState






[jira] [Commented] (FLINK-4856) Add MapState for keyed streams

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871118#comment-15871118
 ] 

ASF GitHub Bot commented on FLINK-4856:
---

GitHub user shixiaogang opened a pull request:

https://github.com/apache/flink/pull/3336

[FLINK-4856][state] Add MapState in KeyedState

1. Add `MapState` and `MapStateDescriptor`.
2. Implement `MapState` in `HeapKeyedStateBackend` and 
`RocksDBKeyedStateBackend`.
3. Add accessors for `MapState` to `RuntimeContext`.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alibaba/flink flink-4856

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/3336.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3336


commit 430b4f596acbff0a9dfdc20fbb2430a8fad819f9
Author: xiaogang.sxg 
Date:   2017-02-17T03:19:18Z

Add MapState in KeyedState




> Add MapState for keyed streams
> --
>
> Key: FLINK-4856
> URL: https://issues.apache.org/jira/browse/FLINK-4856
> Project: Flink
>  Issue Type: New Feature
>  Components: State Backends, Checkpointing
>Reporter: Xiaogang Shi
>Assignee: Xiaogang Shi
>
> Many states in keyed streams are organized as key-value pairs. Currently, 
> these states are implemented by storing the entire map in a ValueState or a 
> ListState. This implementation, however, is very costly because all entries 
> have to be serialized/deserialized when a single entry is updated. To 
> improve the efficiency of these states, MapStates are urgently needed. 





[jira] [Commented] (FLINK-5751) 404 in documentation

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871105#comment-15871105
 ] 

ASF GitHub Bot commented on FLINK-5751:
---

Github user patricklucas commented on a diff in the pull request:

https://github.com/apache/flink/pull/3332#discussion_r101678241
  
--- Diff: docs/check_links.sh ---
@@ -0,0 +1,39 @@
+#!/usr/bin/env bash

+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.

+
+
+# Don't abort on any non-zero exit code
+#set +e
+
+target=${1:-"http://localhost:4000"}
+
+# Crawl the docs, ignoring robots.txt, storing nothing locally
+wget --spider -r -nd -nv -e robots=off -p -o spider.log $target
--- End diff --

$target in quotes


> 404 in documentation
> 
>
> Key: FLINK-5751
> URL: https://issues.apache.org/jira/browse/FLINK-5751
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Colin Breame
>Priority: Trivial
> Fix For: 1.3.0, 1.2.1
>
>
> This page:
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/quickstart/setup_quickstart.html
> Contains a link with title "Flink on Windows" with URL:
> - 
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/setup/flink_on_windows
> This gives a 404.  It should be:
> - 
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/setup/flink_on_windows.html





[GitHub] flink pull request #3332: [FLINK-5751] [docs] Add link check script

2017-02-16 Thread patricklucas
Github user patricklucas commented on a diff in the pull request:

https://github.com/apache/flink/pull/3332#discussion_r101678217
  
--- Diff: docs/check_links.sh ---
@@ -0,0 +1,39 @@
+#!/usr/bin/env bash

+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.

+
+
+# Don't abort on any non-zero exit code
+#set +e
--- End diff --

This is a vestige from me writing this as a Jenkins job; you can remove 
this line and the comment above it.




[jira] [Commented] (FLINK-5751) 404 in documentation

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871106#comment-15871106
 ] 

ASF GitHub Bot commented on FLINK-5751:
---

Github user patricklucas commented on a diff in the pull request:

https://github.com/apache/flink/pull/3332#discussion_r101678217
  
--- Diff: docs/check_links.sh ---
@@ -0,0 +1,39 @@
+#!/usr/bin/env bash

+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.

+
+
+# Don't abort on any non-zero exit code
+#set +e
--- End diff --

This is a vestige from me writing this as a Jenkins job; you can remove 
this line and the comment above it.


> 404 in documentation
> 
>
> Key: FLINK-5751
> URL: https://issues.apache.org/jira/browse/FLINK-5751
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Colin Breame
>Priority: Trivial
> Fix For: 1.3.0, 1.2.1
>
>
> This page:
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/quickstart/setup_quickstart.html
> Contains a link with title "Flink on Windows" with URL:
> - 
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/setup/flink_on_windows
> This gives a 404.  It should be:
> - 
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/setup/flink_on_windows.html





[GitHub] flink pull request #3332: [FLINK-5751] [docs] Add link check script

2017-02-16 Thread patricklucas
Github user patricklucas commented on a diff in the pull request:

https://github.com/apache/flink/pull/3332#discussion_r101678631
  
--- Diff: docs/check_links.sh ---
@@ -0,0 +1,39 @@
+#!/usr/bin/env bash

+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.

+
+
+# Don't abort on any non-zero exit code
+#set +e
+
+target=${1:-"http://localhost:4000"}
+
+# Crawl the docs, ignoring robots.txt, storing nothing locally
+wget --spider -r -nd -nv -e robots=off -p -o spider.log $target
+
+# Abort for anything other than 0 and 4 ("Network failure")
+status=$?
+if [ $status -ne 0 ] && [ $status -ne 4 ]; then
+exit $status
+fi
+
+# Fail the build if any broken links are found
+broken_links_str=$(grep -e 'Found [[:digit:]]\+ broken link' spider.log)
+echo -e "$broken_links_str"
--- End diff --

If you're going to get rid of my pretty bold-red formatting you can drop 
the `-e` too. :)

Actually, might as well put the `echo` line inside the `if` too.




[GitHub] flink pull request #3332: [FLINK-5751] [docs] Add link check script

2017-02-16 Thread patricklucas
Github user patricklucas commented on a diff in the pull request:

https://github.com/apache/flink/pull/3332#discussion_r101678241
  
--- Diff: docs/check_links.sh ---
@@ -0,0 +1,39 @@
+#!/usr/bin/env bash

+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.

+
+
+# Don't abort on any non-zero exit code
+#set +e
+
+target=${1:-"http://localhost:4000"}
+
+# Crawl the docs, ignoring robots.txt, storing nothing locally
+wget --spider -r -nd -nv -e robots=off -p -o spider.log $target
--- End diff --

$target in quotes




[jira] [Commented] (FLINK-5751) 404 in documentation

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871107#comment-15871107
 ] 

ASF GitHub Bot commented on FLINK-5751:
---

Github user patricklucas commented on a diff in the pull request:

https://github.com/apache/flink/pull/3332#discussion_r101678631
  
--- Diff: docs/check_links.sh ---
@@ -0,0 +1,39 @@
+#!/usr/bin/env bash

+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.

+
+
+# Don't abort on any non-zero exit code
+#set +e
+
+target=${1:-"http://localhost:4000"}
+
+# Crawl the docs, ignoring robots.txt, storing nothing locally
+wget --spider -r -nd -nv -e robots=off -p -o spider.log $target
+
+# Abort for anything other than 0 and 4 ("Network failure")
+status=$?
+if [ $status -ne 0 ] && [ $status -ne 4 ]; then
+exit $status
+fi
+
+# Fail the build if any broken links are found
+broken_links_str=$(grep -e 'Found [[:digit:]]\+ broken link' spider.log)
+echo -e "$broken_links_str"
--- End diff --

If you're going to get rid of my pretty bold-red formatting you can drop 
the `-e` too. :)

Actually, might as well put the `echo` line inside the `if` too.


> 404 in documentation
> 
>
> Key: FLINK-5751
> URL: https://issues.apache.org/jira/browse/FLINK-5751
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Colin Breame
>Priority: Trivial
> Fix For: 1.3.0, 1.2.1
>
>
> This page:
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/quickstart/setup_quickstart.html
> Contains a link with title "Flink on Windows" with URL:
> - 
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/setup/flink_on_windows
> This gives a 404.  It should be:
> - 
> https://ci.apache.org/projects/flink/flink-docs-release-1.2/setup/flink_on_windows.html





[jira] [Commented] (FLINK-5817) Fix test concurrent execution failure by test dir conflicts.

2017-02-16 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871082#comment-15871082
 ] 

shijinkui commented on FLINK-5817:
--

[~StephanEwen] [~wenlong.lwl], there is no overlap between FLINK-5546 and 
FLINK-5817. FLINK-5546 focuses on changing the default java.io.tmpdir system 
property.
This issue can find all the temporary directories created via new File and 
replace them with TemporaryFolder, following Stephan's suggestion.
If you search all the *Test.* files for the keyword "new File(", you'll find 
plenty of bad smells that need to be corrected.
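For illustration, a hedged sketch of the TemporaryFolder pattern suggested above (the test class and folder names are invented):

{code:java}
import java.io.File;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class FileCacheTest {
    // JUnit creates a fresh, uniquely named directory for each test run and
    // deletes it afterwards, so concurrent builds by different users on the
    // same machine cannot collide on a fixed path.
    @Rule
    public final TemporaryFolder tempFolder = new TemporaryFolder();

    @Test
    public void testUsesIsolatedDirectory() throws Exception {
        File cacheDir = tempFolder.newFolder("file-cache");
        // ... exercise the code under test against cacheDir
        // instead of new File("/tmp/some-fixed-name")
    }
}
{code}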

> Fix test concurrent execution failure by test dir conflicts.
> 
>
> Key: FLINK-5817
> URL: https://issues.apache.org/jira/browse/FLINK-5817
> Project: Flink
>  Issue Type: Bug
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>
> Currently, when different users build Flink on the same machine, failures 
> may happen because some test utilities create test files with fixed names, 
> which makes file access fail when different users process the same file at 
> the same time.
> We have found errors from AbstractTestBase, IOManagerTest, and FileCacheTest.





[jira] [Commented] (FLINK-5818) change checkpoint dir permission to 700 for security reason

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871046#comment-15871046
 ] 

ASF GitHub Bot commented on FLINK-5818:
---

Github user WangTaoTheTonic commented on the issue:

https://github.com/apache/flink/pull/3335
  
Hi @greghogan , I'm not sure I understand the relationship between HDFS 
ACLs and this change I proposed. Could you explain more specifically? Thanks.


> change checkpoint dir permission to 700 for security reason
> ---
>
> Key: FLINK-5818
> URL: https://issues.apache.org/jira/browse/FLINK-5818
> Project: Flink
>  Issue Type: Improvement
>  Components: Security, State Backends, Checkpointing
>Reporter: Tao Wang
>
> Currently the checkpoint directory is created without a specified 
> permission, so it is easy for another user to delete or read files under it, 
> which can cause restore failures or information leaks.
> It's better to lower the permission to 700.
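For illustration, a hedged sketch of creating the directory owner-only with the Hadoop FileSystem API (the helper class and method names are invented):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public final class CheckpointDirs {
    // Create the checkpoint directory readable/writable/traversable by the
    // owner only (rwx------) instead of inheriting the default permission.
    public static void createOwnerOnly(Configuration conf, Path dir) throws Exception {
        FsPermission ownerOnly = new FsPermission((short) 0700);
        FileSystem fs = dir.getFileSystem(conf);
        fs.mkdirs(dir, ownerOnly);
        // mkdirs applies the client umask, so set the permission explicitly
        fs.setPermission(dir, ownerOnly);
    }
}
{code}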





[GitHub] flink issue #3335: [FLINK-5818][Security]change checkpoint dir permission to...

2017-02-16 Thread WangTaoTheTonic
Github user WangTaoTheTonic commented on the issue:

https://github.com/apache/flink/pull/3335
  
Hi @greghogan , I'm not sure I understand the relationship between HDFS 
ACLs and this change I proposed. Could you explain more specifically? Thanks.




[jira] [Commented] (FLINK-5183) [py] Support multiple jobs per Python plan file

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870581#comment-15870581
 ] 

ASF GitHub Bot commented on FLINK-5183:
---

Github user GEOFBOT commented on the issue:

https://github.com/apache/flink/pull/3232
  
> It may have worked with a smaller file, but there may be issues with 
> heavier jobs.

How silly of me. This problem had nothing to do with this pull request, 
with YARN, with issues in Flink, or with the size of the input file at all. I 
was using `ExecutionEnvironment.from_elements` to generate a large sequence of 
indexed zeroes to fill in the gaps of another indexed DataSet with zeroes. 
However, when I was using large input files, I set larger parameters and 
generated larger zero sequences. Because I was using `from_elements`, the 
client needed to send all of those values (lots and lots of zeroes) to the 
runtime, which was very time-consuming and caused the timeout. I have replaced 
this with a `generate_sequence` call and a map function, which does not require 
sending lots and lots of values from the client to the runtime, and the job 
(and this pull request) seems to work just fine.

(change in question: 
https://github.com/quinngroup/pyflink-r1dl/commit/00a16d564bfad21fc1f4958677ada0a95fa9f088)
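The original fix was in the Python API, but the same trade-off can be sketched with Flink's Java DataSet API for illustration (the sequence length and names are invented):

```java
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class ZeroSequence {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Anti-pattern: fromElements/from_elements embeds every value in the
        // submitted plan, so the client ships millions of zeroes to the runtime.

        // Better: ship only the range bounds and create the values on the
        // workers, mirroring the generate_sequence + map fix described above.
        DataSet<Long> zeros = env.generateSequence(0, 10_000_000)
            .map(new MapFunction<Long, Long>() {
                @Override
                public Long map(Long index) {
                    return 0L;
                }
            });

        System.out.println(zeros.count());
    }
}
```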


> [py] Support multiple jobs per Python plan file
> ---
>
> Key: FLINK-5183
> URL: https://issues.apache.org/jira/browse/FLINK-5183
> Project: Flink
>  Issue Type: Improvement
>  Components: Python API
>Affects Versions: 1.1.3
>Reporter: Geoffrey Mon
>Priority: Minor
>
> Support running multiple jobs per Python plan file.





[GitHub] flink issue #3232: [FLINK-5183] [py] Support mulitple jobs per plan file

2017-02-16 Thread GEOFBOT
Github user GEOFBOT commented on the issue:

https://github.com/apache/flink/pull/3232
  
> It may have worked with a smaller file, but there may be issues with 
> heavier jobs.

How silly of me. This problem had nothing to do with this pull request, 
with YARN, with issues in Flink, or with the size of the input file at all. I 
was using `ExecutionEnvironment.from_elements` to generate a large sequence of 
indexed zeroes to fill in the gaps of another indexed DataSet with zeroes. 
However, when I was using large input files, I set larger parameters and 
generated larger zero sequences. Because I was using `from_elements`, the 
client needed to send all of those values (lots and lots of zeroes) to the 
runtime, which was very time-consuming and caused the timeout. I have replaced 
this with a `generate_sequence` call and a map function, which does not require 
sending lots and lots of values from the client to the runtime, and the job 
(and this pull request) seems to work just fine.

(change in question: 
https://github.com/quinngroup/pyflink-r1dl/commit/00a16d564bfad21fc1f4958677ada0a95fa9f088)




[jira] [Commented] (FLINK-5414) Bump up Calcite version to 1.11

2017-02-16 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870558#comment-15870558
 ] 

Haohui Mai commented on FLINK-5414:
---

[~jark], is there any progress on this JIRA? We are interested in it as 
well. If you're busy, I'm happy to help with it.

> Bump up Calcite version to 1.11
> ---
>
> Key: FLINK-5414
> URL: https://issues.apache.org/jira/browse/FLINK-5414
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Jark Wu
>
> The upcoming Calcite release 1.11 has a lot of stability fixes and new 
> features. We should update it for the Table API.
> E.g. we can hopefully merge FLINK-4864





[GitHub] flink issue #3302: [FLINK-5710] Add ProcTime() function to indicate StreamSQ...

2017-02-16 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3302
  
FYI: PR #3252 was just merged.




[jira] [Commented] (FLINK-5710) Add ProcTime() function to indicate StreamSQL

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870346#comment-15870346
 ] 

ASF GitHub Bot commented on FLINK-5710:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3302
  
FYI: PR #3252 was just merged.


> Add ProcTime() function to indicate StreamSQL
> -
>
> Key: FLINK-5710
> URL: https://issues.apache.org/jira/browse/FLINK-5710
> Project: Flink
>  Issue Type: New Feature
>  Components: DataStream API
>Reporter: Stefano Bortoli
>Assignee: Stefano Bortoli
>Priority: Minor
>
> procTime() is a parameterless scalar function that just indicates processing 
> time mode.





[jira] [Closed] (FLINK-5624) Support tumbling window on streaming tables in the SQL API

2017-02-16 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-5624.

   Resolution: Implemented
Fix Version/s: 1.3.0

Implemented for 1.3.0 with 8304f3e159851d29691e66cacfcb4278d73a8b97

> Support tumbling window on streaming tables in the SQL API
> --
>
> Key: FLINK-5624
> URL: https://issues.apache.org/jira/browse/FLINK-5624
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 1.3.0
>
>
> This is a follow-up of FLINK-4691.
> FLINK-4691 adds support for group windows on streaming tables. This jira 
> proposes to expose the functionality in the SQL layer via the {{GROUP BY}} 
> clause, as described in 
> http://calcite.apache.org/docs/stream.html#tumbling-windows.
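For illustration, a hedged sketch of the SQL surface this enables, following the Calcite tumbling-window syntax linked above (the table and column names, and the exact query method on {{TableEnvironment}}, are assumptions):

{code:java}
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class TumblingWindowSql {
    // Assumes a streaming table "Orders" with columns user and amount plus a
    // time attribute rowtime has already been registered on tableEnv.
    public static Table hourlySums(TableEnvironment tableEnv) {
        return tableEnv.sql(
            "SELECT user, SUM(amount) "
            + "FROM Orders "
            + "GROUP BY TUMBLE(rowtime, INTERVAL '1' HOUR), user");
    }
}
{code}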





[jira] [Commented] (FLINK-5624) Support tumbling window on streaming tables in the SQL API

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870339#comment-15870339
 ] 

ASF GitHub Bot commented on FLINK-5624:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3252


> Support tumbling window on streaming tables in the SQL API
> --
>
> Key: FLINK-5624
> URL: https://issues.apache.org/jira/browse/FLINK-5624
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> This is a follow-up of FLINK-4691.
> FLINK-4691 adds support for group windows on streaming tables. This jira 
> proposes to expose the functionality in the SQL layer via the {{GROUP BY}} 
> clause, as described in 
> http://calcite.apache.org/docs/stream.html#tumbling-windows.





[GitHub] flink pull request #3252: [FLINK-5624] Support tumbling window on streaming ...

2017-02-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3252




[GitHub] flink issue #3322: [FLINK-4813][flink-test-utils] make the hadoop-minikdc de...

2017-02-16 Thread NicoK
Github user NicoK commented on the issue:

https://github.com/apache/flink/pull/3322
  
Sure, that makes sense.
Actually, I only had to add it to the flink-test-utils sub-project, since 
all the others already included the bundler :)




[jira] [Commented] (FLINK-4813) Having flink-test-utils as a dependency outside Flink fails the build

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870240#comment-15870240
 ] 

ASF GitHub Bot commented on FLINK-4813:
---

Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/3322#discussion_r101565321
  
--- Diff: 
flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/test/util/SecureTestEnvironment.java
 ---
@@ -37,9 +37,39 @@
 /**
  * Helper {@link SecureTestEnvironment} to handle MiniKDC lifecycle.
  * This class can be used to start/stop MiniKDC and create secure 
configurations for MiniDFSCluster
- * and MiniYarn
+ * and MiniYarn.
+ *
+ * If you use this class in your project, please make sure to add a 
dependency to
+ * hadoop-minikdc, e.g. in your pom.xml:
+ * 
+ * ...
+ * <dependencies>
--- End diff --

If you write the example code as follows, you can use the `<` character 
directly and make the sample more readable.
```
{@code
<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    ...
  </dependency>
</dependencies>
}
```


> Having flink-test-utils as a dependency outside Flink fails the build
> -
>
> Key: FLINK-4813
> URL: https://issues.apache.org/jira/browse/FLINK-4813
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
>Assignee: Nico Kruber
>
> The {{flink-test-utils}} depend on {{hadoop-minikdc}}, which has a 
> dependency that is only resolvable if the {{maven-bundle-plugin}} is 
> loaded.
> This is the error message
> {code}
> [ERROR] Failed to execute goal on project quickstart-1.2-tests: Could not 
> resolve dependencies for project 
> com.dataartisans:quickstart-1.2-tests:jar:1.0-SNAPSHOT: Failure to find 
> org.apache.directory.jdbm:apacheds-jdbm1:bundle:2.0.0-M2 in 
> https://repo.maven.apache.org/maven2 was cached in the local repository, 
> resolution will not be reattempted until the update interval of central has 
> elapsed or updates are forced -> [Help 1]
> {code}
> {{flink-parent}} loads that plugin, so all "internal" dependencies to the 
> test utils can resolve the plugin.
> Right now, users have to use the maven bundle plugin to use our test utils 
> externally.
> By making the hadoop minikdc dependency optional, we can probably resolve the 
> issues. Then, only users who want to use the security-related tools in the 
> test utils need to manually add the hadoop minikdc dependency + the plugin.





[GitHub] flink pull request #3322: [FLINK-4813][flink-test-utils] make the hadoop-min...

2017-02-16 Thread StephanEwen
Github user StephanEwen commented on a diff in the pull request:

https://github.com/apache/flink/pull/3322#discussion_r101565321
  
--- Diff: 
flink-test-utils-parent/flink-test-utils/src/main/java/org/apache/flink/test/util/SecureTestEnvironment.java
 ---
@@ -37,9 +37,39 @@
 /**
  * Helper {@link SecureTestEnvironment} to handle MiniKDC lifecycle.
  * This class can be used to start/stop MiniKDC and create secure 
configurations for MiniDFSCluster
- * and MiniYarn
+ * and MiniYarn.
+ *
+ * If you use this class in your project, please make sure to add a 
dependency to
+ * hadoop-minikdc, e.g. in your pom.xml:
+ * 
+ * ...
+ * <dependencies>
--- End diff --

If you write the example code as follows, you can use the `<` character 
directly and make the sample more readable.
```
{@code
<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    ...
  </dependency>
</dependencies>
}
```




[jira] [Commented] (FLINK-4813) Having flink-test-utils as a dependency outside Flink fails the build

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870238#comment-15870238
 ] 

ASF GitHub Bot commented on FLINK-4813:
---

Github user NicoK commented on the issue:

https://github.com/apache/flink/pull/3322
  
Sure, that makes sense.
Actually, I only had to add it to the flink-test-utils sub-project, since 
all the others already included the bundler :)


> Having flink-test-utils as a dependency outside Flink fails the build
> -
>
> Key: FLINK-4813
> URL: https://issues.apache.org/jira/browse/FLINK-4813
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
>Assignee: Nico Kruber
>
> The {{flink-test-utils}} depend on {{hadoop-minikdc}}, which has a 
> dependency that is only resolvable if the {{maven-bundle-plugin}} is 
> loaded.
> This is the error message
> {code}
> [ERROR] Failed to execute goal on project quickstart-1.2-tests: Could not 
> resolve dependencies for project 
> com.dataartisans:quickstart-1.2-tests:jar:1.0-SNAPSHOT: Failure to find 
> org.apache.directory.jdbm:apacheds-jdbm1:bundle:2.0.0-M2 in 
> https://repo.maven.apache.org/maven2 was cached in the local repository, 
> resolution will not be reattempted until the update interval of central has 
> elapsed or updates are forced -> [Help 1]
> {code}
> {{flink-parent}} loads that plugin, so all "internal" dependencies to the 
> test utils can resolve the plugin.
> Right now, users have to use the maven bundle plugin to use our test utils 
> externally.
> By making the hadoop minikdc dependency optional, we can probably resolve the 
> issues. Then, only users who want to use the security-related tools in the 
> test utils need to manually add the hadoop minikdc dependency + the plugin.





[jira] [Commented] (FLINK-1725) New Partitioner for better load balancing for skewed data

2017-02-16 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870236#comment-15870236
 ] 

Stephan Ewen commented on FLINK-1725:
-

I would like to make this user-driven. While it sounds like an interesting 
idea, so far we have not had a request from a user saying they need something 
like that.
Given that we have to maintain it forever once it is in, and that it is 
something that users need to actively use (it is not some magic improvement 
under the hood), it would be nice to see that users actually want to use it 
before we commit to the maintenance.

> New Partitioner for better load balancing for skewed data
> -
>
> Key: FLINK-1725
> URL: https://issues.apache.org/jira/browse/FLINK-1725
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 0.8.1
>Reporter: Anis Nasir
>Assignee: Anis Nasir
>  Labels: LoadBalancing, Partitioner
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Hi,
> We have recently studied the problem of load balancing in Storm [1].
> In particular, we focused on key distribution of the stream for skewed data.
> We developed a new stream partitioning scheme (which we call Partial Key 
> Grouping). It achieves better load balancing than key grouping while being 
> more scalable than shuffle grouping in terms of memory.
> In the paper we show a number of mining algorithms that are easy to implement 
> with partial key grouping, and whose performance can benefit from it. We 
> think that it might also be useful for a larger class of algorithms.
> Partial key grouping is very easy to implement: it requires just a few lines 
> of code in Java when implemented as a custom grouping in Storm [2].
> For all these reasons, we believe it will be a nice addition to the standard 
> Partitioners available in Flink. If the community thinks it's a good idea, we 
> will be happy to offer support in the porting.
> References:
> [1]. 
> https://melmeric.files.wordpress.com/2014/11/the-power-of-both-choices-practical-load-balancing-for-distributed-stream-processing-engines.pdf
> [2]. https://github.com/gdfm/partial-key-grouping
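
For readers unfamiliar with the scheme, a minimal illustrative sketch of 
partial key grouping as a Flink custom partitioner follows -- an assumption of 
how it could look, not the authors' implementation; the class name and hash 
seeds are made up. Each key hashes to two candidate channels, and every record 
is routed to the one the sender has loaded less so far:

{code:java}
import org.apache.flink.api.common.functions.Partitioner;

public class PartialKeyGroupingPartitioner implements Partitioner<String> {

    // two different seeds emulate two independent hash functions
    private static final int SEED_1 = 0x9747b28c;
    private static final int SEED_2 = 0x7ed55d16;

    // local (per-sender) load counters, one per output channel
    private long[] load;

    @Override
    public int partition(String key, int numPartitions) {
        if (load == null || load.length != numPartitions) {
            load = new long[numPartitions];
        }
        int c1 = ((key.hashCode() ^ SEED_1) & 0x7fffffff) % numPartitions;
        int c2 = ((key.hashCode() ^ SEED_2) & 0x7fffffff) % numPartitions;
        // "power of both choices": pick the locally less loaded candidate
        int chosen = load[c1] <= load[c2] ? c1 : c2;
        load[chosen]++;
        return chosen;
    }
}
{code}

Such a partitioner could be plugged in via {{DataStream.partitionCustom(...)}}; 
note that the downstream operator then has to merge partial aggregates per key, 
since a key may reach two channels.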



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5799) Let RpcService.scheduleRunnable return ScheduledFuture

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870221#comment-15870221
 ] 

ASF GitHub Bot commented on FLINK-5799:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/3311
  
Thanks for the review @StephanEwen.

You mean something like the `Executor` interface with the additional 
methods for scheduled execution? Put differently, removing the `Service` 
methods like `shutdown`, `shutdownNow` and `awaitTermination` from the 
interface? 

That is a good idea. If we stumble across the problem that a component 
needs an actual `ScheduledExecutorService` implementation, then we can still 
have a wrapper which throws an `UnsupportedOperationException` when calling 
the respective methods. I will adapt the PR in that way.
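
For illustration, a rough sketch of the interface shape being discussed -- the 
name and exact method set are assumptions, not the final PR: the scheduling 
methods of {{ScheduledExecutorService}}, minus its lifecycle methods:

{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.Executor;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Executor plus scheduling, but without shutdown/shutdownNow/awaitTermination.
public interface ScheduledExecutor extends Executor {

    ScheduledFuture<?> schedule(Runnable command, long delay, TimeUnit unit);

    <V> ScheduledFuture<V> schedule(Callable<V> callable, long delay, TimeUnit unit);

    ScheduledFuture<?> scheduleAtFixedRate(
        Runnable command, long initialDelay, long period, TimeUnit unit);

    ScheduledFuture<?> scheduleWithFixedDelay(
        Runnable command, long initialDelay, long delay, TimeUnit unit);
}
{code}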


> Let RpcService.scheduleRunnable return ScheduledFuture
> --
>
> Key: FLINK-5799
> URL: https://issues.apache.org/jira/browse/FLINK-5799
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination
>Affects Versions: 1.3.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
> Fix For: 1.3.0
>
>
> Currently, the method {{RpcService.scheduleRunnable}} does not return a 
> control instance for the scheduled runnable. I think it would be good to 
> return a {{ScheduledFuture}} with which one can cancel the scheduled runnable 
> after it has been scheduled, e.g. a timeout registration that has become 
> obsolete. This API is also more in line with the {{ScheduledExecutorService}}, 
> where one also receives a {{ScheduledFuture}} after scheduling a runnable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5640) configure the explicit Unit Test file suffix

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870230#comment-15870230
 ] 

ASF GitHub Bot commented on FLINK-5640:
---

Github user shijinkui commented on the issue:

https://github.com/apache/flink/pull/3211
  
@StephanEwen I have finished that.


> configure the explicit Unit Test file suffix
> 
>
> Key: FLINK-5640
> URL: https://issues.apache.org/jira/browse/FLINK-5640
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: shijinkui
>Assignee: shijinkui
> Fix For: 1.2.1
>
>
> There are four types of test file: *ITCase.java, *Test.java, 
> *ITSuite.scala, *Suite.scala.
> A file name ending with "ITCase.java" is an integration test; a file name 
> ending with "Test.java" is a unit test.
> It is clearer for the Surefire plugin's default-test execution to declare 
> explicitly that "*Test.*" is a Java unit test.
> The test file statistics below:
> * Suite  total: 10
> * ITCase  total: 378
> * Test  total: 1008
> * ITSuite  total: 14



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3211: [FLINK-5640][build]configure the explicit Unit Test file ...

2017-02-16 Thread shijinkui
Github user shijinkui commented on the issue:

https://github.com/apache/flink/pull/3211
  
@StephanEwen I have finished that.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5710) Add ProcTime() function to indicate StreamSQL

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870226#comment-15870226
 ] 

ASF GitHub Bot commented on FLINK-5710:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3302
  
Hi @huawei-flink,

thanks for the update! The changes are now pretty much aligned with #3252. 
I'm in the process of merging #3252 (running the last tests). It would be great 
if you could rebase your changes on top of master once #3252 has been merged.

In order to test the feature, you can integrate `proctime()` into the 
`LogicalWindowAggregateRule` and extend the `WindowAggregateTest` for 
processing time tumbling windows.

Thanks, Fabian


> Add ProcTime() function to indicate StreamSQL
> -
>
> Key: FLINK-5710
> URL: https://issues.apache.org/jira/browse/FLINK-5710
> Project: Flink
>  Issue Type: New Feature
>  Components: DataStream API
>Reporter: Stefano Bortoli
>Assignee: Stefano Bortoli
>Priority: Minor
>
> procTime() is a parameterless scalar function that just indicates 
> processing-time mode.
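
As a hypothetical usage sketch (the table, column names, and exact SQL shape 
are assumptions based on the 1.2-era Table API, not the merged feature), the 
marker function would appear inside a windowed {{GROUP BY}}:

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class ProcTimeExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);

        // proctime() marks the query as running in processing-time mode
        Table result = tEnv.sql(
            "SELECT user, COUNT(*) AS cnt "
                + "FROM clicks "
                + "GROUP BY user, TUMBLE(proctime(), INTERVAL '1' HOUR)");
    }
}
{code}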



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3302: [FLINK-5710] Add ProcTime() function to indicate StreamSQ...

2017-02-16 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3302
  
Hi @huawei-flink,

thanks for the update! The changes are now pretty much aligned with #3252. 
I'm in the process of merging #3252 (running last tests). It would be great if 
you could rebase your changes on top of the master once #3252 has been merged.

In order to test the feature, you can integrate `proctime()` into the 
`LogicalWindowAggregateRule` and extend the `WindowAggregateTest` for 
processing time tumbling windows.

Thanks, Fabian


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3311: [FLINK-5799] Let RpcService.scheduleRunnable return a Sch...

2017-02-16 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/3311
  
Thanks for the review @StephanEwen.

You mean something like the `Executor` interface with the additional 
methods for scheduled execution? Put differently, removing the `Service` 
methods like `shutdown`, `shutdownNow` and `awaitTermination` from the 
interface? 

That is a good idea. If we stumble across the problem that a component 
needs an actual `ScheduledExecutorService` implementation, then we can still 
have a wrapper which throws an `UnsupportedOperationException` when calling 
the respective methods. I will adapt the PR in that way.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5624) Support tumbling window on streaming tables in the SQL API

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870210#comment-15870210
 ] 

ASF GitHub Bot commented on FLINK-5624:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3252
  
Merging


> Support tumbling window on streaming tables in the SQL API
> --
>
> Key: FLINK-5624
> URL: https://issues.apache.org/jira/browse/FLINK-5624
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> This is a follow-up of FLINK-4691.
> FLINK-4691 adds support for group-windows for streaming tables. This JIRA 
> proposes to expose the functionality in the SQL layer via {{GROUP BY}} 
> clauses, as described in 
> http://calcite.apache.org/docs/stream.html#tumbling-windows.
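
The linked Calcite page expresses a tumbling window entirely in the 
{{GROUP BY}} clause; a sketch of that style (the table and column names are 
made up, and the final Flink syntax may differ):

{code:java}
public class TumblingWindowQuery {
    // Calcite-style tumbling window: every row falls into exactly one
    // one-hour bucket, determined by flooring its rowtime to the hour.
    static final String TUMBLING_QUERY =
        "SELECT STREAM FLOOR(rowtime TO HOUR) AS rowtime, user, COUNT(*) AS c "
            + "FROM clicks "
            + "GROUP BY FLOOR(rowtime TO HOUR), user";
}
{code}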



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (FLINK-5814) flink-dist creates wrong symlink when not used with cleaned before

2017-02-16 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-5814.
-
   Resolution: Fixed
Fix Version/s: 1.2.1
   1.3.0

Fixed in
  - 1.2.1 via 6114c5b01d60d37efdd7db47bf9378f8dea4385c
  - 1.3.0 via 2ec2abfae58102af2d29ac65ac907f114ade4839

> flink-dist creates wrong symlink when not used with cleaned before
> --
>
> Key: FLINK-5814
> URL: https://issues.apache.org/jira/browse/FLINK-5814
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.2.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Minor
> Fix For: 1.3.0, 1.2.1
>
>
> If {{/build-target}} already exists, 'mvn package' for flink-dist 
> will create a symbolic link *inside* {{/build-target}} instead of 
> replacing that symlink. This is due to the behaviour of {{ln \-sf}} when the 
> link target is an existing directory and may be solved by adding the 
> {{--no-dereference}} parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (FLINK-5814) flink-dist creates wrong symlink when not used with cleaned before

2017-02-16 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-5814.
---

> flink-dist creates wrong symlink when not used with cleaned before
> --
>
> Key: FLINK-5814
> URL: https://issues.apache.org/jira/browse/FLINK-5814
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.2.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Minor
> Fix For: 1.3.0, 1.2.1
>
>
> If {{/build-target}} already exists, 'mvn package' for flink-dist 
> will create a symbolic link *inside* {{/build-target}} instead of 
> replacing that symlink. This is due to the behaviour of {{ln \-sf}} when the 
> link target is an existing directory and may be solved by adding the 
> {{--no-dereference}} parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3257: [FLINK-5705] [WebMonitor] request/response use UTF...

2017-02-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3257


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3252: [FLINK-5624] Support tumbling window on streaming tables ...

2017-02-16 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3252
  
Merging


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5705) webmonitor's request/response use UTF-8 explicitly

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870205#comment-15870205
 ] 

ASF GitHub Bot commented on FLINK-5705:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3257


> webmonitor's request/response use UTF-8 explicitly
> --
>
> Key: FLINK-5705
> URL: https://issues.apache.org/jira/browse/FLINK-5705
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: shijinkui
>Assignee: shijinkui
> Fix For: 1.3.0, 1.2.1
>
>
> QueryStringDecoder and HttpPostRequestDecoder should use the UTF-8 charset 
> defined in Flink.
> The response should set the content-encoding header to UTF-8.
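
A minimal sketch of what that change could look like, assuming the Netty 4 
HTTP codec used by the web monitor (the handler class and header value are 
illustrative):

{code:java}
import java.nio.charset.StandardCharsets;

import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpResponse;
import io.netty.handler.codec.http.QueryStringDecoder;

public class Utf8AwareHandler {

    void handle(HttpRequest request, HttpResponse response) {
        // decode with an explicit charset instead of the platform default
        QueryStringDecoder decoder =
            new QueryStringDecoder(request.getUri(), StandardCharsets.UTF_8);

        // declare the charset on the response as well
        response.headers().set("Content-Type", "text/plain; charset=UTF-8");
    }
}
{code}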



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (FLINK-5705) webmonitor's request/response use UTF-8 explicitly

2017-02-16 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen closed FLINK-5705.
---

> webmonitor's request/response use UTF-8 explicitly
> --
>
> Key: FLINK-5705
> URL: https://issues.apache.org/jira/browse/FLINK-5705
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: shijinkui
>Assignee: shijinkui
> Fix For: 1.3.0, 1.2.1
>
>
> QueryStringDecoder and HttpPostRequestDecoder should use the UTF-8 charset 
> defined in Flink.
> The response should set the content-encoding header to UTF-8.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (FLINK-5705) webmonitor's request/response use UTF-8 explicitly

2017-02-16 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen resolved FLINK-5705.
-
   Resolution: Fixed
Fix Version/s: 1.3.0

Fixed in
  - 1.2.1 via d3f2fe2625171f89404e1b90fa8c9493f5403b3a
  - 1.3.0 via f24514339c78d809a28731fa18e8df638b382e3b

> webmonitor's request/response use UTF-8 explicitly
> --
>
> Key: FLINK-5705
> URL: https://issues.apache.org/jira/browse/FLINK-5705
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: shijinkui
>Assignee: shijinkui
> Fix For: 1.3.0, 1.2.1
>
>
> QueryStringDecoder and HttpPostRequestDecoder should use the UTF-8 charset 
> defined in Flink.
> The response should set the content-encoding header to UTF-8.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5811) Harden YarnClusterDescriptorTest

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870201#comment-15870201
 ] 

ASF GitHub Bot commented on FLINK-5811:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3326


> Harden YarnClusterDescriptorTest
> 
>
> Key: FLINK-5811
> URL: https://issues.apache.org/jira/browse/FLINK-5811
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Trivial
> Fix For: 1.3.0
>
>
> In some test cases, the {{YarnClusterDescriptorTest}} expects an 
> {{Exception}} but does not fail if the exception is not thrown. Moreover, it 
> prints the stack trace of the expected exception.
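
The usual hardening pattern looks like the sketch below (the tested call and 
exception type are hypothetical): {{fail()}} turns a missing exception into a 
test failure, and the caught exception is deliberately not printed:

{code:java}
import static org.junit.Assert.fail;

import org.junit.Test;

public class HardenedExceptionTest {

    @Test
    public void testMissingSettingFails() {
        try {
            deployWithIncompleteConfig(); // hypothetical call that must throw
            fail("Expected an IllegalStateException to be thrown");
        } catch (IllegalStateException expected) {
            // expected -- intentionally not printing the stack trace
        }
    }

    private void deployWithIncompleteConfig() {
        throw new IllegalStateException("incomplete configuration");
    }
}
{code}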



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3326: [FLINK-5811] [tests] Harden YarnClusterDescriptorT...

2017-02-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3326


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5640) configure the explicit Unit Test file suffix

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870199#comment-15870199
 ] 

ASF GitHub Bot commented on FLINK-5640:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/3211
  
@shijinkui I think the unit test include pattern is a good change; I would 
merge that.

Can we exclude the changes to the compiler configuration from the initial 
change?


> configure the explicit Unit Test file suffix
> 
>
> Key: FLINK-5640
> URL: https://issues.apache.org/jira/browse/FLINK-5640
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: shijinkui
>Assignee: shijinkui
> Fix For: 1.2.1
>
>
> There are four types of test file: *ITCase.java, *Test.java, 
> *ITSuite.scala, *Suite.scala.
> A file name ending with "ITCase.java" is an integration test; a file name 
> ending with "Test.java" is a unit test.
> It is clearer for the Surefire plugin's default-test execution to declare 
> explicitly that "*Test.*" is a Java unit test.
> The test file statistics below:
> * Suite  total: 10
> * ITCase  total: 378
> * Test  total: 1008
> * ITSuite  total: 14



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (FLINK-5811) Harden YarnClusterDescriptorTest

2017-02-16 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann closed FLINK-5811.

   Resolution: Fixed
Fix Version/s: 1.3.0

Fixed via 494edb041b5e8474c84eed563e9dfa4406240bb5

> Harden YarnClusterDescriptorTest
> 
>
> Key: FLINK-5811
> URL: https://issues.apache.org/jira/browse/FLINK-5811
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Trivial
> Fix For: 1.3.0
>
>
> In some test cases, the {{YarnClusterDescriptorTest}} expects an 
> {{Exception}} but does not fail if the exception is not thrown. Moreover, it 
> prints the stack trace of the expected exception.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3211: [FLINK-5640][build]configure the explicit Unit Test file ...

2017-02-16 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/3211
  
@shijinkui I think the unit test include pattern is a good change; I would 
merge that.

Can we exclude the changes to the compiler configuration from the initial 
change?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (FLINK-5773) Cannot cast scala.util.Failure to org.apache.flink.runtime.messages.Acknowledge

2017-02-16 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann resolved FLINK-5773.
--
Resolution: Fixed

1.3.0: 413609d13fcf924fa8581450618bccc6abdbbda0
1.2.1: a2853ec1527dd848d635d9cb7720b2bd21c7e3aa

> Cannot cast scala.util.Failure to 
> org.apache.flink.runtime.messages.Acknowledge
> ---
>
> Key: FLINK-5773
> URL: https://issues.apache.org/jira/browse/FLINK-5773
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Colin Breame
>Assignee: Till Rohrmann
> Fix For: 1.3.0, 1.2.1
>
>
> The exception below happens when I set the 
> StreamExecutionEnvironment.setMaxParallelism() to anything less than 4.
> Let me know if you need more information.
> {code}
> Caused by: java.lang.ClassCastException: Cannot cast scala.util.Failure to 
> org.apache.flink.runtime.messages.Acknowledge
>   at java.lang.Class.cast(Class.java:3369)
>   at scala.concurrent.Future$$anonfun$mapTo$1.apply(Future.scala:405)
>   at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
>   at scala.util.Try$.apply(Try.scala:161)
>   at scala.util.Success.map(Try.scala:206)
>   at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
>   at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.processBatch$1(Future.scala:643)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply$mcV$sp(Future.scala:658)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply(Future.scala:635)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply(Future.scala:635)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch.run(Future.scala:634)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:685)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:266)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.submitTask(TaskManager.scala:1206)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.org$apache$flink$runtime$taskmanager$TaskManager$$handleTaskMessage(TaskManager.scala:458)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager$$anonfun$handleMessage$1.applyOrElse(TaskManager.scala:280)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.aroundReceive(TaskManager.scala:122)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>   ... 5 more
> {code}
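
The root cause is visible in the trace above: the reply to the ask was a plain 
{{scala.util.Failure}} value, which the caller's {{mapTo}} then tried to cast 
to {{Acknowledge}}. The fix named in the PR titles is to wrap the exception in 
{{akka.actor.Status.Failure}}, which the ask pattern special-cases to complete 
the future exceptionally. A minimal sketch using Akka's Java API (the actor 
body is illustrative, not Flink's code):

{code:java}
import akka.actor.Status;
import akka.actor.UntypedActor;

public class SubmitTaskActor extends UntypedActor {

    @Override
    public void onReceive(Object message) {
        try {
            // ... actually submit the task here ...
            getSender().tell("ACK", getSelf()); // stand-in for Flink's Acknowledge
        } catch (Exception e) {
            // NOT new scala.util.Failure<>(e): that would arrive as a plain
            // value; Status.Failure makes the asker's future fail instead
            getSender().tell(new Status.Failure(e), getSelf());
        }
    }
}
{code}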



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5811) Harden YarnClusterDescriptorTest

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870196#comment-15870196
 ] 

ASF GitHub Bot commented on FLINK-5811:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/3326
  
Thanks for your review @StephanEwen. Merging this PR.


> Harden YarnClusterDescriptorTest
> 
>
> Key: FLINK-5811
> URL: https://issues.apache.org/jira/browse/FLINK-5811
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Trivial
>
> In some test cases, the {{YarnClusterDescriptorTest}} expects an 
> {{Exception}} but does not fail if the exception is not thrown. Moreover, it 
> prints the stack trace of the expected exception.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3326: [FLINK-5811] [tests] Harden YarnClusterDescriptorTest

2017-02-16 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/3326
  
Thanks for your review @StephanEwen. Merging this PR.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5773) Cannot cast scala.util.Failure to org.apache.flink.runtime.messages.Acknowledge

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870194#comment-15870194
 ] 

ASF GitHub Bot commented on FLINK-5773:
---

Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/3324


> Cannot cast scala.util.Failure to 
> org.apache.flink.runtime.messages.Acknowledge
> ---
>
> Key: FLINK-5773
> URL: https://issues.apache.org/jira/browse/FLINK-5773
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Colin Breame
>Assignee: Till Rohrmann
> Fix For: 1.3.0, 1.2.1
>
>
> The exception below happens when I set the 
> StreamExecutionEnvironment.setMaxParallelism() to anything less than 4.
> Let me know if you need more information.
> {code}
> Caused by: java.lang.ClassCastException: Cannot cast scala.util.Failure to 
> org.apache.flink.runtime.messages.Acknowledge
>   at java.lang.Class.cast(Class.java:3369)
>   at scala.concurrent.Future$$anonfun$mapTo$1.apply(Future.scala:405)
>   at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
>   at scala.util.Try$.apply(Try.scala:161)
>   at scala.util.Success.map(Try.scala:206)
>   at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
>   at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.processBatch$1(Future.scala:643)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply$mcV$sp(Future.scala:658)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply(Future.scala:635)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply(Future.scala:635)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch.run(Future.scala:634)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:685)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:266)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.submitTask(TaskManager.scala:1206)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.org$apache$flink$runtime$taskmanager$TaskManager$$handleTaskMessage(TaskManager.scala:458)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager$$anonfun$handleMessage$1.applyOrElse(TaskManager.scala:280)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.aroundReceive(TaskManager.scala:122)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>   ... 5 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink pull request #3324: [backport] [FLINK-5773] Use akka.actor.Status.Fail...

2017-02-16 Thread tillrohrmann
Github user tillrohrmann closed the pull request at:

https://github.com/apache/flink/pull/3324


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #3321: [FLINK-5773] Use akka.actor.Status.Failure class t...

2017-02-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3321


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5773) Cannot cast scala.util.Failure to org.apache.flink.runtime.messages.Acknowledge

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870190#comment-15870190
 ] 

ASF GitHub Bot commented on FLINK-5773:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/3324
  
Thanks for your review @StephanEwen. Travis has passed. Merging this PR.


> Cannot cast scala.util.Failure to 
> org.apache.flink.runtime.messages.Acknowledge
> ---
>
> Key: FLINK-5773
> URL: https://issues.apache.org/jira/browse/FLINK-5773
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Colin Breame
>Assignee: Till Rohrmann
> Fix For: 1.3.0, 1.2.1
>
>
> The exception below happens when I set the 
> StreamExecutionEnvironment.setMaxParallelism() to anything less than 4.
> Let me know if you need more information.
> {code}
> Caused by: java.lang.ClassCastException: Cannot cast scala.util.Failure to 
> org.apache.flink.runtime.messages.Acknowledge
>   at java.lang.Class.cast(Class.java:3369)
>   at scala.concurrent.Future$$anonfun$mapTo$1.apply(Future.scala:405)
>   at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
>   at scala.util.Try$.apply(Try.scala:161)
>   at scala.util.Success.map(Try.scala:206)
>   at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
>   at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.processBatch$1(Future.scala:643)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply$mcV$sp(Future.scala:658)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply(Future.scala:635)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply(Future.scala:635)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch.run(Future.scala:634)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:685)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:266)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.submitTask(TaskManager.scala:1206)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.org$apache$flink$runtime$taskmanager$TaskManager$$handleTaskMessage(TaskManager.scala:458)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager$$anonfun$handleMessage$1.applyOrElse(TaskManager.scala:280)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.aroundReceive(TaskManager.scala:122)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>   ... 5 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5773) Cannot cast scala.util.Failure to org.apache.flink.runtime.messages.Acknowledge

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870193#comment-15870193
 ] 

ASF GitHub Bot commented on FLINK-5773:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/3321


> Cannot cast scala.util.Failure to 
> org.apache.flink.runtime.messages.Acknowledge
> ---
>
> Key: FLINK-5773
> URL: https://issues.apache.org/jira/browse/FLINK-5773
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Colin Breame
>Assignee: Till Rohrmann
> Fix For: 1.3.0, 1.2.1
>
>
> The exception below happens when I set the 
> StreamExecutionEnvironment.setMaxParallelism() to anything less than 4.
> Let me know if you need more information.
> {code}
> Caused by: java.lang.ClassCastException: Cannot cast scala.util.Failure to 
> org.apache.flink.runtime.messages.Acknowledge
>   at java.lang.Class.cast(Class.java:3369)
>   at scala.concurrent.Future$$anonfun$mapTo$1.apply(Future.scala:405)
>   at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
>   at scala.util.Try$.apply(Try.scala:161)
>   at scala.util.Success.map(Try.scala:206)
>   at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
>   at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.processBatch$1(Future.scala:643)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply$mcV$sp(Future.scala:658)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply(Future.scala:635)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply(Future.scala:635)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch.run(Future.scala:634)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:685)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:266)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.submitTask(TaskManager.scala:1206)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.org$apache$flink$runtime$taskmanager$TaskManager$$handleTaskMessage(TaskManager.scala:458)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager$$anonfun$handleMessage$1.applyOrElse(TaskManager.scala:280)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.aroundReceive(TaskManager.scala:122)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>   ... 5 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3324: [backport] [FLINK-5773] Use akka.actor.Status.Failure cla...

2017-02-16 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/3324
  
Thanks for your review @StephanEwen. Travis has passed. Merging this PR.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5773) Cannot cast scala.util.Failure to org.apache.flink.runtime.messages.Acknowledge

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870186#comment-15870186
 ] 

ASF GitHub Bot commented on FLINK-5773:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/3321
  
Thanks for the review @StephanEwen. Tests have passed on Travis. Merging 
this PR.


> Cannot cast scala.util.Failure to 
> org.apache.flink.runtime.messages.Acknowledge
> ---
>
> Key: FLINK-5773
> URL: https://issues.apache.org/jira/browse/FLINK-5773
> Project: Flink
>  Issue Type: Bug
>  Components: Distributed Coordination
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Colin Breame
>Assignee: Till Rohrmann
> Fix For: 1.3.0, 1.2.1
>
>
> The exception below happens when I set the 
> StreamExecutionEnvironment.setMaxParallelism() to anything less than 4.
> Let me know if you need more information.
> {code}
> Caused by: java.lang.ClassCastException: Cannot cast scala.util.Failure to 
> org.apache.flink.runtime.messages.Acknowledge
>   at java.lang.Class.cast(Class.java:3369)
>   at scala.concurrent.Future$$anonfun$mapTo$1.apply(Future.scala:405)
>   at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
>   at scala.util.Try$.apply(Try.scala:161)
>   at scala.util.Success.map(Try.scala:206)
>   at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
>   at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.processBatch$1(Future.scala:643)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply$mcV$sp(Future.scala:658)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply(Future.scala:635)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch$$anonfun$run$1.apply(Future.scala:635)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$Batch.run(Future.scala:634)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.scala$concurrent$Future$InternalCallbackExecutor$$unbatchedExecute(Future.scala:694)
>   at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:685)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:266)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.submitTask(TaskManager.scala:1206)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.org$apache$flink$runtime$taskmanager$TaskManager$$handleTaskMessage(TaskManager.scala:458)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager$$anonfun$handleMessage$1.applyOrElse(TaskManager.scala:280)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
>   at 
> org.apache.flink.runtime.taskmanager.TaskManager.aroundReceive(TaskManager.scala:122)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>   ... 5 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3321: [FLINK-5773] Use akka.actor.Status.Failure class to send ...

2017-02-16 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/3321
  
Thanks for the review @StephanEwen. Tests have passed on Travis. Merging 
this PR.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4997) Extending Window Function Metadata

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870113#comment-15870113
 ] 

ASF GitHub Bot commented on FLINK-4997:
---

Github user VenturaDelMonte commented on the issue:

https://github.com/apache/flink/pull/3285
  
Sure, I can take care of the all-window case!


> Extending Window Function Metadata
> --
>
> Key: FLINK-4997
> URL: https://issues.apache.org/jira/browse/FLINK-4997
> Project: Flink
>  Issue Type: New Feature
>  Components: DataStream API, Streaming
>Reporter: Ventura Del Monte
>Assignee: Ventura Del Monte
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-2+Extending+Window+Function+Metadata
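
A rough sketch of the FLIP-2 shape under discussion (the signatures are 
assumptions, not the merged API): compared to a plain {{WindowFunction}}, 
{{process()}} receives a {{Context}} that exposes window metadata and can grow 
further accessors without breaking user code:

{code:java}
import org.apache.flink.streaming.api.windowing.windows.Window;
import org.apache.flink.util.Collector;

public abstract class ProcessWindowFunction<IN, OUT, KEY, W extends Window>
        implements java.io.Serializable {

    // evaluates the window; metadata is reached through the context
    public abstract void process(
            KEY key, Context context, Iterable<IN> elements, Collector<OUT> out)
        throws Exception;

    public abstract class Context {
        /** The window that is being evaluated. */
        public abstract W window();
    }
}
{code}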



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3285: [FLINK-4997] [streaming] Introduce ProcessWindowFunction

2017-02-16 Thread VenturaDelMonte
Github user VenturaDelMonte commented on the issue:

https://github.com/apache/flink/pull/3285
  
Sure, I can take care of the all-window case!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3190: [FLINK-5546][build] java.io.tmpdir setted as project buil...

2017-02-16 Thread shijinkui
Github user shijinkui commented on the issue:

https://github.com/apache/flink/pull/3190
  
> Here is a related issue: https://issues.apache.org/jira/browse/FLINK-5817

It sounds good. I want to change the default system property 
`java.io.tmpdir` to point into the `target` directory. This can resolve most 
such temporary-directory conflicts.

I only want a quick fix for the problem that blocks our production unit 
tests, whether via this PR or FLINK-5817. Right now we have to disable the 
affected modules' unit tests to avoid the error.

For FLINK-5817, I think we can replace all the manual temporary-directory 
creation with `TemporaryFolder`.
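
A minimal sketch of the `TemporaryFolder` approach (the test class is 
illustrative): each test gets its own fresh directory that JUnit deletes 
afterwards, so concurrent users never collide on a shared /tmp path:

{code:java}
import java.io.File;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class IsolatedTempDirTest {

    @Rule
    public final TemporaryFolder tempFolder = new TemporaryFolder();

    @Test
    public void testUsesIsolatedTempDir() throws Exception {
        // a per-test file instead of a shared, hard-coded /tmp/cacheFile
        File cacheFile = tempFolder.newFile("cacheFile");
        // ... exercise the code under test against cacheFile ...
    }
}
{code}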


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5546) java.io.tmpdir setted as project build directory in surefire plugin

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870109#comment-15870109
 ] 

ASF GitHub Bot commented on FLINK-5546:
---

Github user shijinkui commented on the issue:

https://github.com/apache/flink/pull/3190
  
> Here is a related issue: https://issues.apache.org/jira/browse/FLINK-5817

It sounds good. I want to change the default system property 
`java.io.tmpdir` to point into the `target` directory. This can resolve most 
such temporary-directory conflicts.

I only want a quick fix for the problem that blocks our production unit 
tests, whether via this PR or FLINK-5817. Right now we have to disable the 
affected modules' unit tests to avoid the error.

For FLINK-5817, I think we can replace all the manual temporary-directory 
creation with `TemporaryFolder`.


> java.io.tmpdir setted as project build directory in surefire plugin
> ---
>
> Key: FLINK-5546
> URL: https://issues.apache.org/jira/browse/FLINK-5546
> Project: Flink
>  Issue Type: Test
>  Components: Build System
> Environment: CentOS 7.2
>Reporter: Syinchwun Leo
>Assignee: shijinkui
> Fix For: 1.2.1
>
>
> When multiple Linux users run tests at the same time, the flink-runtime module 
> may fail. User A creates /tmp/cacheFile, and User B then has no permission to 
> access the folder.
> Failed tests: 
> FileCacheDeleteValidationTest.setup:79 Error initializing the test: 
> /tmp/cacheFile (Permission denied)
> Tests in error: 
> IOManagerTest.channelEnumerator:54 » Runtime Could not create storage 
> director...
> Tests run: 1385, Failures: 1, Errors: 1, Skipped: 8



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5640) configure the explicit Unit Test file suffix

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870073#comment-15870073
 ] 

ASF GitHub Bot commented on FLINK-5640:
---

Github user shijinkui commented on the issue:

https://github.com/apache/flink/pull/3211
  
@StephanEwen Thanks for the quick review.
>`**/*Test.*`
This clearly shows what a unit test's file name looks like, just as 
`integration-tests` does. If the `include` is omitted, it means that every 
class file except `**/*ITCase.*` is treated as a unit test, not only `*Test.*`.


> configure the explicit Unit Test file suffix
> 
>
> Key: FLINK-5640
> URL: https://issues.apache.org/jira/browse/FLINK-5640
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: shijinkui
>Assignee: shijinkui
> Fix For: 1.2.1
>
>
> There are four types of test file: *ITCase.java, *Test.java, 
> *ITSuite.scala, *Suite.scala.
> A file name ending with "ITCase.java" is an integration test; a file name 
> ending with "Test.java" is a unit test.
> It is clearer for the Surefire plugin's default-test execution to declare 
> explicitly that "*Test.*" is a Java unit test.
> The test file statistics below:
> * Suite  total: 10
> * ITCase  total: 378
> * Test  total: 1008
> * ITSuite  total: 14



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3211: [FLINK-5640][test]configure the explicit Unit Test file s...

2017-02-16 Thread shijinkui
Github user shijinkui commented on the issue:

https://github.com/apache/flink/pull/3211
  
@StephanEwen Thanks for the quick review.
>`**/*Test.*`
This clearly shows what a unit test's file name looks like, just as 
`integration-tests` does. If the `include` is omitted, it means that every 
class file except `**/*ITCase.*` is treated as a unit test, not only `*Test.*`.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5819) Improve metrics reporting

2017-02-16 Thread Greg Hogan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15870011#comment-15870011
 ] 

Greg Hogan commented on FLINK-5819:
---

Some charts were added in FLINK-2730 but had to be removed due to the 
dependency on an incompatibly licensed library.

We should consider whether charts are best left to external software using 
Flink's metrics reporters.

> Improve metrics reporting
> -
>
> Key: FLINK-5819
> URL: https://issues.apache.org/jira/browse/FLINK-5819
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Affects Versions: 1.3.0
>Reporter: Paul Nelligan
>  Labels: web-ui
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> When displaying individual metrics for a vertex / node of a job in the web UI, 
> it is desirable to add an option to display metrics as a numeric value or as a 
> chart.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5588) Add a unit scaler based on different norms

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1587#comment-1587
 ] 

ASF GitHub Bot commented on FLINK-5588:
---

Github user skonto commented on the issue:

https://github.com/apache/flink/pull/3313
  
OK, sorry for that. I did squash the commits; I am also used to that from other 
projects, where the comments are invalidated.


> Add a unit scaler based on different norms
> --
>
> Key: FLINK-5588
> URL: https://issues.apache.org/jira/browse/FLINK-5588
> Project: Flink
>  Issue Type: New Feature
>  Components: Machine Learning Library
>Reporter: Stavros Kontopoulos
>Assignee: Stavros Kontopoulos
>Priority: Minor
>
> So far ML has two scalers: min-max and the standard scaler.
> A third one that is frequently used is the scaler to unit norm.
> We could implement a transformer for this type of scaling for the different 
> norms available to the user.
> I will make a separate class for the per-sample normalization procedure 
> using the Transformer API, because it is easy to add; the fit method does 
> nothing in this case.
> Scikit-learn also has some calls available outside the Transformer API; we 
> might want to add that in the future.
> These calls work on any axis, but they are not re-usable in a pipeline [4].
> Right now the existing scalers in Flink ML support per-feature normalization 
> via the Transformer API. 
> Resources
> [1] https://en.wikipedia.org/wiki/Feature_scaling
> [2] 
> http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.scale.html
> [3] https://spark.apache.org/docs/2.1.0/mllib-feature-extraction.html
> [4] http://scikit-learn.org/stable/modules/preprocessing.html
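
A framework-free sketch of the per-sample transformation (illustrative, and 
independent of the Flink ML Transformer API): divide each sample by its p-norm 
so the result has unit norm, with nothing to learn in a fit step:

{code:java}
public final class UnitNormScaler {

    /** Scales one sample to unit p-norm; the zero vector is returned unchanged. */
    public static double[] scaleToUnit(double[] sample, double p) {
        double norm = 0.0;
        for (double v : sample) {
            norm += Math.pow(Math.abs(v), p);
        }
        norm = Math.pow(norm, 1.0 / p);
        if (norm == 0.0) {
            return sample.clone(); // the zero vector cannot be normalized
        }
        double[] scaled = new double[sample.length];
        for (int i = 0; i < sample.length; i++) {
            scaled[i] = sample[i] / norm;
        }
        return scaled;
    }
}
{code}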



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3313: [FLINK-5588][ml] add a data normalizer to ml library

2017-02-16 Thread skonto
Github user skonto commented on the issue:

https://github.com/apache/flink/pull/3313
  
OK, sorry for that. I did squash the commits; I am also used to that from other 
projects, where the comments are invalidated.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5824) Fix String/byte conversions without explicit encoding

2017-02-16 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869989#comment-15869989
 ] 

Stephan Ewen commented on FLINK-5824:
-

We should add a checkstyle rule that forbids the charset-less 
{{String.getBytes()}} and {{new String(byte[])}}.
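
For reference, the safe pattern such a rule would enforce -- always name the 
charset, since the no-argument variants fall back to the JVM's platform 
default encoding:

{code:java}
import java.nio.charset.StandardCharsets;

public class Utf8Conversions {
    public static void main(String[] args) {
        byte[] bytes = "übergröße".getBytes(StandardCharsets.UTF_8); // not getBytes()
        String text = new String(bytes, StandardCharsets.UTF_8);     // not new String(bytes)
        System.out.println(text);
    }
}
{code}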

> Fix String/byte conversions without explicit encoding
> -
>
> Key: FLINK-5824
> URL: https://issues.apache.org/jira/browse/FLINK-5824
> Project: Flink
>  Issue Type: Bug
>  Components: Python API, Queryable State, State Backends, 
> Checkpointing, Webfrontend
>Reporter: Ufuk Celebi
>Priority: Blocker
>
> In a couple of places we convert Strings to bytes and bytes back to Strings 
> without explicitly specifying an encoding. This can lead to problems when 
> client and server default encodings differ.
> The task of this JIRA is to go over the whole project, look for 
> conversions where we don't specify an encoding, and fix them to specify UTF-8 
> explicitly.
> For starters, we can {{grep -R 'getBytes()' .}}, which already reveals many 
> problematic places.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5818) change checkpoint dir permission to 700 for security reason

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869993#comment-15869993
 ] 

ASF GitHub Bot commented on FLINK-5818:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3335
  
@WangTaoTheTonic HDFS supports posix ACLs 
([link](http://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#ACLs_Access_Control_Lists)).
 These are per-file/directory and inherited. Does this meet your requirements?


> change checkpoint dir permission to 700 for security reason
> ---
>
> Key: FLINK-5818
> URL: https://issues.apache.org/jira/browse/FLINK-5818
> Project: Flink
>  Issue Type: Improvement
>  Components: Security, State Backends, Checkpointing
>Reporter: Tao Wang
>
> Currently the checkpoint directory is created without a specified permission, 
> so it is easy for another user to delete or read files under it, which can 
> cause restore failures or information leaks.
> It's better to lower the permission to 700.
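
A hedged sketch of the change under discussion, using Hadoop's FileSystem API 
(which is also the dependency questioned below); {{fs}}, {{checkpointRootDir}}, 
and {{jobId}} are assumed to be in scope:

{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Create the per-job checkpoint directory with permission 700 instead of
// the filesystem default, so only the owning user can read or delete it.
Path jobCheckpointDir = new Path(checkpointRootDir, jobId.toString());
fs.mkdirs(jobCheckpointDir, new FsPermission((short) 0700));
{code}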



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3335: [FLINK-5818][Security]change checkpoint dir permission to...

2017-02-16 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3335
  
@WangTaoTheTonic HDFS supports posix ACLs 
([link](http://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#ACLs_Access_Control_Lists)).
 These are per-file/directory and inherited. Does this meet your requirements?
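
A hypothetical sketch of that per-directory ACL route via Hadoop's FileSystem 
API (the path, user name, and the {{fs}} handle are invented for illustration):

{code:java}
import java.util.Collections;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

// Grant one extra user full access to the checkpoint directory without
// widening its base permission. ACCESS entries apply to the directory
// itself; a DEFAULT-scope entry would additionally be inherited by
// newly created children.
AclEntry entry = new AclEntry.Builder()
    .setScope(AclEntryScope.ACCESS)
    .setType(AclEntryType.USER)
    .setName("flink")
    .setPermission(FsAction.ALL)
    .build();
fs.modifyAclEntries(new Path("/flink/checkpoints"), Collections.singletonList(entry));
{code}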


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3313: [FLINK-5588][ml] add a data normalizer to ml library

2017-02-16 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/3313
  
But please don't do a force push on a branch which has been opened as a PR. 
If there was a review ongoing, then the force push might invalidate it (e.g. if 
you changed something in the relevant files).


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5588) Add a unit scaler based on different norms

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869984#comment-15869984
 ] 

ASF GitHub Bot commented on FLINK-5588:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/3313
  
But please don't do a force push on a branch which has been opened as a PR. 
If there was a review ongoing, then the force push might invalidate it (e.g. if 
you changed something in the relevant files).


> Add a unit scaler based on different norms
> --
>
> Key: FLINK-5588
> URL: https://issues.apache.org/jira/browse/FLINK-5588
> Project: Flink
>  Issue Type: New Feature
>  Components: Machine Learning Library
>Reporter: Stavros Kontopoulos
>Assignee: Stavros Kontopoulos
>Priority: Minor
>
> So far ML has two scalers: min-max and the standard scaler.
> A third one that is frequently used is the scaler to unit norm.
> We could implement a transformer for this type of scaling, with different 
> norms available to the user.
> I will implement the per-sample normalization procedure as a separate class 
> using the Transformer API, because it is easy to add; the fit method does 
> nothing in this case.
> Scikit-learn also has some calls available outside the Transformer API; we 
> might want to add that in the future.
> These calls work on any axis, but they are not re-usable in a pipeline [4].
> Right now the existing scalers in Flink ML support per-feature normalization 
> via the Transformer API.
> Resources
> [1] https://en.wikipedia.org/wiki/Feature_scaling
> [2] 
> http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.scale.html
> [3] https://spark.apache.org/docs/2.1.0/mllib-feature-extraction.html
> [4] http://scikit-learn.org/stable/modules/preprocessing.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5624) Support tumbling window on streaming tables in the SQL API

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869977#comment-15869977
 ] 

ASF GitHub Bot commented on FLINK-5624:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3252
  
Thanks for the update @haohui.
PR is good to merge.


> Support tumbling window on streaming tables in the SQL API
> --
>
> Key: FLINK-5624
> URL: https://issues.apache.org/jira/browse/FLINK-5624
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>
> This is a follow-up of FLINK-4691.
> FLINK-4691 adds support for group-windows on streaming tables. This JIRA 
> proposes to expose the functionality in the SQL layer via the {{GROUP BY}} 
> clause, as described in 
> http://calcite.apache.org/docs/stream.html#tumbling-windows.
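
For illustration, a query in the tumbling-window syntax that the Calcite page 
describes (table and column names are invented; whether Flink's SQL layer ends 
up supporting exactly this form is what the PR decides):

{code:java}
// A tumbling one-hour window expressed through the GROUP BY clause.
String query =
    "SELECT STREAM TUMBLE_END(rowtime, INTERVAL '1' HOUR) AS rowtime, " +
    "       productId, COUNT(*) AS cnt " +
    "FROM Orders " +
    "GROUP BY TUMBLE(rowtime, INTERVAL '1' HOUR), productId";
{code}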



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3252: [FLINK-5624] Support tumbling window on streaming tables ...

2017-02-16 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3252
  
Thanks for the update @haohui.
PR is good to merge.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5588) Add a unit scaler based on different norms

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869974#comment-15869974
 ] 

ASF GitHub Bot commented on FLINK-5588:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3313
  
@skonto another option, if the master branch has a newer commit, is to 
rebase and force push.


> Add a unit scaler based on different norms
> --
>
> Key: FLINK-5588
> URL: https://issues.apache.org/jira/browse/FLINK-5588
> Project: Flink
>  Issue Type: New Feature
>  Components: Machine Learning Library
>Reporter: Stavros Kontopoulos
>Assignee: Stavros Kontopoulos
>Priority: Minor
>
> So far ML has two scalers: min-max and the standard scaler.
> A third one that is frequently used is the scaler to unit norm.
> We could implement a transformer for this type of scaling, with different 
> norms available to the user.
> I will implement the per-sample normalization procedure as a separate class 
> using the Transformer API, because it is easy to add; the fit method does 
> nothing in this case.
> Scikit-learn also has some calls available outside the Transformer API; we 
> might want to add that in the future.
> These calls work on any axis, but they are not re-usable in a pipeline [4].
> Right now the existing scalers in Flink ML support per-feature normalization 
> via the Transformer API.
> Resources
> [1] https://en.wikipedia.org/wiki/Feature_scaling
> [2] 
> http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.scale.html
> [3] https://spark.apache.org/docs/2.1.0/mllib-feature-extraction.html
> [4] http://scikit-learn.org/stable/modules/preprocessing.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3327: [FLINK-5616] [tests] Harden YarnIntraNonHaMasterServicesT...

2017-02-16 Thread tillrohrmann
Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/3327
  
True, what I've done here does not make much sense. Somehow, when I wrote it, 
I was under the assumption that `verify` would block until the call happened. 
Will correct it. Thanks for the review :-)
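
For context, a sketch of the pitfall and one common remedy (illustrative only; 
the mock and expected call are taken from the test failure quoted below):

{code:java}
import static org.mockito.Mockito.*;

// Plain verify() only checks interactions that have already happened,
// so it races with a call made asynchronously by another thread:
verify(leaderContender).handleError(any(Exception.class));

// verify(mock, timeout(...)) polls until the call arrives or the
// timeout expires, which is what a blocking verify would have done:
verify(leaderContender, timeout(1000L)).handleError(any(Exception.class));
{code}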


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3313: [FLINK-5588][ml] add a data normalizer to ml library

2017-02-16 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/3313
  
@skonto another option, if the master branch has a newer commit, is to 
rebase and force push.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5616) YarnPreConfiguredMasterHaServicesTest fails sometimes

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869973#comment-15869973
 ] 

ASF GitHub Bot commented on FLINK-5616:
---

Github user tillrohrmann commented on the issue:

https://github.com/apache/flink/pull/3327
  
True, what I've done here does not make much sense. Somehow, when I wrote it, 
I was under the assumption that `verify` would block until the call happened. 
Will correct it. Thanks for the review :-)


> YarnPreConfiguredMasterHaServicesTest fails sometimes
> -
>
> Key: FLINK-5616
> URL: https://issues.apache.org/jira/browse/FLINK-5616
> Project: Flink
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Aljoscha Krettek
>Assignee: Till Rohrmann
>  Labels: test-stability
>
> This is the relevant part from the log:
> {code}
> ---
>  T E S T S
> ---
> Running 
> org.apache.flink.yarn.highavailability.YarnPreConfiguredMasterHaServicesTest
> Formatting using clusterid: testClusterID
> Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.407 sec - 
> in 
> org.apache.flink.yarn.highavailability.YarnPreConfiguredMasterHaServicesTest
> Running 
> org.apache.flink.yarn.highavailability.YarnIntraNonHaMasterServicesTest
> Formatting using clusterid: testClusterID
> Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 3.479 sec <<< 
> FAILURE! - in 
> org.apache.flink.yarn.highavailability.YarnIntraNonHaMasterServicesTest
> testClosingReportsToLeader(org.apache.flink.yarn.highavailability.YarnIntraNonHaMasterServicesTest)
>   Time elapsed: 0.836 sec  <<< FAILURE!
> org.mockito.exceptions.verification.WantedButNotInvoked: 
> Wanted but not invoked:
> leaderContender.handleError();
> -> at 
> org.apache.flink.yarn.highavailability.YarnIntraNonHaMasterServicesTest.testClosingReportsToLeader(YarnIntraNonHaMasterServicesTest.java:120)
> Actually, there were zero interactions with this mock.
>   at 
> org.apache.flink.yarn.highavailability.YarnIntraNonHaMasterServicesTest.testClosingReportsToLeader(YarnIntraNonHaMasterServicesTest.java:120)
> Running org.apache.flink.yarn.YarnFlinkResourceManagerTest
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.82 sec - in 
> org.apache.flink.yarn.YarnFlinkResourceManagerTest
> Running org.apache.flink.yarn.YarnClusterDescriptorTest
> java.lang.RuntimeException: Couldn't deploy Yarn cluster
>   at 
> org.apache.flink.yarn.AbstractYarnClusterDescriptor.deploy(AbstractYarnClusterDescriptor.java:425)
>   at 
> org.apache.flink.yarn.YarnClusterDescriptorTest.testConfigOverwrite(YarnClusterDescriptorTest.java:90)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>   at 
> 

[jira] [Commented] (FLINK-5818) change checkpoint dir permission to 700 for security reason

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869944#comment-15869944
 ] 

ASF GitHub Bot commented on FLINK-5818:
---

Github user WangTaoTheTonic commented on the issue:

https://github.com/apache/flink/pull/3335
  
Hi Stephan,

You may have a little misunderstanding about this change. It only controls 
the directories named by job id (generated using UUID), not the configured 
root checkpoint directory. I agree with you that the root directory should be 
created, or its permission changed, at setup time, but setup would not be 
aware of these job-id directories, which are created at runtime.

About the Hadoop dependency, I admit I am using a convenient (let's say 
hacky) way to do the transition, as it needs a bit more code to do it 
standalone. I will change it if it's a problem :)


> change checkpoint dir permission to 700 for security reason
> ---
>
> Key: FLINK-5818
> URL: https://issues.apache.org/jira/browse/FLINK-5818
> Project: Flink
>  Issue Type: Improvement
>  Components: Security, State Backends, Checkpointing
>Reporter: Tao Wang
>
> Currently the checkpoint directory is created without a specified permission, 
> so it is easy for another user to delete or read files under it, which can 
> cause restore failures or information leaks.
> It's better to lower the permission to 700.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3335: [FLINK-5818][Security]change checkpoint dir permission to...

2017-02-16 Thread WangTaoTheTonic
Github user WangTaoTheTonic commented on the issue:

https://github.com/apache/flink/pull/3335
  
Hi Stephan,

You may have a little misunderstanding about this change. It only controls 
the directories named by job id (generated using UUID), not the configured 
root checkpoint directory. I agree with you that the root directory should be 
created, or its permission changed, at setup time, but setup would not be 
aware of these job-id directories, which are created at runtime.

About the Hadoop dependency, I admit I am using a convenient (let's say 
hacky) way to do the transition, as it needs a bit more code to do it 
standalone. I will change it if it's a problem :)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5817) Fix test concurrent execution failure by test dir conflicts.

2017-02-16 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869939#comment-15869939
 ] 

Stephan Ewen commented on FLINK-5817:
-

There is also a pull request that tries to change the tmp directory for tests: 
https://github.com/apache/flink/pull/3190


> Fix test concurrent execution failure by test dir conflicts.
> 
>
> Key: FLINK-5817
> URL: https://issues.apache.org/jira/browse/FLINK-5817
> Project: Flink
>  Issue Type: Bug
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>
> Currently, when different users build Flink on the same machine, failures 
> may happen because some test utilities create test files with fixed names, 
> which causes file-access failures when different users process the same file 
> at the same time.
> We have seen such errors in AbstractTestBase, IOManagerTest, and 
> FileCacheTest.
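
A sketch of the usual remedy (assumed, not necessarily what the fix will look 
like): derive test paths from a unique, per-run directory instead of a fixed 
name, e.g. with JUnit's TemporaryFolder rule:

{code:java}
import java.io.File;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class FileCacheTestSketch {

    // Creates a fresh directory before each test and deletes it afterwards,
    // so concurrent builds by different users cannot collide on one path.
    @Rule
    public final TemporaryFolder tempFolder = new TemporaryFolder();

    @Test
    public void usesIsolatedTestFile() throws Exception {
        File cacheFile = tempFolder.newFile("cacheFile"); // unique parent dir per run
        // ... exercise the code under test against cacheFile ...
    }
}
{code}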



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5546) java.io.tmpdir set as project build directory in surefire plugin

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869936#comment-15869936
 ] 

ASF GitHub Bot commented on FLINK-5546:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/3190
  
Here is a related issue: https://issues.apache.org/jira/browse/FLINK-5817


> java.io.tmpdir set as project build directory in surefire plugin
> ---
>
> Key: FLINK-5546
> URL: https://issues.apache.org/jira/browse/FLINK-5546
> Project: Flink
>  Issue Type: Test
>  Components: Build System
> Environment: CentOS 7.2
>Reporter: Syinchwun Leo
>Assignee: shijinkui
> Fix For: 1.2.1
>
>
> When multiple Linux users run tests at the same time, the flink-runtime 
> module may fail: user A creates /tmp/cacheFile, and user B then has no 
> permission to access that file.
> Failed tests: 
> FileCacheDeleteValidationTest.setup:79 Error initializing the test: 
> /tmp/cacheFile (Permission denied)
> Tests in error: 
> IOManagerTest.channelEnumerator:54 » Runtime Could not create storage 
> director...
> Tests run: 1385, Failures: 1, Errors: 1, Skipped: 8



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3190: [FLINK-5546][build] java.io.tmpdir set as project buil...

2017-02-16 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/3190
  
Here is a related issue: https://issues.apache.org/jira/browse/FLINK-5817


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (FLINK-5818) change checkpoint dir permission to 700 for security reason

2017-02-16 Thread Tao Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Wang updated FLINK-5818:


Hi Stephan,


You may have a little misunderstanding about this change. It only controls 
the directories named by job id (generated using UUID), not the configured 
root checkpoint directory. I agree with you that the root directory should be 
created, or its permission changed, at setup time, but setup would not be 
aware of these job-id directories, which are created at runtime.


About the Hadoop dependency, I admit I am using a convenient (let's say hacky) 
way to do the transition, as it needs a bit more code to do it standalone. I 
will change it if it's a problem :)


-
Regards.
On 02/16/2017 21:17, ASF GitHub Bot (JIRA) wrote:

   [ 
https://issues.apache.org/jira/browse/FLINK-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869904#comment-15869904
 ]

ASF GitHub Bot commented on FLINK-5818:
---

Github user StephanEwen commented on the issue:

   https://github.com/apache/flink/pull/3335
 
   Thank you for the contribution. I see the idea behind the fix.
   
   I am unsure whether we should let Flink manage the permissions of these 
directories. My gut feeling is that this is something the operational setup 
should be taking care of. For example, some setups may want to keep the 
pre-defined permission, to make checkpoints accessible to other groups for 
the purpose of cloning/migrating jobs to other clusters.
   
   That is probably worth a mailing list discussion.
   I would put this on hold until we have a decision there.
   
   If we want this change, we still need to do it a bit differently, as we are 
trying to make Flink work without any dependency on Hadoop (Hadoop is still 
fully supported, but as an optional dependency).
   Adding new hard Hadoop dependencies (like here) is not possible because of 
that.





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


> change checkpoint dir permission to 700 for security reason
> ---
>
> Key: FLINK-5818
> URL: https://issues.apache.org/jira/browse/FLINK-5818
> Project: Flink
>  Issue Type: Improvement
>  Components: Security, State Backends, Checkpointing
>Reporter: Tao Wang
>
> Currently the checkpoint directory is created without a specified permission, 
> so it is easy for another user to delete or read files under it, which can 
> cause restore failures or information leaks.
> It's better to lower the permission to 700.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-5824) Fix String/byte conversions without explicit encoding

2017-02-16 Thread Stephan Ewen (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephan Ewen updated FLINK-5824:

Priority: Blocker  (was: Major)

> Fix String/byte conversions without explicit encoding
> -
>
> Key: FLINK-5824
> URL: https://issues.apache.org/jira/browse/FLINK-5824
> Project: Flink
>  Issue Type: Bug
>  Components: Python API, Queryable State, State Backends, 
> Checkpointing, Webfrontend
>Reporter: Ufuk Celebi
>Priority: Blocker
>
> In a couple of places we convert Strings to bytes and bytes back to Strings 
> without explicitly specifying an encoding. This can lead to problems when 
> client and server default encodings differ.
> The task of this JIRA is to go over the whole project, look for 
> conversions where we don't specify an encoding, and fix them to specify 
> UTF-8 explicitly.
> For starters, we can {{grep -R 'getBytes()' .}}, which already reveals many 
> problematic places.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3211: [FLINK-5640][test] configure the explicit Unit Test file su...

2017-02-16 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/3211
  
Can you briefly explain what this fixes? Currently, it behaves exactly like 
you describe: The "test" phase executes the `*Test.java` and `*Test.scala` 
classes, and the "verify" phase executes the `*ITCase.java` and `*ITCase.scala` 
classes.

Also, the changes to the compiler plugin - what is the purpose and effect 
of setting it to "verbose"?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5640) configure the explicit Unit Test file suffix

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869928#comment-15869928
 ] 

ASF GitHub Bot commented on FLINK-5640:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/3211
  
Can you briefly explain what this fixes? Currently, it behaves exactly like 
you describe: The "test" phase executes the `*Test.java` and `*Test.scala` 
classes, and the "verify" phase executes the `*ITCase.java` and `*ITCase.scala` 
classes.

Also, the changes to the compiler plugin - what is the purpose and effect 
of setting it to "verbose"?


> configure the explicit Unit Test file suffix
> 
>
> Key: FLINK-5640
> URL: https://issues.apache.org/jira/browse/FLINK-5640
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: shijinkui
>Assignee: shijinkui
> Fix For: 1.2.1
>
>
> There are four types of test file: *ITCase.java, *Test.java, *ITSuite.scala, 
> and *Suite.scala.
> A file name ending in "ITCase.java" denotes an integration test; a file name 
> ending in "Test.java" denotes a unit test.
> It is clearer for the Surefire plugin's default-test execution to declare 
> explicitly that "*Test.*" files are the Java unit tests.
> The test file statistics below:
> * Suite  total: 10
> * ITCase  total: 378
> * Test  total: 1008
> * ITSuite  total: 14



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-5824) Fix String/byte conversions without explicit encoding

2017-02-16 Thread Ufuk Celebi (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ufuk Celebi updated FLINK-5824:
---
Summary: Fix String/byte conversions without explicit encoding  (was: Fix 
String/byte conversion without explicit encodings)

> Fix String/byte conversions without explicit encoding
> -
>
> Key: FLINK-5824
> URL: https://issues.apache.org/jira/browse/FLINK-5824
> Project: Flink
>  Issue Type: Bug
>  Components: Python API, Queryable State, State Backends, 
> Checkpointing, Webfrontend
>Reporter: Ufuk Celebi
>
> In a couple of places we convert Strings to bytes and bytes back to Strings 
> without explicitly specifying an encoding. This can lead to problems when 
> client and server default encodings differ.
> The task of this JIRA is to go over the whole project, look for 
> conversions where we don't specify an encoding, and fix them to specify 
> UTF-8 explicitly.
> For starters, we can {{grep -R 'getBytes()' .}}, which already reveals many 
> problematic places.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5705) webmonitor's request/response use UTF-8 explicitly

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869923#comment-15869923
 ] 

ASF GitHub Bot commented on FLINK-5705:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/3257
  
The fix looks good, thanks!
Merging this...


> webmonitor's request/response use UTF-8 explicitly
> --
>
> Key: FLINK-5705
> URL: https://issues.apache.org/jira/browse/FLINK-5705
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: shijinkui
>Assignee: shijinkui
> Fix For: 1.2.1
>
>
> Make QueryStringDecoder and HttpPostRequestDecoder use the UTF-8 charset 
> defined in Flink.
> Set the response content-encoding header to UTF-8.
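
A hedged sketch of the decoder and response side of this change (Netty API; 
{{uri}} and {{response}} are assumed to be in scope, and HttpPostRequestDecoder 
would be constructed analogously with an explicit charset):

{code:java}
import io.netty.handler.codec.http.QueryStringDecoder;
import io.netty.util.CharsetUtil;

// Decode query parameters with an explicit charset instead of the default.
QueryStringDecoder queryDecoder = new QueryStringDecoder(uri, CharsetUtil.UTF_8);

// Advertise the charset on the response as well.
response.headers().set("Content-Type", "application/json; charset=UTF-8");
{code}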



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3257: [FLINK-5705] [WebMonitor] request/response use UTF-8 expl...

2017-02-16 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/3257
  
The fix looks good, thanks!
Merging this...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (FLINK-5824) Fix String/byte conversion without explicit encodings

2017-02-16 Thread Ufuk Celebi (JIRA)
Ufuk Celebi created FLINK-5824:
--

 Summary: Fix String/byte conversion without explicit encodings
 Key: FLINK-5824
 URL: https://issues.apache.org/jira/browse/FLINK-5824
 Project: Flink
  Issue Type: Bug
  Components: Python API, Queryable State, State Backends, 
Checkpointing, Webfrontend
Reporter: Ufuk Celebi


In a couple of places we convert Strings to bytes and bytes back to Strings 
without explicitly specifying an encoding. This can lead to problems when 
client and server default encodings differ.

The task of this JIRA is to go over the whole project, look for conversions 
where we don't specify an encoding, and fix them to specify UTF-8 explicitly.

For starters, we can {{grep -R 'getBytes()' .}}, which already reveals many 
problematic places.





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-4813) Having flink-test-utils as a dependency outside Flink fails the build

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869907#comment-15869907
 ] 

ASF GitHub Bot commented on FLINK-4813:
---

Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/3322
  
Very good, I had stumbled over that problem as well.

Could you also remove the bundle plugin from the root pom (this should 
speed up dependency resolutions in IDEs by a lot) and instead add the above 
described dependency to the test projects that use the `SecureTestEnvironment`?


> Having flink-test-utils as a dependency outside Flink fails the build
> -
>
> Key: FLINK-4813
> URL: https://issues.apache.org/jira/browse/FLINK-4813
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.2.0
>Reporter: Robert Metzger
>Assignee: Nico Kruber
>
> The {{flink-test-utils}} module depends on {{hadoop-minikdc}}, which has a 
> transitive dependency that is only resolvable if the {{maven-bundle-plugin}} 
> is loaded.
> This is the error message
> {code}
> [ERROR] Failed to execute goal on project quickstart-1.2-tests: Could not 
> resolve dependencies for project 
> com.dataartisans:quickstart-1.2-tests:jar:1.0-SNAPSHOT: Failure to find 
> org.apache.directory.jdbm:apacheds-jdbm1:bundle:2.0.0-M2 in 
> https://repo.maven.apache.org/maven2 was cached in the local repository, 
> resolution will not be reattempted until the update interval of central has 
> elapsed or updates are forced -> [Help 1]
> {code}
> {{flink-parent}} loads that plugin, so all "internal" dependents of the test 
> utils can resolve the dependency.
> Right now, users have to use the maven-bundle-plugin to use our test utils 
> externally.
> By making the hadoop-minikdc dependency optional, we can probably resolve the 
> issue. Then only users who want the security-related tools in the test utils 
> need to manually add the hadoop-minikdc dependency plus the plugin.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] flink issue #3322: [FLINK-4813][flink-test-utils] make the hadoop-minikdc de...

2017-02-16 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/3322
  
Very good, I had stumbled over that problem as well.

Could you also remove the bundle plugin from the root pom (this should 
speed up dependency resolutions in IDEs by a lot) and instead add the above 
described dependency to the test projects that use the `SecureTestEnvironment`?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3335: [FLINK-5818][Security]change checkpoint dir permission to...

2017-02-16 Thread StephanEwen
Github user StephanEwen commented on the issue:

https://github.com/apache/flink/pull/3335
  
Thank you for the contribution. I see the idea behind the fix.

I am unsure whether we should let Flink manage the permissions of these 
directories. My gut feeling is that this is something that the operational 
setup should be taking care of. For example some setup may want to keep 
whatever is the pre-defined permission, to make checkpoints accessible by other 
groups for the purpose of cloning/migrating jobs to other clusters.

That is something probably worth of a mailing list discussion.
I would put this on hold until we have a decision there.

If we want this change, we still need to do it a bit different, as we are 
trying to make Flink work without any dependencies to Hadoop (Hadoop is still 
supported perfectly, but is an optional dependencies).
Adding new hard Hadoop dependencies (like here) is not possible due to that.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5710) Add ProcTime() function to indicate StreamSQL

2017-02-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15869902#comment-15869902
 ] 

ASF GitHub Bot commented on FLINK-5710:
---

Github user huawei-flink commented on the issue:

https://github.com/apache/flink/pull/3302
  
@fhueske I've addressed most of the points; however, one thing is not clear 
to me yet. So far, the procTime() function generates a timestamp. My 
understanding is that this is not correct and it should be something else. 
Could it be a default timestamp (e.g. the epoch)? The actual timestamp 
normalized to the second? What is the best option in your opinion?


> Add ProcTime() function to indicate StreamSQL
> -
>
> Key: FLINK-5710
> URL: https://issues.apache.org/jira/browse/FLINK-5710
> Project: Flink
>  Issue Type: New Feature
>  Components: DataStream API
>Reporter: Stefano Bortoli
>Assignee: Stefano Bortoli
>Priority: Minor
>
> procTime() is a parameterless scalar function that just indicates 
> processing-time mode.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

