[jira] [Created] (FLINK-6387) Flink UI support access log

2017-04-26 Thread shijinkui (JIRA)
shijinkui created FLINK-6387:


 Summary: Flink UI support access log
 Key: FLINK-6387
 URL: https://issues.apache.org/jira/browse/FLINK-6387
 Project: Flink
  Issue Type: Improvement
  Components: Webfrontend
Reporter: shijinkui
Assignee: shijinkui


Record user requests in an access log, appending each user access to the log file.
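A minimal sketch of what such an access-log line could look like (the class and method names are hypothetical, not Flink's actual API; the format follows the common access-log style):

```java
// Hypothetical sketch: format one access-log line per web UI request,
// in Common Log Format style (client, timestamp, request line, status, size).
public class AccessLogFormatter {

    /** Builds a single log line from the request's fields. */
    static String format(String clientIp, String time, String method,
                         String uri, int status, long bytes) {
        return String.format("%s - - [%s] \"%s %s HTTP/1.1\" %d %d",
                clientIp, time, method, uri, status, bytes);
    }

    public static void main(String[] args) {
        // Example: one GET against a job overview endpoint
        System.out.println(format("10.0.0.1", "26/Apr/2017:10:00:00 +0000",
                "GET", "/joboverview", 200, 512L));
    }
}
```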



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (FLINK-6367) support custom header settings of allow origin

2017-04-25 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui reassigned FLINK-6367:


Assignee: shijinkui

> support custom header settings of allow origin
> --
>
> Key: FLINK-6367
> URL: https://issues.apache.org/jira/browse/FLINK-6367
> Project: Flink
>  Issue Type: Sub-task
>  Components: Webfrontend
>Reporter: shijinkui
>Assignee: shijinkui
>
> `jobmanager.web.access-control-allow-origin`: Enable a custom value for the 
> Access-Control-Allow-Origin header; the default is `*`.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6367) support custom header settings of allow origin

2017-04-25 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982764#comment-15982764
 ] 

shijinkui commented on FLINK-6367:
--

[~greghogan] We need to configure specific allowed origins. For example, Flink 
could set the YARN URL as the allowed origin so that all other origins are forbidden.

> support custom header settings of allow origin
> --
>
> Key: FLINK-6367
> URL: https://issues.apache.org/jira/browse/FLINK-6367
> Project: Flink
>  Issue Type: Sub-task
>  Components: Webfrontend
>Reporter: shijinkui
>
> `jobmanager.web.access-control-allow-origin`: Enable a custom value for the 
> Access-Control-Allow-Origin header; the default is `*`.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6367) support custom header settings of allow origin

2017-04-24 Thread shijinkui (JIRA)
shijinkui created FLINK-6367:


 Summary: support custom header settings of allow origin
 Key: FLINK-6367
 URL: https://issues.apache.org/jira/browse/FLINK-6367
 Project: Flink
  Issue Type: Sub-task
  Components: Webfrontend
Reporter: shijinkui


`jobmanager.web.access-control-allow-origin`: Enable a custom value for the 
Access-Control-Allow-Origin header; the default is `*`.
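A minimal sketch of how such a setting could be resolved (the class and method are hypothetical; only the config key and its default come from this issue):

```java
// Hypothetical sketch: look up the configured allowed origin,
// falling back to "*" when the key is absent (the proposed default).
import java.util.Properties;

public class AllowOriginConfig {

    static final String KEY = "jobmanager.web.access-control-allow-origin";

    /** Returns the configured origin for the Access-Control-Allow-Origin header. */
    static String allowedOrigin(Properties conf) {
        return conf.getProperty(KEY, "*");
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(allowedOrigin(conf));           // default: *
        conf.setProperty(KEY, "http://yarn.example:8088"); // restrict to the YARN UI
        System.out.println(allowedOrigin(conf));
    }
}
```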



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (FLINK-6193) Flink dist directory normalize

2017-04-21 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui closed FLINK-6193.

Resolution: Fixed

> Flink dist directory normalize
> --
>
> Key: FLINK-6193
> URL: https://issues.apache.org/jira/browse/FLINK-6193
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Reporter: shijinkui
>
> The Flink distribution's directories have no clear responsibility for which 
> types of files belong where. For example, the "opt" directory mixes library 
> jars with example jars.
> The mailing-list discussion is here: 
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Flink-dist-directory-management-td16784.html
> After discussion, we settled on the distribution directory layout below:
> - "examples" contains only example jars
> - "opt" contains only optional library jars used at runtime
> - "lib" contains only library jars that must be loaded at runtime
> - "resources" contains only resource files used at runtime, such as web files



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5902) Some images can not show in IE

2017-04-05 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15958261#comment-15958261
 ] 

shijinkui commented on FLINK-5902:
--

Hi [~ajithshetty], do we have a solution yet?

> Some images can not show in IE
> --
>
> Key: FLINK-5902
> URL: https://issues.apache.org/jira/browse/FLINK-5902
> Project: Flink
>  Issue Type: Sub-task
>  Components: Webfrontend
> Environment: IE
>Reporter: Tao Wang
> Attachments: chrome is ok.png, IE 11 with problem.png
>
>
> Some images on the Overview page do not show in IE, although they render fine 
> in Chrome. I'm using IE 11, but I think the same happens with IE 9. I'll paste 
> the screenshots later.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-6233) Support rowtime inner equi-join between two streams in the SQL API

2017-03-31 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15950672#comment-15950672
 ] 

shijinkui commented on FLINK-6233:
--

Is this a sub-issue of FLINK-4557?

> Support rowtime inner equi-join between two streams in the SQL API
> --
>
> Key: FLINK-6233
> URL: https://issues.apache.org/jira/browse/FLINK-6233
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: hongyuhong
>Assignee: hongyuhong
>
> The goal of this issue is to add support for inner equi-joins on row-time 
> streams to the SQL interface.
> Queries similar to the following should be supported:
> {code}
> SELECT o.rowtime , o.productId, o.orderId, s.rowtime AS shipTime 
> FROM Orders AS o 
> JOIN Shipments AS s 
> ON o.orderId = s.orderId 
> AND o.rowtime BETWEEN s.rowtime AND s.rowtime + INTERVAL '1' HOUR;
> {code}
> The following restrictions should initially apply:
> * The join hint only supports inner joins
> * The ON clause should include an equi-join condition
> * The time condition {{o.rowtime BETWEEN s.rowtime AND s.rowtime + INTERVAL 
> '1' HOUR}} may only use rowtime, which is a system attribute. It must be a 
> bounded time range such as {{o.rowtime BETWEEN s.rowtime - INTERVAL '1' HOUR 
> AND s.rowtime + INTERVAL '1' HOUR}}; unbounded conditions like 
> {{o.rowtime > s.rowtime}} are not supported. It must also reference both 
> streams' rowtime attributes, so {{o.rowtime between rowtime() and 
> rowtime() + 1}} should not be supported either.
> A row-time stream join will not be able to handle late data, because inserting 
> a row into a sorted order would shift all other computations. This would be 
> too expensive to maintain. Therefore, we will throw an error if a user tries 
> to use a row-time stream join with late data handling.
> This issue includes:
> * Design of the DataStream operator to deal with stream join
> * Translation from Calcite's RelNode representation (LogicalJoin). 
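The bounded time condition in the example query can be sketched as a plain predicate (class and method names are illustrative, not the proposed operator):

```java
// Illustrative sketch of the time bound from the example query:
// o.rowtime BETWEEN s.rowtime AND s.rowtime + INTERVAL '1' HOUR
public class TimeBoundPredicate {

    static final long ONE_HOUR_MS = 60L * 60L * 1000L;

    /** True when the order's rowtime falls within [shipTime, shipTime + 1 hour]. */
    static boolean joinable(long orderRowtime, long shipRowtime) {
        return orderRowtime >= shipRowtime
            && orderRowtime <= shipRowtime + ONE_HOUR_MS;
    }

    public static void main(String[] args) {
        System.out.println(joinable(1_000L, 1_000L));                    // equal rowtimes: joinable
        System.out.println(joinable(1_000L + ONE_HOUR_MS + 1, 1_000L));  // past the bound: not joinable
    }
}
```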



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5860) Replace all the file creating from java.io.tmpdir with TemporaryFolder

2017-03-29 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15948410#comment-15948410
 ] 

shijinkui commented on FLINK-5860:
--

[~yaroslav.mykhaylov] Thanks for your work. Looking forward to your update.

> Replace all the file creating from java.io.tmpdir with TemporaryFolder
> --
>
> Key: FLINK-5860
> URL: https://issues.apache.org/jira/browse/FLINK-5860
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: shijinkui
>Assignee: Yaroslav Mykhaylov
>  Labels: starter
>
> Search for `System.getProperty("java.io.tmpdir")` across the whole Flink 
> project to get the list of affected unit tests. Replace all file creation 
> under `java.io.tmpdir` with JUnit's TemporaryFolder.
> Who can fix this problem thoroughly?
> ```
> $ grep -ri 'System.getProperty("java.io.tmpdir")' .
> ./flink-connectors/flink-connector-cassandra/src/test/java/org/apache/flink/streaming/connectors/cassandra/example/CassandraTupleWriteAheadSinkExample.java:
>   env.setStateBackend(new FsStateBackend("file:///" + 
> System.getProperty("java.io.tmpdir") + "/flink/backend"));
> ./flink-connectors/flink-connector-kafka-0.10/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaTestEnvironmentImpl.java:
> File tempDir = new File(System.getProperty("java.io.tmpdir"));
> ./flink-connectors/flink-connector-kafka-0.8/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaTestEnvironmentImpl.java:
>  File tempDir = new File(System.getProperty("java.io.tmpdir"));
> ./flink-connectors/flink-connector-kafka-0.9/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaTestEnvironmentImpl.java:
>  File tempDir = new File(System.getProperty("java.io.tmpdir"));
> ./flink-contrib/flink-statebackend-rocksdb/src/test/java/org/apache/flink/contrib/streaming/state/RocksDBStateBackendConfigTest.java:
>  return getMockEnvironment(new File[] { new 
> File(System.getProperty("java.io.tmpdir")) });
> ./flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java:
>public static final String DEFAULT_TASK_MANAGER_TMP_PATH = 
> System.getProperty("java.io.tmpdir");
> ./flink-core/src/test/java/org/apache/flink/api/common/io/EnumerateNestedFilesTest.java:
>   final String tempPath = System.getProperty("java.io.tmpdir");
> ./flink-core/src/test/java/org/apache/flink/testutils/TestConfigUtils.java:   
> final File tempDir = new File(System.getProperty("java.io.tmpdir"));
> ./flink-core/src/test/java/org/apache/flink/testutils/TestFileUtils.java: 
> File tempDir = new File(System.getProperty("java.io.tmpdir"));
> ./flink-core/src/test/java/org/apache/flink/testutils/TestFileUtils.java: 
> File tempDir = new File(System.getProperty("java.io.tmpdir"));
> ./flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/clustering/util/KMeansDataGenerator.java:
>   final String outDir = params.get("output", 
> System.getProperty("java.io.tmpdir"));
> ./flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/ml/util/LinearRegressionDataGenerator.java:
> final String tmpDir = System.getProperty("java.io.tmpdir");
> ./flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/relational/util/WebLogDataGenerator.java:
>   final String outPath = System.getProperty("java.io.tmpdir");
> ./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
>   File out = new File(System.getProperty("java.io.tmpdir"), 
> "jarcreatortest.jar");
> ./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
>   File out = new File(System.getProperty("java.io.tmpdir"), 
> "jarcreatortest.jar");
> ./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
>   File out = new File(System.getProperty("java.io.tmpdir"), 
> "jarcreatortest.jar");
> ./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
>   File out = new File(System.getProperty("java.io.tmpdir"), 
> "jarcreatortest.jar");
> ./flink-libraries/flink-python/src/main/java/org/apache/flink/python/api/PythonPlanBinder.java:
>public static final String FLINK_PYTHON_FILE_PATH = 
> System.getProperty("java.io.tmpdir") + File.separator + "flink_plan";
> ./flink-libraries/flink-python/src/main/java/org/apache/flink/python/api/PythonPlanBinder.java:
>public static final String FLINK_TMP_DATA_DIR = 
> System.getProperty("java.io.tmpdir") + File.separator + "flink_data";
> ./flink-libraries/flink-python/src/main/java/org/apache/flink/python/api/PythonPlanBinder.java:
>
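The replacement the issue asks for can be sketched with plain JDK calls; JUnit's TemporaryFolder rule automates exactly this create-use-delete lifecycle (this sketch is illustrative, not Flink test code):

```java
// Sketch of the lifecycle JUnit's TemporaryFolder rule manages: create an
// isolated temp directory per test, put files inside it, and delete it all
// afterwards, instead of writing directly into java.io.tmpdir.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;

public class TempFolderSketch {

    /** Creates an isolated temp dir, writes a file, cleans up; true if the root is gone. */
    static boolean createUseDelete() {
        try {
            Path folder = Files.createTempDirectory("flink-test"); // like TemporaryFolder.create()
            Files.write(folder.resolve("backend"), "state".getBytes());
            // like TemporaryFolder.delete() in its after() hook: children first
            Files.walk(folder).sorted(Comparator.reverseOrder())
                 .forEach(p -> p.toFile().delete());
            return !Files.exists(folder);
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(createUseDelete()); // true: directory cleaned up
    }
}
```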

[jira] [Commented] (FLINK-6204) Improve Event-Time OVER ROWS BETWEEN UNBOUNDED PRECEDING aggregation to SQL

2017-03-28 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15946477#comment-15946477
 ] 

shijinkui commented on FLINK-6204:
--

-1.
Hi guys, I would like to know the difference between this PR and 
https://github.com/apache/flink/pull/3386
That PR already has 138 comments, yet the work is now being rewritten here. Why 
not propose this solution on PR 3386 instead? Must we spend time twice on the 
same problem?

> Improve Event-Time OVER ROWS BETWEEN UNBOUNDED PRECEDING aggregation to SQL
> ---
>
> Key: FLINK-6204
> URL: https://issues.apache.org/jira/browse/FLINK-6204
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> Currently the `event-time OVER ROWS BETWEEN UNBOUNDED PRECEDING` SQL 
> aggregation is implemented by `UnboundedEventTimeOverProcessFunction`, which 
> uses the unbounded in-memory structure `sortedTimestamps: 
> util.LinkedList[Long]` to cache and sort data timestamps. IMO this is not a 
> good approach, because in our production scenario there are millions of 
> window records per millisecond. So I want to remove `util.LinkedList[Long]`. 
> I welcome any feedback.
> What do you think? [~fhueske] and [~Yuhong_kyo]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (FLINK-6201) move python example files from resources to the examples

2017-03-27 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui reassigned FLINK-6201:


Assignee: shijinkui

> move python example files from resources to the examples
> 
>
> Key: FLINK-6201
> URL: https://issues.apache.org/jira/browse/FLINK-6201
> Project: Flink
>  Issue Type: Sub-task
>  Components: Examples
>Reporter: shijinkui
>Assignee: shijinkui
>Priority: Trivial
>
> The Python examples do not belong in the resources dir. Move them to the 
> examples/python dir.
> ```
> <fileSet>
>   <directory>../flink-libraries/flink-python/src/main/python/org/apache/flink/python/api</directory>
>   <outputDirectory>resources/python</outputDirectory>
>   <fileMode>0755</fileMode>
> </fileSet>
> ```



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6201) move python example files from resources to the examples

2017-03-27 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-6201:
-
Priority: Trivial  (was: Major)

> move python example files from resources to the examples
> 
>
> Key: FLINK-6201
> URL: https://issues.apache.org/jira/browse/FLINK-6201
> Project: Flink
>  Issue Type: Sub-task
>  Components: Examples
>Reporter: shijinkui
>Priority: Trivial
>
> The Python examples do not belong in the resources dir. Move them to the 
> examples/python dir.
> ```
> <fileSet>
>   <directory>../flink-libraries/flink-python/src/main/python/org/apache/flink/python/api</directory>
>   <outputDirectory>resources/python</outputDirectory>
>   <fileMode>0755</fileMode>
> </fileSet>
> ```



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6201) move python example files from resources to the examples

2017-03-27 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-6201:
-
Summary: move python example files from resources to the examples  (was: 
move python example files to the examples dir)

> move python example files from resources to the examples
> 
>
> Key: FLINK-6201
> URL: https://issues.apache.org/jira/browse/FLINK-6201
> Project: Flink
>  Issue Type: Sub-task
>  Components: Examples
>Reporter: shijinkui
>
> The Python examples do not belong in the resources dir. Move them to the 
> examples/python dir.
> ```
> <fileSet>
>   <directory>../flink-libraries/flink-python/src/main/python/org/apache/flink/python/api</directory>
>   <outputDirectory>resources/python</outputDirectory>
>   <fileMode>0755</fileMode>
> </fileSet>
> ```



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6201) move python example files to the examples dir

2017-03-27 Thread shijinkui (JIRA)
shijinkui created FLINK-6201:


 Summary: move python example files to the examples dir
 Key: FLINK-6201
 URL: https://issues.apache.org/jira/browse/FLINK-6201
 Project: Flink
  Issue Type: Sub-task
  Components: Examples
Reporter: shijinkui


The Python examples do not belong in the resources dir. Move them to the 
examples/python dir.
```
<fileSet>
  <directory>../flink-libraries/flink-python/src/main/python/org/apache/flink/python/api</directory>
  <outputDirectory>resources/python</outputDirectory>
  <fileMode>0755</fileMode>
</fileSet>
```



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-4319) Rework Cluster Management (FLIP-6)

2017-03-27 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15943198#comment-15943198
 ] 

shijinkui commented on FLINK-4319:
--

Is there a schedule for Flink on Kubernetes?

> Rework Cluster Management (FLIP-6)
> --
>
> Key: FLINK-4319
> URL: https://issues.apache.org/jira/browse/FLINK-4319
> Project: Flink
>  Issue Type: Improvement
>  Components: Cluster Management
>Affects Versions: 1.1.0
>Reporter: Stephan Ewen
>
> This is the root issue to track progress of the rework of cluster management 
> (FLIP-6) 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (FLINK-5217) Deprecated interface Checkpointed make clear suggestion

2017-03-27 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui closed FLINK-5217.

Resolution: Fixed

> Deprecated interface Checkpointed make clear suggestion
> ---
>
> Key: FLINK-5217
> URL: https://issues.apache.org/jira/browse/FLINK-5217
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Reporter: shijinkui
> Fix For: 1.2.1
>
>
> package org.apache.flink.streaming.api.checkpoint;
> @Deprecated
> @PublicEvolving
> public interface Checkpointed<T extends Serializable> extends 
> CheckpointedRestoring<T>
> This deprecated interface should clearly state in which version it will be 
> removed, and which interface should be used instead.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (FLINK-5754) released tag missing .gitignore .travis.yml .gitattributes

2017-03-27 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui closed FLINK-5754.

Resolution: Won't Fix

> released tag missing .gitignore .travis.yml .gitattributes
> 
>
> Key: FLINK-5754
> URL: https://issues.apache.org/jira/browse/FLINK-5754
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: shijinkui
>
> The released tag is missing .gitignore, .travis.yml, and .gitattributes.
> When making a release, only the version number should be changed.
> For example: https://github.com/apache/spark/tree/v2.1.0



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5754) released tag missing .gitignore .travis.yml .gitattributes

2017-03-27 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15942804#comment-15942804
 ] 

shijinkui commented on FLINK-5754:
--

[~greghogan] It's OK

> released tag missing .gitignore .travis.yml .gitattributes
> 
>
> Key: FLINK-5754
> URL: https://issues.apache.org/jira/browse/FLINK-5754
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: shijinkui
>
> The released tag is missing .gitignore, .travis.yml, and .gitattributes.
> When making a release, only the version number should be changed.
> For example: https://github.com/apache/spark/tree/v2.1.0



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-4562) table examples make an divided module in flink-examples

2017-03-27 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-4562:
-
Issue Type: Sub-task  (was: Improvement)
Parent: FLINK-6193

> table examples make an divided module in flink-examples
> ---
>
> Key: FLINK-4562
> URL: https://issues.apache.org/jira/browse/FLINK-4562
> Project: Flink
>  Issue Type: Sub-task
>  Components: Examples, Table API & SQL
>Reporter: shijinkui
>Assignee: shijinkui
> Fix For: 1.2.1
>
>
> Example code shouldn't be packaged in the table module.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-6193) Flink dist directory normalize

2017-03-27 Thread shijinkui (JIRA)
shijinkui created FLINK-6193:


 Summary: Flink dist directory normalize
 Key: FLINK-6193
 URL: https://issues.apache.org/jira/browse/FLINK-6193
 Project: Flink
  Issue Type: Improvement
  Components: Examples
Reporter: shijinkui


The Flink distribution's directories have no clear responsibility for which 
types of files belong where. For example, the "opt" directory mixes library 
jars with example jars.

The mailing-list discussion is here: 
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Flink-dist-directory-management-td16784.html

After discussion, we settled on the distribution directory layout below:
- "examples" contains only example jars
- "opt" contains only optional library jars used at runtime
- "lib" contains only library jars that must be loaded at runtime
- "resources" contains only resource files used at runtime, such as web files



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6117) 'zookeeper.sasl.disable' does not take effect when starting CuratorFramework

2017-03-24 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-6117:
-
Issue Type: Sub-task  (was: Bug)
Parent: FLINK-5839

> 'zookeeper.sasl.disable' does not take effect when starting CuratorFramework
> 
>
> Key: FLINK-6117
> URL: https://issues.apache.org/jira/browse/FLINK-6117
> Project: Flink
>  Issue Type: Sub-task
>  Components: Client, JobManager
>Affects Versions: 1.2.0
> Environment: Ubuntu, non-secured
>Reporter: CanBin Zheng
>Assignee: CanBin Zheng
>  Labels: security
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> The value of 'zookeeper.sasl.disable' is not used correctly when starting 
> CuratorFramework.
> Here are all the settings relevant to high-availability in my flink-conf.yaml:
>   high-availability: zookeeper
>   high-availability.zookeeper.quorum: localhost:2181
>   high-availability.zookeeper.storageDir: hdfs:///flink/ha/
> Obviously, no explicit value is set for 'zookeeper.sasl.disable', so the 
> default value of 'true' (ConfigConstants.DEFAULT_ZOOKEEPER_SASL_DISABLE) 
> should be applied. But when FlinkYarnSessionCli & FlinkApplicationMasterRunner 
> start, both logs show that they attempt to connect to ZooKeeper in 'SASL' mode.
> The logs look like this:
> 2017-03-18 23:53:10,498 INFO  org.apache.zookeeper.ZooKeeper  
>   - Initiating client connection, connectString=localhost:2181 
> sessionTimeout=6 
> watcher=org.apache.flink.shaded.org.apache.curator.ConnectionState@5949eba8
> 2017-03-18 23:53:10,498 INFO  org.apache.zookeeper.ZooKeeper  
>   - Initiating client connection, connectString=localhost:2181 
> sessionTimeout=6 
> watcher=org.apache.flink.shaded.org.apache.curator.ConnectionState@5949eba8
> 2017-03-18 23:53:10,522 WARN  org.apache.zookeeper.ClientCnxn 
>   - SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> '/tmp/jaas-3047036396963510842.conf'. Will continue connection to Zookeeper 
> server without SASL authentication, if Zookeeper server allows it.
> 2017-03-18 23:53:10,522 WARN  org.apache.zookeeper.ClientCnxn 
>   - SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> '/tmp/jaas-3047036396963510842.conf'. Will continue connection to Zookeeper 
> server without SASL authentication, if Zookeeper server allows it.
> 2017-03-18 23:53:10,530 INFO  org.apache.zookeeper.ClientCnxn 
>   - Opening socket connection to server localhost/127.0.0.1:2181
> 2017-03-18 23:53:10,530 INFO  org.apache.zookeeper.ClientCnxn 
>   - Opening socket connection to server localhost/127.0.0.1:2181
> 2017-03-18 23:53:10,534 ERROR 
> org.apache.flink.shaded.org.apache.curator.ConnectionState- 
> Authentication failed
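One plausible shape for the fix (the class and method names are assumed; `zookeeper.sasl.client` is the system property the ZooKeeper client actually reads) is to propagate Flink's flag before Curator starts:

```java
// Hypothetical sketch: translate Flink's 'zookeeper.sasl.disable' flag into
// the system property the ZooKeeper client consults ("zookeeper.sasl.client"),
// so that it must be set before CuratorFramework is started.
public class SaslPropagation {

    static void applySaslSetting(boolean saslDisabled) {
        if (saslDisabled) {
            // tells the ZooKeeper client not to attempt SASL authentication
            System.setProperty("zookeeper.sasl.client", "false");
        }
    }

    public static void main(String[] args) {
        applySaslSetting(true); // the reported default is 'true' (SASL disabled)
        System.out.println(System.getProperty("zookeeper.sasl.client"));
    }
}
```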



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (FLINK-6148) The ZooKeeper client reports a SASL error when SASL is disabled

2017-03-24 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui reassigned FLINK-6148:


Assignee: (was: shijinkui)

> The ZooKeeper client reports a SASL error when SASL is disabled
> --
>
> Key: FLINK-6148
> URL: https://issues.apache.org/jira/browse/FLINK-6148
> Project: Flink
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.2.0
>Reporter: zhangrucong1982
>
> I am using Flink 1.2.0 on a YARN cluster. HA is configured in flink-conf.yaml, 
> but SASL is disabled. The configuration is:
> high-availability: zookeeper
> high-availability.zookeeper.quorum: 
> 100.106.40.102:2181,100.106.57.136:2181,100.106.41.233:2181
> high-availability.zookeeper.storageDir: hdfs:/flink
> high-availability.zookeeper.client.acl: open
> high-availability.zookeeper.path.root:  flink0308
> zookeeper.sasl.disable: true
> The client, JobManager, and TaskManager logs all contain the following error:
> 2017-03-22 11:18:24,662 WARN  org.apache.zookeeper.ClientCnxn 
>   - SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> '/tmp/jaas-441937039502263015.conf'. Will continue connection to Zookeeper 
> server without SASL authentication, if Zookeeper server allows it.
> 2017-03-22 11:18:24,663 ERROR 
> org.apache.flink.shaded.org.apache.curator.ConnectionState- 
> Authentication failed



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (FLINK-6148) The ZooKeeper client reports a SASL error when SASL is disabled

2017-03-24 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui reassigned FLINK-6148:


Assignee: shijinkui

> The ZooKeeper client reports a SASL error when SASL is disabled
> --
>
> Key: FLINK-6148
> URL: https://issues.apache.org/jira/browse/FLINK-6148
> Project: Flink
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.2.0
>Reporter: zhangrucong1982
>Assignee: shijinkui
>
> I am using Flink 1.2.0 on a YARN cluster. HA is configured in flink-conf.yaml, 
> but SASL is disabled. The configuration is:
> high-availability: zookeeper
> high-availability.zookeeper.quorum: 
> 100.106.40.102:2181,100.106.57.136:2181,100.106.41.233:2181
> high-availability.zookeeper.storageDir: hdfs:/flink
> high-availability.zookeeper.client.acl: open
> high-availability.zookeeper.path.root:  flink0308
> zookeeper.sasl.disable: true
> The client, JobManager, and TaskManager logs all contain the following error:
> 2017-03-22 11:18:24,662 WARN  org.apache.zookeeper.ClientCnxn 
>   - SASL configuration failed: 
> javax.security.auth.login.LoginException: No JAAS configuration section named 
> 'Client' was found in specified JAAS configuration file: 
> '/tmp/jaas-441937039502263015.conf'. Will continue connection to Zookeeper 
> server without SASL authentication, if Zookeeper server allows it.
> 2017-03-22 11:18:24,663 ERROR 
> org.apache.flink.shaded.org.apache.curator.ConnectionState- 
> Authentication failed



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5217) Deprecated interface Checkpointed make clear suggestion

2017-03-24 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15941522#comment-15941522
 ] 

shijinkui commented on FLINK-5217:
--

ping [~srichter]

> Deprecated interface Checkpointed make clear suggestion
> ---
>
> Key: FLINK-5217
> URL: https://issues.apache.org/jira/browse/FLINK-5217
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Reporter: shijinkui
> Fix For: 1.2.1
>
>
> package org.apache.flink.streaming.api.checkpoint;
> @Deprecated
> @PublicEvolving
> public interface Checkpointed<T extends Serializable> extends 
> CheckpointedRestoring<T>
> This deprecated interface should clearly state in which version it will be 
> removed, and which interface should be used instead.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-5860) Replace all the file creating from java.io.tmpdir with TemporaryFolder

2017-03-24 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15941520#comment-15941520
 ] 

shijinkui commented on FLINK-5860:
--

ping [~yaroslav.mykhaylov] 

> Replace all the file creating from java.io.tmpdir with TemporaryFolder
> --
>
> Key: FLINK-5860
> URL: https://issues.apache.org/jira/browse/FLINK-5860
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: shijinkui
>Assignee: Yaroslav Mykhaylov
>  Labels: starter
>
> Search for `System.getProperty("java.io.tmpdir")` across the whole Flink 
> project to get the list of affected unit tests. Replace all file creation 
> under `java.io.tmpdir` with JUnit's TemporaryFolder.
> Who can fix this problem thoroughly?
> ```
> $ grep -ri 'System.getProperty("java.io.tmpdir")' .
> ./flink-connectors/flink-connector-cassandra/src/test/java/org/apache/flink/streaming/connectors/cassandra/example/CassandraTupleWriteAheadSinkExample.java:
>   env.setStateBackend(new FsStateBackend("file:///" + 
> System.getProperty("java.io.tmpdir") + "/flink/backend"));
> ./flink-connectors/flink-connector-kafka-0.10/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaTestEnvironmentImpl.java:
> File tempDir = new File(System.getProperty("java.io.tmpdir"));
> ./flink-connectors/flink-connector-kafka-0.8/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaTestEnvironmentImpl.java:
>  File tempDir = new File(System.getProperty("java.io.tmpdir"));
> ./flink-connectors/flink-connector-kafka-0.9/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaTestEnvironmentImpl.java:
>  File tempDir = new File(System.getProperty("java.io.tmpdir"));
> ./flink-contrib/flink-statebackend-rocksdb/src/test/java/org/apache/flink/contrib/streaming/state/RocksDBStateBackendConfigTest.java:
>  return getMockEnvironment(new File[] { new 
> File(System.getProperty("java.io.tmpdir")) });
> ./flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java:
>public static final String DEFAULT_TASK_MANAGER_TMP_PATH = 
> System.getProperty("java.io.tmpdir");
> ./flink-core/src/test/java/org/apache/flink/api/common/io/EnumerateNestedFilesTest.java:
>   final String tempPath = System.getProperty("java.io.tmpdir");
> ./flink-core/src/test/java/org/apache/flink/testutils/TestConfigUtils.java:   
> final File tempDir = new File(System.getProperty("java.io.tmpdir"));
> ./flink-core/src/test/java/org/apache/flink/testutils/TestFileUtils.java: 
> File tempDir = new File(System.getProperty("java.io.tmpdir"));
> ./flink-core/src/test/java/org/apache/flink/testutils/TestFileUtils.java: 
> File tempDir = new File(System.getProperty("java.io.tmpdir"));
> ./flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/clustering/util/KMeansDataGenerator.java:
>   final String outDir = params.get("output", 
> System.getProperty("java.io.tmpdir"));
> ./flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/ml/util/LinearRegressionDataGenerator.java:
> final String tmpDir = System.getProperty("java.io.tmpdir");
> ./flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/relational/util/WebLogDataGenerator.java:
>   final String outPath = System.getProperty("java.io.tmpdir");
> ./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
>   File out = new File(System.getProperty("java.io.tmpdir"), 
> "jarcreatortest.jar");
> ./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
>   File out = new File(System.getProperty("java.io.tmpdir"), 
> "jarcreatortest.jar");
> ./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
>   File out = new File(System.getProperty("java.io.tmpdir"), 
> "jarcreatortest.jar");
> ./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
>   File out = new File(System.getProperty("java.io.tmpdir"), 
> "jarcreatortest.jar");
> ./flink-libraries/flink-python/src/main/java/org/apache/flink/python/api/PythonPlanBinder.java:
>public static final String FLINK_PYTHON_FILE_PATH = 
> System.getProperty("java.io.tmpdir") + File.separator + "flink_plan";
> ./flink-libraries/flink-python/src/main/java/org/apache/flink/python/api/PythonPlanBinder.java:
>public static final String FLINK_TMP_DATA_DIR = 
> System.getProperty("java.io.tmpdir") + File.separator + "flink_data";
> ./flink-libraries/flink-python/src/main/java/org/apache/flink/python/api/PythonPlanBinder.java:
>FLINK_HDFS_PATH = "file:" + 
> 

[jira] [Commented] (FLINK-6060) reference nonexistent class in the scaladoc

2017-03-24 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15941519#comment-15941519
 ] 

shijinkui commented on FLINK-6060:
--

[~aljoscha] Sorry for my unclear description.

For example, take the class TaskOperationResult referenced in the scaladoc 
below. Actually, TaskOperationResult does not exist, or its file has been 
renamed. In such scaladoc, we should correct the referenced class.

  /**
   * Submits a task to the task manager. The result is to this message is a
   * [[TaskOperationResult]] message.
   *
   * @param tasks Descriptor which contains the information to start the task.
   */
  case class SubmitTask(tasks: TaskDeploymentDescriptor)
extends TaskMessage with RequiresLeaderSessionID


> reference nonexistent class in the scaladoc
> ---
>
> Key: FLINK-6060
> URL: https://issues.apache.org/jira/browse/FLINK-6060
> Project: Flink
>  Issue Type: Wish
>  Components: Scala API
>Reporter: shijinkui
>
> TaskMessages.scala
> ConnectedStreams.scala
> DataStream.scala
> Who can fix it?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-6060) reference nonexistent class in the scaladoc

2017-03-24 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-6060:
-
Summary: reference nonexistent class in the scaladoc  (was: not exist class 
referance in the scala function annotation)

> reference nonexistent class in the scaladoc
> ---
>
> Key: FLINK-6060
> URL: https://issues.apache.org/jira/browse/FLINK-6060
> Project: Flink
>  Issue Type: Wish
>  Components: Scala API
>Reporter: shijinkui
>
> TaskMessages.scala
> ConnectedStreams.scala
> DataStream.scala
> Who can fix it?





[jira] [Commented] (FLINK-5754) released tag missing .gitigonore .travis.yml .gitattributes

2017-03-24 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15941516#comment-15941516
 ] 

shijinkui commented on FLINK-5754:
--

[~greghogan] At first, I assumed the Flink tag worked the same way as in other 
open source projects, so I checked out a branch from the tag. We had to reset 
it, since we had already gone too far forward.

If there is no special reason not to, can we avoid deleting anything in the 
release tag for the next milestone, following the common tag/release convention?
If so, it will be very convenient to develop a private Flink version, and there 
will be no difficulty merging back into the Flink community code base.

Thanks



> released tag missing .gitigonore  .travis.yml .gitattributes
> 
>
> Key: FLINK-5754
> URL: https://issues.apache.org/jira/browse/FLINK-5754
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: shijinkui
>
> released tag missing .gitigonore  .travis.yml .gitattributes.
> When make a release version, should only replace the version.
> for example: https://github.com/apache/spark/tree/v2.1.0





[jira] [Commented] (FLINK-5650) Flink-python tests executing cost too long time

2017-03-17 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929946#comment-15929946
 ] 

shijinkui commented on FLINK-5650:
--

Good job, Thanks.

> Flink-python tests executing cost too long time
> ---
>
> Key: FLINK-5650
> URL: https://issues.apache.org/jira/browse/FLINK-5650
> Project: Flink
>  Issue Type: Bug
>  Components: Python API, Tests
>Affects Versions: 1.2.0, 1.3.0
>Reporter: shijinkui
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: osx
> Fix For: 1.3.0, 1.2.1
>
>
> When executing `mvn clean test` in flink-python, it waits more than half an 
> hour after the console output below:
> ---
>  T E S T S
> ---
> Running org.apache.flink.python.api.PythonPlanBinderTest
> log4j:WARN No appenders could be found for logger 
> (org.apache.flink.python.api.PythonPlanBinderTest).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> The stack below:
> "main" prio=5 tid=0x7f8d7780b800 nid=0x1c03 waiting on condition 
> [0x79fd8000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.startPython(PythonPlanStreamer.java:70)
>   at 
> org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.open(PythonPlanStreamer.java:50)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.startPython(PythonPlanBinder.java:211)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.runPlan(PythonPlanBinder.java:141)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.main(PythonPlanBinder.java:114)
>   at 
> org.apache.flink.python.api.PythonPlanBinderTest.testProgram(PythonPlanBinderTest.java:83)
>   at 
> org.apache.flink.test.util.JavaProgramTestBase.testJobWithoutObjectReuse(JavaProgramTestBase.java:174)
> this is the jstack:
> https://gist.github.com/shijinkui/af47e8bc6c9f748336bf52efd3df94b0





[jira] [Updated] (FLINK-5860) Replace all the file creating from java.io.tmpdir with TemporaryFolder

2017-03-16 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5860:
-
Labels: starter  (was: )

> Replace all the file creating from java.io.tmpdir with TemporaryFolder
> --
>
> Key: FLINK-5860
> URL: https://issues.apache.org/jira/browse/FLINK-5860
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: shijinkui
>Assignee: Yaroslav Mykhaylov
>  Labels: starter
>
> Search for `System.getProperty("java.io.tmpdir")` in the whole Flink project. 
> It will yield a list of unit tests. Replace all file creation based on 
> `java.io.tmpdir` with TemporaryFolder.
> Who can fix this problem thoroughly?
> ```
> $ grep -ri 'System.getProperty("java.io.tmpdir")' .
> ./flink-connectors/flink-connector-cassandra/src/test/java/org/apache/flink/streaming/connectors/cassandra/example/CassandraTupleWriteAheadSinkExample.java:
>   env.setStateBackend(new FsStateBackend("file:///" + 
> System.getProperty("java.io.tmpdir") + "/flink/backend"));
> ./flink-connectors/flink-connector-kafka-0.10/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaTestEnvironmentImpl.java:
> File tempDir = new File(System.getProperty("java.io.tmpdir"));
> ./flink-connectors/flink-connector-kafka-0.8/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaTestEnvironmentImpl.java:
>  File tempDir = new File(System.getProperty("java.io.tmpdir"));
> ./flink-connectors/flink-connector-kafka-0.9/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaTestEnvironmentImpl.java:
>  File tempDir = new File(System.getProperty("java.io.tmpdir"));
> ./flink-contrib/flink-statebackend-rocksdb/src/test/java/org/apache/flink/contrib/streaming/state/RocksDBStateBackendConfigTest.java:
>  return getMockEnvironment(new File[] { new 
> File(System.getProperty("java.io.tmpdir")) });
> ./flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java:
>public static final String DEFAULT_TASK_MANAGER_TMP_PATH = 
> System.getProperty("java.io.tmpdir");
> ./flink-core/src/test/java/org/apache/flink/api/common/io/EnumerateNestedFilesTest.java:
>   final String tempPath = System.getProperty("java.io.tmpdir");
> ./flink-core/src/test/java/org/apache/flink/testutils/TestConfigUtils.java:   
> final File tempDir = new File(System.getProperty("java.io.tmpdir"));
> ./flink-core/src/test/java/org/apache/flink/testutils/TestFileUtils.java: 
> File tempDir = new File(System.getProperty("java.io.tmpdir"));
> ./flink-core/src/test/java/org/apache/flink/testutils/TestFileUtils.java: 
> File tempDir = new File(System.getProperty("java.io.tmpdir"));
> ./flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/clustering/util/KMeansDataGenerator.java:
>   final String outDir = params.get("output", 
> System.getProperty("java.io.tmpdir"));
> ./flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/ml/util/LinearRegressionDataGenerator.java:
> final String tmpDir = System.getProperty("java.io.tmpdir");
> ./flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/relational/util/WebLogDataGenerator.java:
>   final String outPath = System.getProperty("java.io.tmpdir");
> ./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
>   File out = new File(System.getProperty("java.io.tmpdir"), 
> "jarcreatortest.jar");
> ./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
>   File out = new File(System.getProperty("java.io.tmpdir"), 
> "jarcreatortest.jar");
> ./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
>   File out = new File(System.getProperty("java.io.tmpdir"), 
> "jarcreatortest.jar");
> ./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
>   File out = new File(System.getProperty("java.io.tmpdir"), 
> "jarcreatortest.jar");
> ./flink-libraries/flink-python/src/main/java/org/apache/flink/python/api/PythonPlanBinder.java:
>public static final String FLINK_PYTHON_FILE_PATH = 
> System.getProperty("java.io.tmpdir") + File.separator + "flink_plan";
> ./flink-libraries/flink-python/src/main/java/org/apache/flink/python/api/PythonPlanBinder.java:
>public static final String FLINK_TMP_DATA_DIR = 
> System.getProperty("java.io.tmpdir") + File.separator + "flink_data";
> ./flink-libraries/flink-python/src/main/java/org/apache/flink/python/api/PythonPlanBinder.java:
>FLINK_HDFS_PATH = "file:" + 
> System.getProperty("java.io.tmpdir") + File.separator + 
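The replacement the issue asks for is JUnit's TemporaryFolder rule (a @Rule field that creates a fresh directory per test and deletes it afterwards). Since a compilable JUnit snippet needs the test dependency on the classpath, the stdlib-only sketch below shows the same idea the rule implements; the class name and directory prefix are illustrative.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TempDirSketch {
    public static void main(String[] args) throws IOException {
        // Before: tests share java.io.tmpdir, and leftovers from one
        // run can leak into the next.
        File shared = new File(System.getProperty("java.io.tmpdir"));
        System.out.println(shared.isDirectory());

        // After: each test gets its own fresh directory. This is what
        // JUnit's TemporaryFolder rule creates under the hood, and the
        // rule also deletes it automatically when the test finishes.
        Path isolated = Files.createTempDirectory("flink-test-");
        System.out.println(Files.isDirectory(isolated));

        Files.delete(isolated); // TemporaryFolder does this cleanup for you
        System.out.println(Files.exists(isolated));
    }
}
```

With the rule in place, a line like `new File(System.getProperty("java.io.tmpdir"))` from the grep output turns into a call such as `tempFolder.newFolder(...)` inside the test.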

[jira] [Commented] (FLINK-5754) released tag missing .gitigonore .travis.yml .gitattributes

2017-03-16 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15929365#comment-15929365
 ] 

shijinkui commented on FLINK-5754:
--

[~greghogan] I had checked out a branch from tag 1.2 in production. In general, 
a release tag is the final version, not a branch. Am I right?

IMO, the release tag commit should only change the project version, and 
shouldn't change anything other than the version number.

I looked at some other Apache projects, and they follow this rule. Can we adopt 
it as the normal rule? Must the hidden files be deleted in the release tag?

https://github.com/apache/spark/commit/cd0a08361e2526519e7c131c42116bf56fa62c76
https://github.com/apache/hadoop/commit/94152e171178d34864ddf6362239f3c2dda0965f
https://github.com/apache/storm/commit/eac433b0beb3798c4723deb39b3c4fad446378f4

> released tag missing .gitigonore  .travis.yml .gitattributes
> 
>
> Key: FLINK-5754
> URL: https://issues.apache.org/jira/browse/FLINK-5754
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: shijinkui
>
> released tag missing .gitigonore  .travis.yml .gitattributes.
> When make a release version, should only replace the version.
> for example: https://github.com/apache/spark/tree/v2.1.0





[jira] [Commented] (FLINK-5650) Flink-python tests executing cost too long time

2017-03-16 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15927543#comment-15927543
 ] 

shijinkui commented on FLINK-5650:
--

[~Zentol], your PR works. The flink-python unit tests now take one minute. Very good :)

[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 01:09 min
[INFO] Finished at: 2017-03-16T13:57:12+08:00
[INFO] Final Memory: 22M/268M

> Flink-python tests executing cost too long time
> ---
>
> Key: FLINK-5650
> URL: https://issues.apache.org/jira/browse/FLINK-5650
> Project: Flink
>  Issue Type: Bug
>  Components: Python API, Tests
>Affects Versions: 1.2.0
>Reporter: shijinkui
>Priority: Critical
>  Labels: osx
> Fix For: 1.2.1
>
>
> When executing `mvn clean test` in flink-python, it waits more than half an 
> hour after the console output below:
> ---
>  T E S T S
> ---
> Running org.apache.flink.python.api.PythonPlanBinderTest
> log4j:WARN No appenders could be found for logger 
> (org.apache.flink.python.api.PythonPlanBinderTest).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> The stack below:
> "main" prio=5 tid=0x7f8d7780b800 nid=0x1c03 waiting on condition 
> [0x79fd8000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.startPython(PythonPlanStreamer.java:70)
>   at 
> org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.open(PythonPlanStreamer.java:50)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.startPython(PythonPlanBinder.java:211)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.runPlan(PythonPlanBinder.java:141)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.main(PythonPlanBinder.java:114)
>   at 
> org.apache.flink.python.api.PythonPlanBinderTest.testProgram(PythonPlanBinderTest.java:83)
>   at 
> org.apache.flink.test.util.JavaProgramTestBase.testJobWithoutObjectReuse(JavaProgramTestBase.java:174)
> this is the jstack:
> https://gist.github.com/shijinkui/af47e8bc6c9f748336bf52efd3df94b0





[jira] [Updated] (FLINK-5650) Flink-python tests executing cost too long time

2017-03-15 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5650:
-
Labels: osx  (was: )

> Flink-python tests executing cost too long time
> ---
>
> Key: FLINK-5650
> URL: https://issues.apache.org/jira/browse/FLINK-5650
> Project: Flink
>  Issue Type: Bug
>  Components: Python API, Tests
>Affects Versions: 1.2.0
>Reporter: shijinkui
>Priority: Critical
>  Labels: osx
> Fix For: 1.2.1
>
>
> When executing `mvn clean test` in flink-python, it waits more than half an 
> hour after the console output below:
> ---
>  T E S T S
> ---
> Running org.apache.flink.python.api.PythonPlanBinderTest
> log4j:WARN No appenders could be found for logger 
> (org.apache.flink.python.api.PythonPlanBinderTest).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> The stack below:
> "main" prio=5 tid=0x7f8d7780b800 nid=0x1c03 waiting on condition 
> [0x79fd8000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.startPython(PythonPlanStreamer.java:70)
>   at 
> org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.open(PythonPlanStreamer.java:50)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.startPython(PythonPlanBinder.java:211)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.runPlan(PythonPlanBinder.java:141)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.main(PythonPlanBinder.java:114)
>   at 
> org.apache.flink.python.api.PythonPlanBinderTest.testProgram(PythonPlanBinderTest.java:83)
>   at 
> org.apache.flink.test.util.JavaProgramTestBase.testJobWithoutObjectReuse(JavaProgramTestBase.java:174)
> this is the jstack:
> https://gist.github.com/shijinkui/af47e8bc6c9f748336bf52efd3df94b0





[jira] [Created] (FLINK-6060) not exist class referance in the scala function annotation

2017-03-15 Thread shijinkui (JIRA)
shijinkui created FLINK-6060:


 Summary: not exist class referance in the scala function annotation
 Key: FLINK-6060
 URL: https://issues.apache.org/jira/browse/FLINK-6060
 Project: Flink
  Issue Type: Wish
Reporter: shijinkui


TaskMessages.scala
ConnectedStreams.scala
DataStream.scala

Who can fix it?





[jira] [Comment Edited] (FLINK-5650) Flink-python tests executing cost too long time

2017-03-15 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925833#comment-15925833
 ] 

shijinkui edited comment on FLINK-5650 at 3/15/17 10:11 AM:


Also, we could make PythonPlanStreamer support asynchronous execution in a 
thread pool, instead of a blocking single process. [~StephanEwen]

I think such a badly designed unit test shouldn't be merged into the code base.


was (Author: shijinkui):
Also we can make PythonPlanStreamer support async executing in the thread pool, 
instead of blocked single process. [~StephanEwen]

> Flink-python tests executing cost too long time
> ---
>
> Key: FLINK-5650
> URL: https://issues.apache.org/jira/browse/FLINK-5650
> Project: Flink
>  Issue Type: Bug
>  Components: Python API, Tests
>Affects Versions: 1.2.0
>Reporter: shijinkui
>Priority: Critical
> Fix For: 1.2.1
>
>
> When executing `mvn clean test` in flink-python, it waits more than half an 
> hour after the console output below:
> ---
>  T E S T S
> ---
> Running org.apache.flink.python.api.PythonPlanBinderTest
> log4j:WARN No appenders could be found for logger 
> (org.apache.flink.python.api.PythonPlanBinderTest).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> The stack below:
> "main" prio=5 tid=0x7f8d7780b800 nid=0x1c03 waiting on condition 
> [0x79fd8000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.startPython(PythonPlanStreamer.java:70)
>   at 
> org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.open(PythonPlanStreamer.java:50)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.startPython(PythonPlanBinder.java:211)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.runPlan(PythonPlanBinder.java:141)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.main(PythonPlanBinder.java:114)
>   at 
> org.apache.flink.python.api.PythonPlanBinderTest.testProgram(PythonPlanBinderTest.java:83)
>   at 
> org.apache.flink.test.util.JavaProgramTestBase.testJobWithoutObjectReuse(JavaProgramTestBase.java:174)
> this is the jstack:
> https://gist.github.com/shijinkui/af47e8bc6c9f748336bf52efd3df94b0





[jira] [Commented] (FLINK-5650) Flink-python tests executing cost too long time

2017-03-15 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925833#comment-15925833
 ] 

shijinkui commented on FLINK-5650:
--

Also, we could make PythonPlanStreamer support asynchronous execution in a 
thread pool, instead of a blocking single process. [~StephanEwen]
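The suggestion above, replacing the blocking sleep-based startup with bounded asynchronous execution in a thread pool, could look roughly like this. This is a sketch under stated assumptions, not the real PythonPlanStreamer API: a trivial callable stands in for launching the external Python process.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class AsyncPlanStreamerSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Stand-in for starting the Python planner subprocess.
        Future<String> planner = pool.submit(() -> {
            Thread.sleep(100); // simulated startup latency
            return "plan-ready";
        });
        try {
            // Bounded wait instead of an open-ended Thread.sleep() poll loop:
            System.out.println(planner.get(5, TimeUnit.SECONDS));
        } catch (TimeoutException e) {
            planner.cancel(true); // give up instead of hanging the test run
            System.out.println("planner-timeout");
        } finally {
            pool.shutdown();
        }
    }
}
```

With a bounded `Future.get(timeout, unit)` the test fails fast when the Python process never comes up, rather than blocking the whole build.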

> Flink-python tests executing cost too long time
> ---
>
> Key: FLINK-5650
> URL: https://issues.apache.org/jira/browse/FLINK-5650
> Project: Flink
>  Issue Type: Bug
>  Components: Python API, Tests
>Affects Versions: 1.2.0
>Reporter: shijinkui
>Priority: Critical
> Fix For: 1.2.1
>
>
> When executing `mvn clean test` in flink-python, it waits more than half an 
> hour after the console output below:
> ---
>  T E S T S
> ---
> Running org.apache.flink.python.api.PythonPlanBinderTest
> log4j:WARN No appenders could be found for logger 
> (org.apache.flink.python.api.PythonPlanBinderTest).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> The stack below:
> "main" prio=5 tid=0x7f8d7780b800 nid=0x1c03 waiting on condition 
> [0x79fd8000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.startPython(PythonPlanStreamer.java:70)
>   at 
> org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.open(PythonPlanStreamer.java:50)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.startPython(PythonPlanBinder.java:211)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.runPlan(PythonPlanBinder.java:141)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.main(PythonPlanBinder.java:114)
>   at 
> org.apache.flink.python.api.PythonPlanBinderTest.testProgram(PythonPlanBinderTest.java:83)
>   at 
> org.apache.flink.test.util.JavaProgramTestBase.testJobWithoutObjectReuse(JavaProgramTestBase.java:174)
> this is the jstack:
> https://gist.github.com/shijinkui/af47e8bc6c9f748336bf52efd3df94b0





[jira] [Commented] (FLINK-5650) Flink-python tests executing cost too long time

2017-03-15 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925824#comment-15925824
 ] 

shijinkui commented on FLINK-5650:
--

Now I have to execute `mvn clean test -pl '!flink-libraries/flink-python'` to 
exclude the flink-python module.

> Flink-python tests executing cost too long time
> ---
>
> Key: FLINK-5650
> URL: https://issues.apache.org/jira/browse/FLINK-5650
> Project: Flink
>  Issue Type: Bug
>  Components: Python API, Tests
>Affects Versions: 1.2.0
>Reporter: shijinkui
>Priority: Critical
> Fix For: 1.2.1
>
>
> When executing `mvn clean test` in flink-python, it waits more than half an 
> hour after the console output below:
> ---
>  T E S T S
> ---
> Running org.apache.flink.python.api.PythonPlanBinderTest
> log4j:WARN No appenders could be found for logger 
> (org.apache.flink.python.api.PythonPlanBinderTest).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> The stack below:
> "main" prio=5 tid=0x7f8d7780b800 nid=0x1c03 waiting on condition 
> [0x79fd8000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.startPython(PythonPlanStreamer.java:70)
>   at 
> org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.open(PythonPlanStreamer.java:50)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.startPython(PythonPlanBinder.java:211)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.runPlan(PythonPlanBinder.java:141)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.main(PythonPlanBinder.java:114)
>   at 
> org.apache.flink.python.api.PythonPlanBinderTest.testProgram(PythonPlanBinderTest.java:83)
>   at 
> org.apache.flink.test.util.JavaProgramTestBase.testJobWithoutObjectReuse(JavaProgramTestBase.java:174)
> this is the jstack:
> https://gist.github.com/shijinkui/af47e8bc6c9f748336bf52efd3df94b0





[jira] [Commented] (FLINK-5754) released tag missing .gitigonore .travis.yml .gitattributes

2017-03-15 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15925816#comment-15925816
 ] 

shijinkui commented on FLINK-5754:
--

Ping [~greghogan]: where did you create the tag, on GitHub or in the tools 
scripts?

> released tag missing .gitigonore  .travis.yml .gitattributes
> 
>
> Key: FLINK-5754
> URL: https://issues.apache.org/jira/browse/FLINK-5754
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: shijinkui
>
> released tag missing .gitigonore  .travis.yml .gitattributes.
> When make a release version, should only replace the version.
> for example: https://github.com/apache/spark/tree/v2.1.0





[jira] [Commented] (FLINK-5756) When there are many values under the same key in ListState, RocksDBStateBackend performances poor

2017-03-13 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15923530#comment-15923530
 ] 

shijinkui commented on FLINK-5756:
--

[~StephanEwen] Thanks for your reply. [~SyinchwunLeo], please test the 
mini-benchmark.
FLINK-5715 is nice.

> When there are many values under the same key in ListState, 
> RocksDBStateBackend performances poor
> -
>
> Key: FLINK-5756
> URL: https://issues.apache.org/jira/browse/FLINK-5756
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0
> Environment: CentOS 7.2
>Reporter: Syinchwun Leo
>
> When using RocksDB as the StateBackend, if there are many values under the 
> same key in ListState, the windowState.get() operator performs very poorly. 
> I also tried RocksDB version 4.11.2, and the performance is also very 
> poor. The problem is likely related to RocksDB's own get() operator 
> after using merge(). The problem may influence the window operation's 
> performance when a very large ListState is used. I tried to merge 5 
> values under the same key in RocksDB; it costs 120 seconds to execute the 
> get() operation.
> ///
> The flink's code is as follows:
> {code}
> class SEventSource extends RichSourceFunction [SEvent] {
>   private var count = 0L
>   private val alphabet = 
> "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWZYX0987654321"
>   override def run(sourceContext: SourceContext[SEvent]): Unit = {
> while (true) {
>   for (i <- 0 until 5000) {
> sourceContext.collect(SEvent(1, "hello-"+count, alphabet,1))
> count += 1L
>   }
>   Thread.sleep(1000)
> }
>   }
> }
> env.addSource(new SEventSource)
>   .assignTimestampsAndWatermarks(new 
> AssignerWithPeriodicWatermarks[SEvent] {
> override def getCurrentWatermark: Watermark = {
>   new Watermark(System.currentTimeMillis())
> }
> override def extractTimestamp(t: SEvent, l: Long): Long = {
>   System.currentTimeMillis()
> }
>   })
>   .keyBy(0)
>   .window(SlidingEventTimeWindows.of(Time.seconds(20), Time.seconds(2)))
>   .apply(new WindowStatistic)
>   .map(x => (System.currentTimeMillis(), x))
>   .print()
> {code}
> 
> The RocksDB Test code:
> {code}
> val stringAppendOperator = new StringAppendOperator
> val options = new Options()
> options.setCompactionStyle(CompactionStyle.LEVEL)
>   .setCompressionType(CompressionType.SNAPPY_COMPRESSION)
>   .setLevelCompactionDynamicLevelBytes(true)
>   .setIncreaseParallelism(4)
>   .setUseFsync(true)
>   .setMaxOpenFiles(-1)
>   .setCreateIfMissing(true)
>   .setMergeOperator(stringAppendOperator)
> val write_options = new WriteOptions
> write_options.setSync(false)
> val rocksDB = RocksDB.open(options, "/**/Data/")
> val key = "key"
> val value = 
> "abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ7890654321"
> val beginmerge = System.currentTimeMillis()
> for(i <- 0 to 5) {
>   rocksDB.merge(key.getBytes(), ("s"+ i + value).getBytes())
>   //rocksDB.put(key.getBytes, value.getBytes)
> }
> println("finish")
> val begin = System.currentTimeMillis()
> rocksDB.get(key.getBytes)
> val end = System.currentTimeMillis()
> println("merge cost:" + (begin - beginmerge))
> println("Time consuming:" + (end - begin))
>   }
> }
> {code}





[jira] [Commented] (FLINK-5756) When there are many values under the same key in ListState, RocksDBStateBackend performances poor

2017-03-13 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15907043#comment-15907043
 ] 

shijinkui commented on FLINK-5756:
--

Hi [~StephanEwen],
do we have any tuning techniques for this problem, which originates in 
RocksDB's get()?
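For background on why get() after many merge() calls is the costly step: RocksDB's merge operator only records an operand at write time, and get() must materialize every recorded operand into one value at read time. A plain-Java analogy of that read-time materialization (this is not the RocksDB API; class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class LazyMergeSketch {
    private final List<String> operands = new ArrayList<>();

    // Analogue of RocksDB merge(): cheap, it just records the operand.
    void merge(String value) {
        operands.add(value);
    }

    // Analogue of get() with a string-append merge operator: it has to
    // materialize all operands into one value, so its cost grows with
    // the number of merges accumulated since the last compaction.
    String get() {
        return String.join(",", operands);
    }

    public static void main(String[] args) {
        LazyMergeSketch state = new LazyMergeSketch();
        for (int i = 0; i < 5; i++) {
            state.merge("s" + i);
        }
        System.out.println(state.get());
    }
}
```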

> When there are many values under the same key in ListState, 
> RocksDBStateBackend performances poor
> -
>
> Key: FLINK-5756
> URL: https://issues.apache.org/jira/browse/FLINK-5756
> Project: Flink
>  Issue Type: Improvement
>  Components: State Backends, Checkpointing
>Affects Versions: 1.2.0
> Environment: CentOS 7.2
>Reporter: Syinchwun Leo
>
> When using RocksDB as the StateBackend, if there are many values under the 
> same key in ListState, the windowState.get() operator performs very poorly. 
> I also tried RocksDB version 4.11.2, and the performance is also very 
> poor. The problem is likely related to RocksDB's own get() operator 
> after using merge(). The problem may influence the window operation's 
> performance when a very large ListState is used. I tried to merge 5 
> values under the same key in RocksDB; it costs 120 seconds to execute the 
> get() operation.
> ///
> The flink's code is as follows:
> {code}
> class SEventSource extends RichSourceFunction [SEvent] {
>   private var count = 0L
>   private val alphabet = 
> "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWZYX0987654321"
>   override def run(sourceContext: SourceContext[SEvent]): Unit = {
> while (true) {
>   for (i <- 0 until 5000) {
> sourceContext.collect(SEvent(1, "hello-"+count, alphabet,1))
> count += 1L
>   }
>   Thread.sleep(1000)
> }
>   }
> }
> env.addSource(new SEventSource)
>   .assignTimestampsAndWatermarks(new 
> AssignerWithPeriodicWatermarks[SEvent] {
> override def getCurrentWatermark: Watermark = {
>   new Watermark(System.currentTimeMillis())
> }
> override def extractTimestamp(t: SEvent, l: Long): Long = {
>   System.currentTimeMillis()
> }
>   })
>   .keyBy(0)
>   .window(SlidingEventTimeWindows.of(Time.seconds(20), Time.seconds(2)))
>   .apply(new WindowStatistic)
>   .map(x => (System.currentTimeMillis(), x))
>   .print()
> {code}
> 
> The RocksDB Test code:
> {code}
> val stringAppendOperator = new StringAppendOperator
> val options = new Options()
> options.setCompactionStyle(CompactionStyle.LEVEL)
>   .setCompressionType(CompressionType.SNAPPY_COMPRESSION)
>   .setLevelCompactionDynamicLevelBytes(true)
>   .setIncreaseParallelism(4)
>   .setUseFsync(true)
>   .setMaxOpenFiles(-1)
>   .setCreateIfMissing(true)
>   .setMergeOperator(stringAppendOperator)
> val write_options = new WriteOptions
> write_options.setSync(false)
> val rocksDB = RocksDB.open(options, "/**/Data/")
> val key = "key"
> val value = 
> "abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ7890654321"
> val beginmerge = System.currentTimeMillis()
> for(i <- 0 to 5) {
>   rocksDB.merge(key.getBytes(), ("s"+ i + value).getBytes())
>   //rocksDB.put(key.getBytes, value.getBytes)
> }
> println("finish")
> val begin = System.currentTimeMillis()
> rocksDB.get(key.getBytes)
> val end = System.currentTimeMillis()
> println("merge cost:" + (begin - beginmerge))
> println("Time consuming:" + (end - begin))
>   }
> }
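This cost profile can be reproduced without RocksDB: with a string-append merge operator, each merge() only records an operand, while get() must fold every recorded operand into one value, so reads get slower as merges accumulate. A minimal, dependency-free Java sketch of that read path (class and method names are ours, purely illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class MergeReadCost {
    // Simulates RocksDB's StringAppendOperator: merge() only records an operand.
    static final List<String> operands = new ArrayList<>();

    static void merge(String value) {
        operands.add(value); // O(1) per merge -- writes stay fast
    }

    // get() must fold all operands into one value -- cost grows with their count.
    static String get() {
        StringBuilder sb = new StringBuilder();
        for (String op : operands) {
            if (sb.length() > 0) {
                sb.append(',');
            }
            sb.append(op);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            merge("s" + i);
        }
        System.out.println(get()); // prints "s0,s1,s2,s3,s4"
    }
}
```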



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-5902) Some images can not show in IE

2017-03-05 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5902:
-
Issue Type: Sub-task  (was: Bug)
Parent: FLINK-5839

> Some images can not show in IE
> --
>
> Key: FLINK-5902
> URL: https://issues.apache.org/jira/browse/FLINK-5902
> Project: Flink
>  Issue Type: Sub-task
>  Components: Webfrontend
> Environment: IE
>Reporter: Tao Wang
> Attachments: chrome is ok.png, IE 11 with problem.png
>
>
> Some images on the Overview page do not show in IE, although they display fine 
> in Chrome.
> I'm using IE 11, but I think it is the same with IE 9. I'll paste the screenshot 
> later.





[jira] [Assigned] (FLINK-5818) change checkpoint dir permission to 700 for security reason

2017-02-27 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui reassigned FLINK-5818:


Assignee: Tao Wang

> change checkpoint dir permission to 700 for security reason
> ---
>
> Key: FLINK-5818
> URL: https://issues.apache.org/jira/browse/FLINK-5818
> Project: Flink
>  Issue Type: Sub-task
>  Components: Security, State Backends, Checkpointing
>Reporter: Tao Wang
>Assignee: Tao Wang
>
> Currently the checkpoint directory is created without a specified permission, so 
> it is easy for another user to delete or read files under it, which can cause a 
> restore failure or an information leak.
> It's better to lower it down to 700.
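On POSIX file systems the proposed tightening can be applied at creation time. A minimal stdlib Java sketch (the directory prefix is hypothetical), creating a directory with mode 700 and verifying the permission set:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class PrivateCheckpointDir {
    public static void main(String[] args) throws Exception {
        // "rwx------" is the symbolic form of mode 700: owner-only access.
        Set<PosixFilePermission> rwxOwnerOnly =
                PosixFilePermissions.fromString("rwx------");
        Path dir = Files.createTempDirectory("checkpoints-",
                PosixFilePermissions.asFileAttribute(rwxOwnerOnly));
        // Only the creating user can list, read, or delete files under dir now.
        System.out.println(Files.getPosixFilePermissions(dir).equals(rwxOwnerOnly));
        Files.delete(dir);
    }
}
```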





[jira] [Closed] (FLINK-5546) java.io.tmpdir setted as project build directory in surefire plugin

2017-02-21 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui closed FLINK-5546.

Resolution: Duplicate

resolved in FLINK-5817

> java.io.tmpdir setted as project build directory in surefire plugin
> ---
>
> Key: FLINK-5546
> URL: https://issues.apache.org/jira/browse/FLINK-5546
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
> Environment: CentOS 7.2
>Reporter: Syinchwun Leo
>Assignee: shijinkui
> Fix For: 1.2.1
>
>
> When multiple Linux users run tests at the same time, the flink-runtime module may 
> fail. User A creates /tmp/cacheFile, and User B will have no permission to 
> visit the folder.
> Failed tests: 
> FileCacheDeleteValidationTest.setup:79 Error initializing the test: 
> /tmp/cacheFile (Permission denied)
> Tests in error: 
> IOManagerTest.channelEnumerator:54 » Runtime Could not create storage 
> director...
> Tests run: 1385, Failures: 1, Errors: 1, Skipped: 8
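The resolution points to FLINK-5817; the approach named in this issue's title amounts to giving the forked test JVMs a per-build tmpdir instead of the shared /tmp. A hedged sketch of the relevant pom.xml fragment (the path under target/ is illustrative):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Each build gets its own tmpdir, so concurrent users do not collide in /tmp -->
    <argLine>-Djava.io.tmpdir=${project.build.directory}/tmp</argLine>
  </configuration>
</plugin>
```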





[jira] [Commented] (FLINK-5780) Extend ConfigOption with descriptions

2017-02-20 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15875418#comment-15875418
 ] 

shijinkui commented on FLINK-5780:
--

This just sounds like an extension of Apache Commons CLI: 
https://commons.apache.org/proper/commons-cli/
IMO, the Commons CLI style is the standard; I like it.
Is that so?

> Extend ConfigOption with descriptions
> -
>
> Key: FLINK-5780
> URL: https://issues.apache.org/jira/browse/FLINK-5780
> Project: Flink
>  Issue Type: Sub-task
>  Components: Core, Documentation
>Reporter: Ufuk Celebi
>
> The {{ConfigOption}} type is meant to replace the flat {{ConfigConstants}}. 
> As part of automating the generation of a docs config page we need to extend  
> {{ConfigOption}} with description fields.
> From the ML discussion, these could be:
> {code}
> void shortDescription(String);
> void longDescription(String);
> {code}
> In practice, the description string should contain HTML/Markdown.





[jira] [Updated] (FLINK-5860) Replace all the file creating from java.io.tmpdir with TemporaryFolder

2017-02-20 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5860:
-
Description: 
Search for `System.getProperty("java.io.tmpdir")` in the whole Flink project. It 
yields a list of unit tests. Replace all the file creation from `java.io.tmpdir` with 
TemporaryFolder.

Who can fix this problem thoroughly?

```

$ grep -ri 'System.getProperty("java.io.tmpdir")' .
./flink-connectors/flink-connector-cassandra/src/test/java/org/apache/flink/streaming/connectors/cassandra/example/CassandraTupleWriteAheadSinkExample.java:
env.setStateBackend(new FsStateBackend("file:///" + 
System.getProperty("java.io.tmpdir") + "/flink/backend"));
./flink-connectors/flink-connector-kafka-0.10/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaTestEnvironmentImpl.java:
  File tempDir = new File(System.getProperty("java.io.tmpdir"));
./flink-connectors/flink-connector-kafka-0.8/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaTestEnvironmentImpl.java:
   File tempDir = new File(System.getProperty("java.io.tmpdir"));
./flink-connectors/flink-connector-kafka-0.9/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaTestEnvironmentImpl.java:
   File tempDir = new File(System.getProperty("java.io.tmpdir"));
./flink-contrib/flink-statebackend-rocksdb/src/test/java/org/apache/flink/contrib/streaming/state/RocksDBStateBackendConfigTest.java:
   return getMockEnvironment(new File[] { new 
File(System.getProperty("java.io.tmpdir")) });
./flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java: 
public static final String DEFAULT_TASK_MANAGER_TMP_PATH = 
System.getProperty("java.io.tmpdir");
./flink-core/src/test/java/org/apache/flink/api/common/io/EnumerateNestedFilesTest.java:
final String tempPath = System.getProperty("java.io.tmpdir");
./flink-core/src/test/java/org/apache/flink/testutils/TestConfigUtils.java: 
final File tempDir = new File(System.getProperty("java.io.tmpdir"));
./flink-core/src/test/java/org/apache/flink/testutils/TestFileUtils.java:   
File tempDir = new File(System.getProperty("java.io.tmpdir"));
./flink-core/src/test/java/org/apache/flink/testutils/TestFileUtils.java:   
File tempDir = new File(System.getProperty("java.io.tmpdir"));
./flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/clustering/util/KMeansDataGenerator.java:
final String outDir = params.get("output", 
System.getProperty("java.io.tmpdir"));
./flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/ml/util/LinearRegressionDataGenerator.java:
  final String tmpDir = System.getProperty("java.io.tmpdir");
./flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/relational/util/WebLogDataGenerator.java:
final String outPath = System.getProperty("java.io.tmpdir");
./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
File out = new File(System.getProperty("java.io.tmpdir"), 
"jarcreatortest.jar");
./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
File out = new File(System.getProperty("java.io.tmpdir"), 
"jarcreatortest.jar");
./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
File out = new File(System.getProperty("java.io.tmpdir"), 
"jarcreatortest.jar");
./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
File out = new File(System.getProperty("java.io.tmpdir"), 
"jarcreatortest.jar");
./flink-libraries/flink-python/src/main/java/org/apache/flink/python/api/PythonPlanBinder.java:
 public static final String FLINK_PYTHON_FILE_PATH = 
System.getProperty("java.io.tmpdir") + File.separator + "flink_plan";
./flink-libraries/flink-python/src/main/java/org/apache/flink/python/api/PythonPlanBinder.java:
 public static final String FLINK_TMP_DATA_DIR = 
System.getProperty("java.io.tmpdir") + File.separator + "flink_data";
./flink-libraries/flink-python/src/main/java/org/apache/flink/python/api/PythonPlanBinder.java:
 FLINK_HDFS_PATH = "file:" + 
System.getProperty("java.io.tmpdir") + File.separator + "flink";
./flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobUtils.java: 
baseDir = new File(System.getProperty("java.io.tmpdir"));
./flink-runtime/src/main/java/org/apache/flink/runtime/util/EnvironmentInformation.java:
return System.getProperty("java.io.tmpdir");
./flink-runtime/src/main/java/org/apache/flink/runtime/zookeeper/FlinkZooKeeperQuorumPeer.java:
 System.getProperty("java.io.tmpdir"), 
UUID.randomUUID().toString());
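The replacement pattern the issue asks for is: never hard-code a file name under java.io.tmpdir; create a fresh, uniquely named directory per test instead. JUnit's TemporaryFolder rule does exactly that; here is a dependency-free Java sketch of the same idea (names are illustrative):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class UniqueTempDir {
    public static void main(String[] args) throws Exception {
        // Instead of: new File(System.getProperty("java.io.tmpdir"), "cacheFile"),
        // which collides across users, ask the JDK for a unique directory:
        Path dir = Files.createTempDirectory("flink-test-");
        Path cacheFile = Files.createFile(dir.resolve("cacheFile"));
        System.out.println(Files.exists(cacheFile));
        // Clean up, mirroring what JUnit's TemporaryFolder rule does after a test.
        Files.delete(cacheFile);
        Files.delete(dir);
    }
}
```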

[jira] [Created] (FLINK-5860) Replace all the file creating from java.io.tmpdir with TemporaryFolder

2017-02-20 Thread shijinkui (JIRA)
shijinkui created FLINK-5860:


 Summary: Replace all the file creating from java.io.tmpdir with 
TemporaryFolder
 Key: FLINK-5860
 URL: https://issues.apache.org/jira/browse/FLINK-5860
 Project: Flink
  Issue Type: Test
  Components: Tests
Reporter: shijinkui


Search for `System.getProperty("java.io.tmpdir")` in the whole Flink project. It yields 
a list of unit tests. Replace all the file creation from `java.io.tmpdir` with 
TemporaryFolder.

Who can fix this problem thoroughly?

```

$ grep -ri 'System.getProperty("java.io.tmpdir")' .
./flink-connectors/flink-connector-cassandra/src/test/java/org/apache/flink/streaming/connectors/cassandra/example/CassandraTupleWriteAheadSinkExample.java:
env.setStateBackend(new FsStateBackend("file:///" + 
System.getProperty("java.io.tmpdir") + "/flink/backend"));
./flink-connectors/flink-connector-kafka-0.10/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaTestEnvironmentImpl.java:
  File tempDir = new File(System.getProperty("java.io.tmpdir"));
./flink-connectors/flink-connector-kafka-0.8/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaTestEnvironmentImpl.java:
   File tempDir = new File(System.getProperty("java.io.tmpdir"));
./flink-connectors/flink-connector-kafka-0.9/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaTestEnvironmentImpl.java:
   File tempDir = new File(System.getProperty("java.io.tmpdir"));
./flink-contrib/flink-statebackend-rocksdb/src/test/java/org/apache/flink/contrib/streaming/state/RocksDBStateBackendConfigTest.java:
   return getMockEnvironment(new File[] { new 
File(System.getProperty("java.io.tmpdir")) });
./flink-core/src/main/java/org/apache/flink/configuration/ConfigConstants.java: 
public static final String DEFAULT_TASK_MANAGER_TMP_PATH = 
System.getProperty("java.io.tmpdir");
./flink-core/src/test/java/org/apache/flink/api/common/io/EnumerateNestedFilesTest.java:
final String tempPath = System.getProperty("java.io.tmpdir");
./flink-core/src/test/java/org/apache/flink/testutils/TestConfigUtils.java: 
final File tempDir = new File(System.getProperty("java.io.tmpdir"));
./flink-core/src/test/java/org/apache/flink/testutils/TestFileUtils.java:   
File tempDir = new File(System.getProperty("java.io.tmpdir"));
./flink-core/src/test/java/org/apache/flink/testutils/TestFileUtils.java:   
File tempDir = new File(System.getProperty("java.io.tmpdir"));
./flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/clustering/util/KMeansDataGenerator.java:
final String outDir = params.get("output", 
System.getProperty("java.io.tmpdir"));
./flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/ml/util/LinearRegressionDataGenerator.java:
  final String tmpDir = System.getProperty("java.io.tmpdir");
./flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/relational/util/WebLogDataGenerator.java:
final String outPath = System.getProperty("java.io.tmpdir");
./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
File out = new File(System.getProperty("java.io.tmpdir"), 
"jarcreatortest.jar");
./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
File out = new File(System.getProperty("java.io.tmpdir"), 
"jarcreatortest.jar");
./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
File out = new File(System.getProperty("java.io.tmpdir"), 
"jarcreatortest.jar");
./flink-java8/src/test/java/org/apache/flink/runtime/util/JarFileCreatorLambdaTest.java:
File out = new File(System.getProperty("java.io.tmpdir"), 
"jarcreatortest.jar");
./flink-libraries/flink-python/src/main/java/org/apache/flink/python/api/PythonPlanBinder.java:
 public static final String FLINK_PYTHON_FILE_PATH = 
System.getProperty("java.io.tmpdir") + File.separator + "flink_plan";
./flink-libraries/flink-python/src/main/java/org/apache/flink/python/api/PythonPlanBinder.java:
 public static final String FLINK_TMP_DATA_DIR = 
System.getProperty("java.io.tmpdir") + File.separator + "flink_data";
./flink-libraries/flink-python/src/main/java/org/apache/flink/python/api/PythonPlanBinder.java:
 FLINK_HDFS_PATH = "file:" + 
System.getProperty("java.io.tmpdir") + File.separator + "flink";
./flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobUtils.java: 
baseDir = new File(System.getProperty("java.io.tmpdir"));
./flink-runtime/src/main/java/org/apache/flink/runtime/util/EnvironmentInformation.java:
return System.getProperty("java.io.tmpdir");
./flink-runtime/src/main/java/org/apache/flink/runtime/zookeeper/FlinkZooKeeperQuorumPeer.java:
  

[jira] [Updated] (FLINK-5818) change checkpoint dir permission to 700 for security reason

2017-02-17 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5818:
-
Issue Type: Sub-task  (was: Improvement)
Parent: FLINK-5839

> change checkpoint dir permission to 700 for security reason
> ---
>
> Key: FLINK-5818
> URL: https://issues.apache.org/jira/browse/FLINK-5818
> Project: Flink
>  Issue Type: Sub-task
>  Components: Security, State Backends, Checkpointing
>Reporter: Tao Wang
>
> Currently the checkpoint directory is created without a specified permission, so 
> it is easy for another user to delete or read files under it, which can cause a 
> restore failure or an information leak.
> It's better to lower it down to 700.





[jira] [Updated] (FLINK-5546) java.io.tmpdir setted as project build directory in surefire plugin

2017-02-17 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5546:
-
Issue Type: Sub-task  (was: Test)
Parent: FLINK-5839

> java.io.tmpdir setted as project build directory in surefire plugin
> ---
>
> Key: FLINK-5546
> URL: https://issues.apache.org/jira/browse/FLINK-5546
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
> Environment: CentOS 7.2
>Reporter: Syinchwun Leo
>Assignee: shijinkui
> Fix For: 1.2.1
>
>
> When multiple Linux users run tests at the same time, the flink-runtime module may 
> fail. User A creates /tmp/cacheFile, and User B will have no permission to 
> visit the folder.
> Failed tests: 
> FileCacheDeleteValidationTest.setup:79 Error initializing the test: 
> /tmp/cacheFile (Permission denied)
> Tests in error: 
> IOManagerTest.channelEnumerator:54 » Runtime Could not create storage 
> director...
> Tests run: 1385, Failures: 1, Errors: 1, Skipped: 8





[jira] [Updated] (FLINK-5640) configure the explicit Unit Test file suffix

2017-02-17 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5640:
-
Issue Type: Sub-task  (was: Test)
Parent: FLINK-5839

> configure the explicit Unit Test file suffix
> 
>
> Key: FLINK-5640
> URL: https://issues.apache.org/jira/browse/FLINK-5640
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: shijinkui
>Assignee: shijinkui
> Fix For: 1.2.1
>
>
> There are four types of unit-test files: *ITCase.java, *Test.java, 
> *ITSuite.scala, *Suite.scala.
> A file name ending with "IT.java" is an integration test; a file name ending with 
> "Test.java" is a unit test.
> It makes things clear for the Surefire plugin's default-test execution to declare 
> that "*Test.*" is a Java unit test.
> The test file statistics below:
> * Suite  total: 10
> * ITCase  total: 378
> * Test  total: 1008
> * ITSuite  total: 14
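Declaring "*Test.*" as the unit-test pattern for Surefire's default-test execution could look like the following pom.xml fragment (a sketch; ITCase/ITSuite files would be bound to the failsafe plugin or a separate execution instead):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <includes>
      <!-- Only files matching *Test.* run as unit tests -->
      <include>**/*Test.*</include>
    </includes>
  </configuration>
</plugin>
```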





[jira] [Updated] (FLINK-5839) Flink Security problem collection

2017-02-17 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5839:
-
Summary: Flink Security problem collection  (was: Flink Security in 
Huawei's use case)

> Flink Security problem collection
> -
>
> Key: FLINK-5839
> URL: https://issues.apache.org/jira/browse/FLINK-5839
> Project: Flink
>  Issue Type: Improvement
>Reporter: shijinkui
>
> This issue collects security problems found in Huawei's use cases.





[jira] [Created] (FLINK-5839) Flink Security in Huawei's use case

2017-02-17 Thread shijinkui (JIRA)
shijinkui created FLINK-5839:


 Summary: Flink Security in Huawei's use case
 Key: FLINK-5839
 URL: https://issues.apache.org/jira/browse/FLINK-5839
 Project: Flink
  Issue Type: Improvement
Reporter: shijinkui


This issue collects security problems found in Huawei's use cases.





[jira] [Commented] (FLINK-5817) Fix test concurrent execution failure by test dir conflicts.

2017-02-16 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15871082#comment-15871082
 ] 

shijinkui commented on FLINK-5817:
--

[~StephanEwen] [~wenlong.lwl], there is no overlap between FLINK-5546 and 
FLINK-5817. In FLINK-5546, I focus on changing the default java.io.tmpdir system 
property.
I think this issue can find all the temporary directories created with new 
File and replace them with TemporaryFolder, following Stephan's suggestion.
If you go through all the *Test.* files and search for the keyword "new File(", 
you'll find there is a lot of bad smell that needs to be corrected.

> Fix test concurrent execution failure by test dir conflicts.
> 
>
> Key: FLINK-5817
> URL: https://issues.apache.org/jira/browse/FLINK-5817
> Project: Flink
>  Issue Type: Bug
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>
> Currently, when different users build Flink on the same machine, failures may 
> happen because some test utilities create test files using fixed names, 
> which causes file access to fail when different users process the same 
> file at the same time.
> We have found errors from AbstractTestBase, IOManagerTest, and FileCacheTest.





[jira] [Created] (FLINK-5806) TaskExecutionState toString format have wrong key

2017-02-15 Thread shijinkui (JIRA)
shijinkui created FLINK-5806:


 Summary: TaskExecutionState toString format have wrong key
 Key: FLINK-5806
 URL: https://issues.apache.org/jira/browse/FLINK-5806
 Project: Flink
  Issue Type: Bug
Reporter: shijinkui
Assignee: shijinkui


The second key in the format string should be executionId, not jobID:

public String toString() {
	return String.format("TaskState jobId=%s, jobID=%s, state=%s, error=%s",
		jobID, executionId, executionState,
		throwable == null ? "(null)" : throwable.toString());
}
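A corrected version of the format string, matching the report's point, could read as follows (a sketch; the field values below are made up for illustration):

```java
public class TaskStateToString {
    public static void main(String[] args) {
        String jobID = "job-1";
        String executionId = "exec-7";
        String executionState = "RUNNING";
        Throwable throwable = null;
        // "jobID=%s" mistakenly appeared twice; the second key should be executionId.
        String s = String.format("TaskState jobId=%s, executionId=%s, state=%s, error=%s",
                jobID, executionId, executionState,
                throwable == null ? "(null)" : throwable.toString());
        System.out.println(s);
    }
}
```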





[jira] [Updated] (FLINK-5650) Flink-python tests executing cost too long time

2017-02-10 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5650:
-
Description: 
When executing `mvn clean test` in flink-python, it waits more than half an hour 
after printing the console output below:
---
 T E S T S
---
Running org.apache.flink.python.api.PythonPlanBinderTest
log4j:WARN No appenders could be found for logger 
(org.apache.flink.python.api.PythonPlanBinderTest).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.



The stack below:
"main" prio=5 tid=0x7f8d7780b800 nid=0x1c03 waiting on condition 
[0x79fd8000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 
org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.startPython(PythonPlanStreamer.java:70)
at 
org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.open(PythonPlanStreamer.java:50)
at 
org.apache.flink.python.api.PythonPlanBinder.startPython(PythonPlanBinder.java:211)
at 
org.apache.flink.python.api.PythonPlanBinder.runPlan(PythonPlanBinder.java:141)
at 
org.apache.flink.python.api.PythonPlanBinder.main(PythonPlanBinder.java:114)
at 
org.apache.flink.python.api.PythonPlanBinderTest.testProgram(PythonPlanBinderTest.java:83)
at 
org.apache.flink.test.util.JavaProgramTestBase.testJobWithoutObjectReuse(JavaProgramTestBase.java:174)

this is the jstack:
https://gist.github.com/shijinkui/af47e8bc6c9f748336bf52efd3df94b0

  was:
When execute `mvn clean test` in flink-python, it will wait more than half hour 
after the console output below:
---
 T E S T S
---
Running org.apache.flink.python.api.PythonPlanBinderTest
log4j:WARN No appenders could be found for logger 
(org.apache.flink.python.api.PythonPlanBinderTest).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.



The stack below:
"main" prio=5 tid=0x7f8d7780b800 nid=0x1c03 waiting on condition 
[0x79fd8000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 
org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.startPython(PythonPlanStreamer.java:70)
at 
org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.open(PythonPlanStreamer.java:50)
at 
org.apache.flink.python.api.PythonPlanBinder.startPython(PythonPlanBinder.java:211)
at 
org.apache.flink.python.api.PythonPlanBinder.runPlan(PythonPlanBinder.java:141)
at 
org.apache.flink.python.api.PythonPlanBinder.main(PythonPlanBinder.java:114)
at 
org.apache.flink.python.api.PythonPlanBinderTest.testProgram(PythonPlanBinderTest.java:83)
at 
org.apache.flink.test.util.JavaProgramTestBase.testJobWithoutObjectReuse(JavaProgramTestBase.java:174)


> Flink-python tests executing cost too long time
> ---
>
> Key: FLINK-5650
> URL: https://issues.apache.org/jira/browse/FLINK-5650
> Project: Flink
>  Issue Type: Bug
>  Components: Python API, Tests
>Affects Versions: 1.2.0
>Reporter: shijinkui
>Priority: Critical
> Fix For: 1.2.1
>
>
> When executing `mvn clean test` in flink-python, it waits more than half an 
> hour after printing the console output below:
> ---
>  T E S T S
> ---
> Running org.apache.flink.python.api.PythonPlanBinderTest
> log4j:WARN No appenders could be found for logger 
> (org.apache.flink.python.api.PythonPlanBinderTest).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> The stack below:
> "main" prio=5 tid=0x7f8d7780b800 nid=0x1c03 waiting on condition 
> [0x79fd8000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.startPython(PythonPlanStreamer.java:70)
>   at 
> org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.open(PythonPlanStreamer.java:50)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.startPython(PythonPlanBinder.java:211)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.runPlan(PythonPlanBinder.java:141)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.main(PythonPlanBinder.java:114)
>   at 
> 

[jira] [Updated] (FLINK-5650) Flink-python tests executing cost too long time

2017-02-10 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5650:
-
Summary: Flink-python tests executing cost too long time  (was: 
Flink-python tests can time out)

> Flink-python tests executing cost too long time
> ---
>
> Key: FLINK-5650
> URL: https://issues.apache.org/jira/browse/FLINK-5650
> Project: Flink
>  Issue Type: Bug
>  Components: Python API, Tests
>Affects Versions: 1.2.0
>Reporter: shijinkui
>Priority: Critical
> Fix For: 1.2.1
>
>
> When executing `mvn clean test` in flink-python, it waits more than half an 
> hour after printing the console output below:
> ---
>  T E S T S
> ---
> Running org.apache.flink.python.api.PythonPlanBinderTest
> log4j:WARN No appenders could be found for logger 
> (org.apache.flink.python.api.PythonPlanBinderTest).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> The stack below:
> "main" prio=5 tid=0x7f8d7780b800 nid=0x1c03 waiting on condition 
> [0x79fd8000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.startPython(PythonPlanStreamer.java:70)
>   at 
> org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.open(PythonPlanStreamer.java:50)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.startPython(PythonPlanBinder.java:211)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.runPlan(PythonPlanBinder.java:141)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.main(PythonPlanBinder.java:114)
>   at 
> org.apache.flink.python.api.PythonPlanBinderTest.testProgram(PythonPlanBinderTest.java:83)
>   at 
> org.apache.flink.test.util.JavaProgramTestBase.testJobWithoutObjectReuse(JavaProgramTestBase.java:174)





[jira] [Commented] (FLINK-4252) Table program cannot be compiled

2017-02-09 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860886#comment-15860886
 ] 

shijinkui commented on FLINK-4252:
--

org.apache.flink.table.examples.java.WordCountTable.java also cannot be executed. 
I'll raise a new issue.

```
Table program cannot be compiled. This is a bug. Please file an issue.
org.apache.flink.table.codegen.Compiler$class.compile(Compiler.scala:36)
```

> Table program cannot be compiled
> 
>
> Key: FLINK-4252
> URL: https://issues.apache.org/jira/browse/FLINK-4252
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.1.0
> Environment: OS X EI Captain
> scala 2.11.7
> jdk 8
>Reporter: Renkai Ge
>Assignee: Timo Walther
> Fix For: 1.2.0
>
> Attachments: TestMain.scala
>
>
> I'm trying the Table APIs.
> I got some errors like this.
> My code is in the attachments.
> 
>  The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: The program 
> execution failed: Job execution failed.
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:413)
>   at 
> org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:92)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:389)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:376)
>   at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:61)
>   at 
> org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:896)
>   at org.apache.flink.api.java.DataSet.collect(DataSet.java:410)
>   at org.apache.flink.api.java.DataSet.print(DataSet.java:1605)
>   at org.apache.flink.api.scala.DataSet.print(DataSet.scala:1672)
>   at TestMain$.main(TestMain.scala:31)
>   at TestMain.main(TestMain.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:509)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:403)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:331)
>   at 
> org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:777)
>   at org.apache.flink.client.CliFrontend.run(CliFrontend.java:253)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1005)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1048)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply$mcV$sp(JobManager.scala:853)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:799)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$7.apply(JobManager.scala:799)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.Exception: The user defined 'open(Configuration)' method 
> in class org.apache.flink.api.table.runtime.FlatMapRunner caused an 
> exception: Table program cannot be compiled. This is a bug. Please file an 
> issue.
>   at 
> org.apache.flink.runtime.operators.BatchTask.openUserCode(BatchTask.java:1337)
>   at 
> org.apache.flink.runtime.operators.chaining.ChainedFlatMapDriver.openTask(ChainedFlatMapDriver.java:47)
>   at 
> 

[jira] [Updated] (FLINK-5546) java.io.tmpdir set as project build directory in surefire plugin

2017-02-09 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5546:
-
Affects Version/s: (was: 1.3.0)
   (was: 1.2.0)

> java.io.tmpdir set as project build directory in surefire plugin
> ---
>
> Key: FLINK-5546
> URL: https://issues.apache.org/jira/browse/FLINK-5546
> Project: Flink
>  Issue Type: Test
>  Components: Build System
> Environment: CentOS 7.2
>Reporter: Syinchwun Leo
>Assignee: shijinkui
> Fix For: 1.2.1
>
>
> When multiple Linux users run tests at the same time, the flink-runtime module 
> may fail: User A creates /tmp/cacheFile, and User B then has no permission to 
> access that folder.
> Failed tests: 
> FileCacheDeleteValidationTest.setup:79 Error initializing the test: 
> /tmp/cacheFile (Permission denied)
> Tests in error: 
> IOManagerTest.channelEnumerator:54 » Runtime Could not create storage 
> director...
> Tests run: 1385, Failures: 1, Errors: 1, Skipped: 8
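The failure mode above can be avoided by deriving test scratch space from java.io.tmpdir rather than a hard-coded /tmp path. A minimal JDK sketch; the surefire property mentioned in the comment is an assumed configuration, not the actual patch:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class TmpDirSketch {
    public static void main(String[] args) throws IOException {
        // Instead of a fixed, shared /tmp/cacheFile, create a unique directory
        // under java.io.tmpdir. The surefire plugin can then point java.io.tmpdir
        // at the per-project build directory (assumed configuration:
        // -Djava.io.tmpdir=${project.build.directory}), so concurrent users
        // never collide on a shared path.
        File base = new File(System.getProperty("java.io.tmpdir"));
        File cacheDir = Files.createTempDirectory(base.toPath(), "flink-cache-").toFile();
        System.out.println(cacheDir.exists()); // a fresh, user-owned directory
        cacheDir.delete();
    }
}
```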



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (FLINK-5546) java.io.tmpdir set as project build directory in surefire plugin

2017-02-09 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5546:
-
Fix Version/s: 1.2.1

> java.io.tmpdir set as project build directory in surefire plugin
> ---
>
> Key: FLINK-5546
> URL: https://issues.apache.org/jira/browse/FLINK-5546
> Project: Flink
>  Issue Type: Test
>  Components: Build System
> Environment: CentOS 7.2
>Reporter: Syinchwun Leo
>Assignee: shijinkui
> Fix For: 1.2.1
>
>
> When multiple Linux users run tests at the same time, the flink-runtime module 
> may fail: User A creates /tmp/cacheFile, and User B then has no permission to 
> access that folder.
> Failed tests: 
> FileCacheDeleteValidationTest.setup:79 Error initializing the test: 
> /tmp/cacheFile (Permission denied)
> Tests in error: 
> IOManagerTest.channelEnumerator:54 » Runtime Could not create storage 
> director...
> Tests run: 1385, Failures: 1, Errors: 1, Skipped: 8





[jira] [Assigned] (FLINK-4562) table examples make a separate module in flink-examples

2017-02-09 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui reassigned FLINK-4562:


Assignee: shijinkui

> table examples make a separate module in flink-examples
> ---
>
> Key: FLINK-4562
> URL: https://issues.apache.org/jira/browse/FLINK-4562
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples, Table API & SQL
>Reporter: shijinkui
>Assignee: shijinkui
> Fix For: 1.2.1
>
>
> Example code shouldn't be packaged in the table module.





[jira] [Updated] (FLINK-5217) Deprecated interface Checkpointed make clear suggestion

2017-02-09 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5217:
-
Fix Version/s: 1.2.1
  Description: 
package org.apache.flink.streaming.api.checkpoint;
@Deprecated
@PublicEvolving
public interface Checkpointed extends 
CheckpointedRestoring

This interface should clearly state in which version it will be removed and 
which interface should be used instead.

  was:

package org.apache.flink.streaming.api.checkpoint;
@Deprecated
@PublicEvolving
public interface Checkpointed extends 
CheckpointedRestoring

This interface should clearly state in which version it will be removed and 
which interface should be used instead.


> Deprecated interface Checkpointed make clear suggestion
> ---
>
> Key: FLINK-5217
> URL: https://issues.apache.org/jira/browse/FLINK-5217
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Reporter: shijinkui
> Fix For: 1.2.1
>
>
> package org.apache.flink.streaming.api.checkpoint;
> @Deprecated
> @PublicEvolving
> public interface Checkpointed extends 
> CheckpointedRestoring
> This interface should clearly state in which version it will be removed and 
> which interface should be used instead.
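A sketch of what such a deprecation note could look like. CheckpointedFunction is assumed here to be the intended replacement, and the stub interfaces exist only to keep the example self-contained:

```java
public class DeprecationSketch {
    // Stubs standing in for the real Flink interfaces.
    interface CheckpointedRestoring<T> {}
    interface CheckpointedFunction {}

    /**
     * @deprecated Deprecated since 1.2 and planned for removal in a later
     *             release; implement {@link CheckpointedFunction} instead
     *             (assumed replacement for this sketch).
     */
    @Deprecated
    interface Checkpointed<T> extends CheckpointedRestoring<T> {}

    public static void main(String[] args) {
        // @Deprecated has runtime retention, so tooling and users can see it.
        System.out.println(Checkpointed.class.isAnnotationPresent(Deprecated.class));
    }
}
```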





[jira] [Updated] (FLINK-4562) table examples make a separate module in flink-examples

2017-02-09 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-4562:
-
Fix Version/s: 1.2.1

> table examples make a separate module in flink-examples
> ---
>
> Key: FLINK-4562
> URL: https://issues.apache.org/jira/browse/FLINK-4562
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples, Table API & SQL
>Reporter: shijinkui
>Assignee: shijinkui
> Fix For: 1.2.1
>
>
> Example code shouldn't be packaged in the table module.





[jira] [Updated] (FLINK-5640) configure the explicit Unit Test file suffix

2017-02-09 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5640:
-
Affects Version/s: (was: 1.2.1)
Fix Version/s: 1.2.1

> configure the explicit Unit Test file suffix
> 
>
> Key: FLINK-5640
> URL: https://issues.apache.org/jira/browse/FLINK-5640
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: shijinkui
>Assignee: shijinkui
> Fix For: 1.2.1
>
>
> There are four types of unit test file: *ITCase.java, *Test.java, 
> *ITSuite.scala, and *Suite.scala.
> File names ending with "IT.java" are integration tests; file names ending with 
> "Test.java" are unit tests.
> This makes it unambiguous for the Surefire plugin's default-test execution to 
> declare "*Test.*" as the Java unit test pattern.
> The test file statistics below:
> * Suite  total: 10
> * ITCase  total: 378
> * Test  total: 1008
> * ITSuite  total: 14
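The suffix convention can be expressed as a simple filename check. The regex below is an assumed translation of the Surefire glob "*Test.*", shown only to illustrate how the pattern separates the two kinds of test files:

```java
public class TestSuffixSketch {
    // A filename matches the unit-test convention if it contains "Test."
    // immediately before its extension part, mirroring the glob "*Test.*".
    static boolean isUnitTest(String fileName) {
        return fileName.matches(".*Test\\..*");
    }

    public static void main(String[] args) {
        System.out.println(isUnitTest("FileInputFormatTest.java"));     // matches
        System.out.println(isUnitTest("PythonPlanBinderITCase.java"));  // does not
    }
}
```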





[jira] [Updated] (FLINK-5650) Flink-python tests can time out

2017-02-09 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5650:
-
Affects Version/s: 1.2.0
 Priority: Critical  (was: Major)
Fix Version/s: 1.2.1

> Flink-python tests can time out
> ---
>
> Key: FLINK-5650
> URL: https://issues.apache.org/jira/browse/FLINK-5650
> Project: Flink
>  Issue Type: Bug
>  Components: Python API, Tests
>Affects Versions: 1.2.0
>Reporter: shijinkui
>Priority: Critical
> Fix For: 1.2.1
>
>
> When executing `mvn clean test` in flink-python, it hangs for more than half an 
> hour after the console output below:
> ---
>  T E S T S
> ---
> Running org.apache.flink.python.api.PythonPlanBinderTest
> log4j:WARN No appenders could be found for logger 
> (org.apache.flink.python.api.PythonPlanBinderTest).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
> info.
> The stack below:
> "main" prio=5 tid=0x7f8d7780b800 nid=0x1c03 waiting on condition 
> [0x79fd8000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.startPython(PythonPlanStreamer.java:70)
>   at 
> org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.open(PythonPlanStreamer.java:50)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.startPython(PythonPlanBinder.java:211)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.runPlan(PythonPlanBinder.java:141)
>   at 
> org.apache.flink.python.api.PythonPlanBinder.main(PythonPlanBinder.java:114)
>   at 
> org.apache.flink.python.api.PythonPlanBinderTest.testProgram(PythonPlanBinderTest.java:83)
>   at 
> org.apache.flink.test.util.JavaProgramTestBase.testJobWithoutObjectReuse(JavaProgramTestBase.java:174)





[jira] [Updated] (FLINK-5705) webmonitor's request/response use UTF-8 explicitly

2017-02-09 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5705:
-
Fix Version/s: 1.2.1

> webmonitor's request/response use UTF-8 explicitly
> --
>
> Key: FLINK-5705
> URL: https://issues.apache.org/jira/browse/FLINK-5705
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: shijinkui
>Assignee: shijinkui
> Fix For: 1.2.1
>
>
> QueryStringDecoder and HttpPostRequestDecoder should use the UTF-8 charset 
> defined in Flink, and responses should set the content-encoding header to UTF-8.





[jira] [Assigned] (FLINK-5705) webmonitor's request/response use UTF-8 explicitly

2017-02-09 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui reassigned FLINK-5705:


Assignee: shijinkui

> webmonitor's request/response use UTF-8 explicitly
> --
>
> Key: FLINK-5705
> URL: https://issues.apache.org/jira/browse/FLINK-5705
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Reporter: shijinkui
>Assignee: shijinkui
>
> QueryStringDecoder and HttpPostRequestDecoder should use the UTF-8 charset 
> defined in Flink, and responses should set the content-encoding header to UTF-8.





[jira] [Commented] (FLINK-5754) released tag missing .gitignore .travis.yml .gitattributes

2017-02-09 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860629#comment-15860629
 ] 

shijinkui commented on FLINK-5754:
--

I need to check out a branch from the 1.2.0 tag, work on that branch going 
forward, and then merge commits from master or cherry-pick some commits back to 
master.

> released tag missing .gitignore  .travis.yml .gitattributes
> 
>
> Key: FLINK-5754
> URL: https://issues.apache.org/jira/browse/FLINK-5754
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: shijinkui
>
> The released tag is missing .gitignore, .travis.yml and .gitattributes.
> When making a release, only the version number should change.
> For example: https://github.com/apache/spark/tree/v2.1.0





[jira] [Created] (FLINK-5754) released tag missing .gitignore .travis.yml .gitattributes

2017-02-09 Thread shijinkui (JIRA)
shijinkui created FLINK-5754:


 Summary: released tag missing .gitignore  .travis.yml 
.gitattributes
 Key: FLINK-5754
 URL: https://issues.apache.org/jira/browse/FLINK-5754
 Project: Flink
  Issue Type: Bug
  Components: Build System
Reporter: shijinkui


The released tag is missing .gitignore, .travis.yml and .gitattributes.
When making a release, only the version number should change.
For example: https://github.com/apache/spark/tree/v2.1.0






[jira] [Created] (FLINK-5705) webmonitor's request/response use UTF-8 explicitly

2017-02-03 Thread shijinkui (JIRA)
shijinkui created FLINK-5705:


 Summary: webmonitor's request/response use UTF-8 explicitly
 Key: FLINK-5705
 URL: https://issues.apache.org/jira/browse/FLINK-5705
 Project: Flink
  Issue Type: Improvement
  Components: Webfrontend
Reporter: shijinkui


QueryStringDecoder and HttpPostRequestDecoder should use the UTF-8 charset 
defined in Flink.

Responses should set the content-encoding header to UTF-8.
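The real change would pass the charset into Netty's QueryStringDecoder and HttpPostRequestDecoder; this JDK-only sketch (with a made-up query value) just shows why naming the charset explicitly matters:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public class Utf8DecodeSketch {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // "%E4%B8%AD%E6%96%87" is the UTF-8 percent-encoding of two CJK characters.
        String query = "name=%E4%B8%AD%E6%96%87";
        String value = query.substring(query.indexOf('=') + 1);
        // Passing the charset name explicitly avoids depending on the platform
        // default encoding, which may not be UTF-8.
        System.out.println(URLDecoder.decode(value, "UTF-8"));
    }
}
```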





[jira] [Closed] (FLINK-5166) TextInputFormatTest.testNestedFileRead

2017-02-02 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui closed FLINK-5166.

Resolution: Won't Fix

> TextInputFormatTest.testNestedFileRead
> --
>
> Key: FLINK-5166
> URL: https://issues.apache.org/jira/browse/FLINK-5166
> Project: Flink
>  Issue Type: Bug
>  Components: Batch Connectors and Input/Output Formats, Tests
>Reporter: shijinkui
>
> `mvn clean package -P \!scala-2.11,scala-2.11  -U`
> Failed tests:
>   TextInputFormatTest.testNestedFileRead:140 Test erroneous
> Tests run: 846, Failures: 1, Errors: 0, Skipped: 0





[jira] [Created] (FLINK-5650) flink-python unit test costs more than half an hour

2017-01-25 Thread shijinkui (JIRA)
shijinkui created FLINK-5650:


 Summary: flink-python unit test costs more than half an hour
 Key: FLINK-5650
 URL: https://issues.apache.org/jira/browse/FLINK-5650
 Project: Flink
  Issue Type: Bug
  Components: Python API
Reporter: shijinkui


When executing `mvn clean test` in flink-python, it hangs for more than half an 
hour after the console output below:
---
 T E S T S
---
Running org.apache.flink.python.api.PythonPlanBinderTest
log4j:WARN No appenders could be found for logger 
(org.apache.flink.python.api.PythonPlanBinderTest).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.



The stack below:
"main" prio=5 tid=0x7f8d7780b800 nid=0x1c03 waiting on condition 
[0x79fd8000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at 
org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.startPython(PythonPlanStreamer.java:70)
at 
org.apache.flink.python.api.streaming.plan.PythonPlanStreamer.open(PythonPlanStreamer.java:50)
at 
org.apache.flink.python.api.PythonPlanBinder.startPython(PythonPlanBinder.java:211)
at 
org.apache.flink.python.api.PythonPlanBinder.runPlan(PythonPlanBinder.java:141)
at 
org.apache.flink.python.api.PythonPlanBinder.main(PythonPlanBinder.java:114)
at 
org.apache.flink.python.api.PythonPlanBinderTest.testProgram(PythonPlanBinderTest.java:83)
at 
org.apache.flink.test.util.JavaProgramTestBase.testJobWithoutObjectReuse(JavaProgramTestBase.java:174)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5640) configure the explicit Unit Test file suffix

2017-01-25 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5640:
-
Affects Version/s: 1.2.1

> configure the explicit Unit Test file suffix
> 
>
> Key: FLINK-5640
> URL: https://issues.apache.org/jira/browse/FLINK-5640
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 1.2.1
>Reporter: shijinkui
>Assignee: shijinkui
>
> There are four types of unit test file: *ITCase.java, *Test.java, 
> *ITSuite.scala, and *Suite.scala.
> File names ending with "IT.java" are integration tests; file names ending with 
> "Test.java" are unit tests.
> This makes it unambiguous for the Surefire plugin's default-test execution to 
> declare "*Test.*" as the Java unit test pattern.
> The test file statistics below:
> * Suite  total: 10
> * ITCase  total: 378
> * Test  total: 1008
> * ITSuite  total: 14





[jira] [Updated] (FLINK-5572) ListState in SlidingEventTimeWindow and SlidingProcessingTimeWindow optimization

2017-01-25 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5572:
-
Affects Version/s: (was: 1.1.4)
   (was: 1.2.0)
   1.2.1

> ListState in SlidingEventTimeWindow and SlidingProcessingTimeWindow 
> optimization
> 
>
> Key: FLINK-5572
> URL: https://issues.apache.org/jira/browse/FLINK-5572
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.2.1
> Environment: CentOS 7.2
>Reporter: Syinchwun Leo
>
> When using ListState in SlidingEventTimeWindow and 
> SlidingProcessingTimeWindow, an element may be assigned to multiple 
> overlapping windows, which can consume a lot of storage. For example, with 
> window(SlidingEventTimeWindows.of(Time.seconds(10), 
> Time.seconds(2))).apply(UDF window function), each element is assigned to 5 
> windows. When the ratio size/slide is very large, this becomes unacceptable.
> We plan to make a small optimization; the design doc is at 
> https://docs.google.com/document/d/1HCt1Si3YNGFwsl2H5SO0f7WD69DdBBPVJA6abd3oFWo/edit?usp=sharing
> Comments?





[jira] [Commented] (FLINK-5572) ListState in SlidingEventTimeWindow and SlidingProcessingTimeWindow optimization

2017-01-25 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15837925#comment-15837925
 ] 

shijinkui commented on FLINK-5572:
--

IMO, this proposal will save several times the memory. Syinchwun Leo has a 
demo implementation, and the results look very good.
I think we can improve ListState based on everybody's suggestions.

> ListState in SlidingEventTimeWindow and SlidingProcessingTimeWindow 
> optimization
> 
>
> Key: FLINK-5572
> URL: https://issues.apache.org/jira/browse/FLINK-5572
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.2.0, 1.1.4
> Environment: CentOS 7.2
>Reporter: Syinchwun Leo
>
> When using ListState in SlidingEventTimeWindow and 
> SlidingProcessingTimeWindow, an element may be assigned to multiple 
> overlapping windows, which can consume a lot of storage. For example, with 
> window(SlidingEventTimeWindows.of(Time.seconds(10), 
> Time.seconds(2))).apply(UDF window function), each element is assigned to 5 
> windows. When the ratio size/slide is very large, this becomes unacceptable.
> We plan to make a small optimization; the design doc is at 
> https://docs.google.com/document/d/1HCt1Si3YNGFwsl2H5SO0f7WD69DdBBPVJA6abd3oFWo/edit?usp=sharing
> Comments?
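The factor of 5 in the example follows directly from the window-start arithmetic. A self-contained sketch of that arithmetic (not Flink's actual assigner code):

```java
import java.util.ArrayList;
import java.util.List;

public class SlidingWindowSketch {
    // Returns the start timestamps of all sliding windows containing the element.
    static List<Long> windowStarts(long timestamp, long size, long slide) {
        List<Long> starts = new ArrayList<>();
        long lastStart = timestamp - (timestamp % slide);
        for (long start = lastStart; start > timestamp - size; start -= slide) {
            starts.add(start);
        }
        return starts;
    }

    public static void main(String[] args) {
        // size = 10s, slide = 2s: every element lands in size/slide = 5 windows,
        // so a per-window ListState stores 5 copies of each element.
        System.out.println(windowStarts(11_000L, 10_000L, 2_000L).size());
    }
}
```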





[jira] [Updated] (FLINK-5640) configure the explicit Unit Test file suffix

2017-01-25 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5640:
-
Description: 
There are four types of unit test file: *ITCase.java, *Test.java, 
*ITSuite.scala, and *Suite.scala.
File names ending with "IT.java" are integration tests; file names ending with 
"Test.java" are unit tests.

This makes it unambiguous for the Surefire plugin's default-test execution to 
declare "*Test.*" as the Java unit test pattern.

The test file statistics below:
* Suite  total: 10
* ITCase  total: 378
* Test  total: 1008
* ITSuite  total: 14

  was:

There are four types of unit test file: *ITCase.java, *Test.java, 
*ITSuite.scala, and *Suite.scala.
File names ending with "IT.java" are integration tests; file names ending with 
"Test.java" are unit tests.

This makes it unambiguous for the Surefire plugin's default-test execution to 
declare "*Test.*" as the Java unit test pattern.

* Suite  total: 10
* ITCase  total: 378
* Test  total: 1008
* ITSuite  total: 14


> configure the explicit Unit Test file suffix
> 
>
> Key: FLINK-5640
> URL: https://issues.apache.org/jira/browse/FLINK-5640
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: shijinkui
>Assignee: shijinkui
>
> There are four types of unit test file: *ITCase.java, *Test.java, 
> *ITSuite.scala, and *Suite.scala.
> File names ending with "IT.java" are integration tests; file names ending with 
> "Test.java" are unit tests.
> This makes it unambiguous for the Surefire plugin's default-test execution to 
> declare "*Test.*" as the Java unit test pattern.
> The test file statistics below:
> * Suite  total: 10
> * ITCase  total: 378
> * Test  total: 1008
> * ITSuite  total: 14





[jira] [Created] (FLINK-5640) configure the explicit Unit Test file suffix

2017-01-25 Thread shijinkui (JIRA)
shijinkui created FLINK-5640:


 Summary: configure the explicit Unit Test file suffix
 Key: FLINK-5640
 URL: https://issues.apache.org/jira/browse/FLINK-5640
 Project: Flink
  Issue Type: Test
  Components: Tests
Reporter: shijinkui
Assignee: shijinkui



There are four types of unit test file: *ITCase.java, *Test.java, 
*ITSuite.scala, and *Suite.scala.
File names ending with "IT.java" are integration tests; file names ending with 
"Test.java" are unit tests.

This makes it unambiguous for the Surefire plugin's default-test execution to 
declare "*Test.*" as the Java unit test pattern.

* Suite  total: 10
* ITCase  total: 378
* Test  total: 1008
* ITSuite  total: 14





[jira] [Updated] (FLINK-5546) java.io.tmpdir set as project build directory in surefire plugin

2017-01-22 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5546:
-
Summary: java.io.tmpdir set as project build directory in surefire 
plugin  (was: When multiple users run test, /tmp/cacheFile conflicts)

> java.io.tmpdir set as project build directory in surefire plugin
> ---
>
> Key: FLINK-5546
> URL: https://issues.apache.org/jira/browse/FLINK-5546
> Project: Flink
>  Issue Type: Test
>Affects Versions: 1.2.0, 1.3.0
> Environment: CentOS 7.2
>Reporter: Syinchwun Leo
>Assignee: shijinkui
>
> When multiple Linux users run tests at the same time, the flink-runtime module 
> may fail: User A creates /tmp/cacheFile, and User B then has no permission to 
> access that folder.
> Failed tests: 
> FileCacheDeleteValidationTest.setup:79 Error initializing the test: 
> /tmp/cacheFile (Permission denied)
> Tests in error: 
> IOManagerTest.channelEnumerator:54 » Runtime Could not create storage 
> director...
> Tests run: 1385, Failures: 1, Errors: 1, Skipped: 8





[jira] [Assigned] (FLINK-5546) When multiple users run test, /tmp/cacheFile conflicts

2017-01-22 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui reassigned FLINK-5546:


Assignee: shijinkui

> When multiple users run test, /tmp/cacheFile conflicts
> --
>
> Key: FLINK-5546
> URL: https://issues.apache.org/jira/browse/FLINK-5546
> Project: Flink
>  Issue Type: Test
>Affects Versions: 1.2.0, 1.3.0
> Environment: CentOS 7.2
>Reporter: Syinchwun Leo
>Assignee: shijinkui
>
> When multiple Linux users run tests at the same time, the flink-runtime module 
> may fail: User A creates /tmp/cacheFile, and User B then has no permission to 
> access that folder.
> Failed tests: 
> FileCacheDeleteValidationTest.setup:79 Error initializing the test: 
> /tmp/cacheFile (Permission denied)
> Tests in error: 
> IOManagerTest.channelEnumerator:54 » Runtime Could not create storage 
> director...
> Tests run: 1385, Failures: 1, Errors: 1, Skipped: 8





[jira] [Updated] (FLINK-5546) When multiple users run test, /tmp/cacheFile conflicts

2017-01-22 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5546:
-
Assignee: (was: shijinkui)

> When multiple users run test, /tmp/cacheFile conflicts
> --
>
> Key: FLINK-5546
> URL: https://issues.apache.org/jira/browse/FLINK-5546
> Project: Flink
>  Issue Type: Test
>Affects Versions: 1.2.0, 1.3.0
> Environment: CentOS 7.2
>Reporter: Syinchwun Leo
>
> When multiple Linux users run tests at the same time, the flink-runtime module 
> may fail: User A creates /tmp/cacheFile, and User B then has no permission to 
> access that folder.
> Failed tests: 
> FileCacheDeleteValidationTest.setup:79 Error initializing the test: 
> /tmp/cacheFile (Permission denied)
> Tests in error: 
> IOManagerTest.channelEnumerator:54 » Runtime Could not create storage 
> director...
> Tests run: 1385, Failures: 1, Errors: 1, Skipped: 8





[jira] [Assigned] (FLINK-5546) When multiple users run test, /tmp/cacheFile conflicts

2017-01-22 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui reassigned FLINK-5546:


Assignee: shijinkui

> When multiple users run test, /tmp/cacheFile conflicts
> --
>
> Key: FLINK-5546
> URL: https://issues.apache.org/jira/browse/FLINK-5546
> Project: Flink
>  Issue Type: Test
>Affects Versions: 1.2.0, 1.3.0
> Environment: CentOS 7.2
>Reporter: Syinchwun Leo
>Assignee: shijinkui
>
> When multiple Linux users run tests at the same time, the flink-runtime module 
> may fail: User A creates /tmp/cacheFile, and User B then has no permission to 
> access that folder.
> Failed tests: 
> FileCacheDeleteValidationTest.setup:79 Error initializing the test: 
> /tmp/cacheFile (Permission denied)
> Tests in error: 
> IOManagerTest.channelEnumerator:54 » Runtime Could not create storage 
> director...
> Tests run: 1385, Failures: 1, Errors: 1, Skipped: 8





[jira] [Created] (FLINK-5543) customCommandLine tips in CliFrontend

2017-01-17 Thread shijinkui (JIRA)
shijinkui created FLINK-5543:


 Summary: customCommandLine tips in CliFrontend
 Key: FLINK-5543
 URL: https://issues.apache.org/jira/browse/FLINK-5543
 Project: Flink
  Issue Type: Improvement
  Components: Client
Reporter: shijinkui


Tip: DefaultCLI must be added at the end, because 
getActiveCustomCommandLine(..) checks the registered CustomCommandLines in 
order, and DefaultCLI's isActive always returns true.
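The ordering constraint can be illustrated with a first-active-wins lookup; the interface and helper below are simplified stand-ins for Flink's actual classes:

```java
import java.util.Arrays;
import java.util.List;

public class CliOrderSketch {
    interface CustomCommandLine { boolean isActive(); String name(); }

    static CustomCommandLine simple(String name, boolean active) {
        return new CustomCommandLine() {
            public boolean isActive() { return active; }
            public String name() { return name; }
        };
    }

    // The first active command line wins, so a catch-all entry whose
    // isActive() always returns true must be registered last.
    static CustomCommandLine getActiveCustomCommandLine(List<CustomCommandLine> all) {
        for (CustomCommandLine c : all) {
            if (c.isActive()) {
                return c;
            }
        }
        throw new IllegalStateException("no active command line");
    }

    public static void main(String[] args) {
        List<CustomCommandLine> ordered = Arrays.asList(
                simple("yarn", false),     // hypothetical: not running on YARN
                simple("default", true));  // DefaultCLI: always active
        System.out.println(getActiveCustomCommandLine(ordered).name());
    }
}
```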





[jira] [Created] (FLINK-5519) scala-maven-plugin version all change to 3.2.2

2017-01-16 Thread shijinkui (JIRA)
shijinkui created FLINK-5519:


 Summary: scala-maven-plugin version all change to 3.2.2
 Key: FLINK-5519
 URL: https://issues.apache.org/jira/browse/FLINK-5519
 Project: Flink
  Issue Type: Improvement
  Components: Build System
Reporter: shijinkui


1. Change the scala-maven-plugin version to 3.2.2 in all modules.
2. Change the parent POM version from apache-14 to apache-18.





[jira] [Closed] (FLINK-4519) scala maxLineLength increased to 120

2017-01-08 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui closed FLINK-4519.

Resolution: Won't Fix

> scala maxLineLength increased to 120  
> --
>
> Key: FLINK-4519
> URL: https://issues.apache.org/jira/browse/FLINK-4519
> Project: Flink
>  Issue Type: Improvement
>Reporter: shijinkui
>
> `tools/maven/scalastyle-config.xml`





[jira] [Commented] (FLINK-5370) build failure for unit test of FileInputFormatTest and GlobFilePathFilterTest

2017-01-08 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15810436#comment-15810436
 ] 

shijinkui commented on FLINK-5370:
--

```
git checkout bfdaa3821c71f9fa3a3ff85f56154995d98b18b5
mvn clean -U -X
```

Running the tests several times produces two different results, shown below.

Results 1:
==

Failed tests:
  FileCacheDeleteValidationTest.testFileReuseForNextTask:132 null
  LeaderChangeStateCleanupTest.testReelectionOfSameJobManager:244 TaskManager 
should not be able to register at JobManager.
Tests in error:
  JobClientActorTest.testConnectionTimeoutAfterJobSubmission »  Unexpected 
excep...

Tests run: 1266, Failures: 2, Errors: 1, Skipped: 3


Result 2:
=

error while sending record to Kafka: Test error
java.lang.Exception: Test error
  at 
org.apache.flink.streaming.connectors.kafka.KafkaProducerTest$1.answer(KafkaProducerTest.java:72)
  at 
org.apache.flink.streaming.connectors.kafka.KafkaProducerTest$1.answer(KafkaProducerTest.java:68)
  at 
org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:34)
  at 
org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:91)
  at 
org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29)
  at 
org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:38)
  at 
org.powermock.api.mockito.repackaged.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:60)
  at 
org.apache.kafka.clients.producer.KafkaProducer$$EnhancerByMockitoWithCGLIB$$531e9886.send()
  at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase.invoke(FlinkKafkaProducerBase.java:312)
  at 
org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:38)
  at 
org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness.processElement(OneInputStreamOperatorTestHarness.java:90)
  at 
org.apache.flink.streaming.connectors.kafka.KafkaProducerTest.testPropagateExceptions(KafkaProducerTest.java:113)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
  at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:316)

...


java.lang.Exception: Test error
  at 
org.apache.flink.streaming.connectors.kafka.KafkaProducerTest$1.answer(KafkaProducerTest.java:77)
  at 
org.apache.flink.streaming.connectors.kafka.KafkaProducerTest$1.answer(KafkaProducerTest.java:73)
  at 
org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:34)
  at 
org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:91)
  at 
org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29)
  at 
org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:38)
  at 
org.powermock.api.mockito.repackaged.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:60)
  at 
org.apache.kafka.clients.producer.KafkaProducer$$EnhancerByMockitoWithCGLIB$$6ea769e4.send()

  579  [main] INFO  
org.apache.flink.streaming.connectors.kafka.KafkaProducerTest  -

Test 
testPropagateExceptions(org.apache.flink.streaming.connectors.kafka.KafkaProducerTest)
 successfully run.



---
 T E S T S
---
Running org.apache.flink.python.api.PythonPlanBinderTest
log4j:WARN No appenders could be found for logger 
(org.apache.flink.python.api.PythonPlanBinderTest).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more 
info.

#
waiting forever...

> build failure for unit test of FileInputFormatTest and GlobFilePathFilterTest
> -
>
> Key: FLINK-5370
> URL: https://issues.apache.org/jira/browse/FLINK-5370
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: shijinkui
>
> mvn clean package
> head commit 4a27d2105dd08f323c0be26e79a55986aa97e7bd
> on windows:
> Results :
> Failed tests:
>   FileInputFormatTest.testExcludeFiles:336 Illegal char <:> at index 2: 
> /C:/Users/sjk/AppData/Local/Temp/junit2200257114857246164/anot
> her_file.bin
>   FileInputFormatTest.testReadMultiplePatterns:369 Illegal char <:> at index 
> 2: 

[jira] [Closed] (FLINK-5370) build failure for unit test of FileInputFormatTest and GlobFilePathFilterTest

2017-01-06 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui closed FLINK-5370.

Resolution: Fixed

> build failure for unit test of FileInputFormatTest and GlobFilePathFilterTest
> -
>
> Key: FLINK-5370
> URL: https://issues.apache.org/jira/browse/FLINK-5370
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: shijinkui
>
> mvn clean package
> head commit 4a27d2105dd08f323c0be26e79a55986aa97e7bd
> on windows:
> Results :
> Failed tests:
>   FileInputFormatTest.testExcludeFiles:336 Illegal char <:> at index 2: 
> /C:/Users/sjk/AppData/Local/Temp/junit2200257114857246164/anot
> her_file.bin
>   FileInputFormatTest.testReadMultiplePatterns:369 Illegal char <:> at index 
> 2: /C:/Users/sjk/AppData/Local/Temp/junit1476821885889426
> 068/another_file.bin
> Tests in error:
>   GlobFilePathFilterTest.excludeFilenameWithStart:115 ? InvalidPath Illegal 
> char...
> Tests run: 2084, Failures: 2, Errors: 1, Skipped: 0
> 
> head commit bfdaa3821c71f9fa3a3ff85f56154995d98b18b5
> on osx:
> Results :
> Failed tests:
>   BlobCacheSuccessTest.testBlobCache:108 Could not connect to BlobServer at 
> address 0.0.0.0/0.0.0.0:63065
>   BlobLibraryCacheManagerTest.testLibraryCacheManagerCleanup:114 Could not 
> connect to BlobServer at address 0.0.0.0/0.0.0.0:63143
>   ConnectionUtilsTest.testReturnLocalHostAddressUsingHeuristics:68 Can't 
> assign requested address
> Tests in error:
>   CancelPartitionRequestTest.testCancelPartitionRequest » Bind Can't assign 
> requ...
>   CancelPartitionRequestTest.testDuplicateCancel » Bind Can't assign 
> requested a...
>   ClientTransportErrorHandlingTest.testExceptionOnRemoteClose » Bind Can't 
> assig...
>   ClientTransportErrorHandlingTest.testExceptionOnWrite » Bind Can't assign 
> requ...
>   NettyConnectionManagerTest.testManualConfiguration » Bind Can't assign 
> request...
>   NettyConnectionManagerTest.testMatchingNumberOfArenasAndThreadsAsDefault » 
> Bind
>   NettyServerLowAndHighWatermarkTest.testLowAndHighWatermarks » Bind Can't 
> assig...
>   ServerTransportErrorHandlingTest.testRemoteClose » Bind Can't assign 
> requested...
>   QueryableStateClientTest.testIntegrationWithKvStateServer » Bind Can't 
> assign ...
>   KvStateClientTest.testClientServerIntegration » Bind Can't assign requested 
> ad...
>   KvStateClientTest.testConcurrentQueries » Bind Can't assign requested 
> address
>   KvStateClientTest.testFailureClosesChannel » Bind Can't assign requested 
> addre...
>   KvStateClientTest.testServerClosesChannel » Bind Can't assign requested 
> addres...
>   KvStateClientTest.testSimpleRequests » Bind Can't assign requested address
>   KvStateServerTest.testSimpleRequest » Bind Can't assign requested address
> Tests run: 1266, Failures: 3, Errors: 15, Skipped: 3
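
The "Bind ... Can't assign requested address" failures listed above are typically a platform difference: macOS is stricter than Linux about which addresses a socket may dial, so a client connecting to the wildcard address (0.0.0.0) or to an address the host does not own fails there. A minimal, hypothetical Java sketch of the portable pattern (bind the server to the wildcard address, but have clients dial loopback); this is an illustration, not Flink's test code:

```java
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class BindDemo {
    // Opens a server bound to the wildcard address on an ephemeral port,
    // then connects a client via the loopback address, the form that works
    // on both Linux and macOS (dialing "0.0.0.0" does not).
    static boolean connectViaLoopback() throws Exception {
        try (ServerSocket server =
                 new ServerSocket(0, 1, InetAddress.getByName("0.0.0.0"))) {
            try (Socket client =
                     new Socket(InetAddress.getLoopbackAddress(), server.getLocalPort())) {
                return client.isConnected();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(connectViaLoopback()); // true
    }
}
```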



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5370) build failure for unit test of FileInputFormatTest and GlobFilePathFilterTest

2017-01-06 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15804596#comment-15804596
 ] 

shijinkui commented on FLINK-5370:
--

[~StephanEwen] Now it's OK. mvn clean test passes now.

[INFO] flink-fs-tests . SUCCESS [ 18.780 s]
[INFO] flink-java8  SUCCESS [  3.109 s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 47:38 min
[INFO] Finished at: 2017-01-06T21:52:18+08:00
[INFO] Final Memory: 186M/592M

> build failure for unit test of FileInputFormatTest and GlobFilePathFilterTest
> -
>
> Key: FLINK-5370
> URL: https://issues.apache.org/jira/browse/FLINK-5370
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: shijinkui
>
> mvn clean package
> head commit 4a27d2105dd08f323c0be26e79a55986aa97e7bd
> on windows:
> Results :
> Failed tests:
>   FileInputFormatTest.testExcludeFiles:336 Illegal char <:> at index 2: 
> /C:/Users/sjk/AppData/Local/Temp/junit2200257114857246164/anot
> her_file.bin
>   FileInputFormatTest.testReadMultiplePatterns:369 Illegal char <:> at index 
> 2: /C:/Users/sjk/AppData/Local/Temp/junit1476821885889426
> 068/another_file.bin
> Tests in error:
>   GlobFilePathFilterTest.excludeFilenameWithStart:115 ? InvalidPath Illegal 
> char...
> Tests run: 2084, Failures: 2, Errors: 1, Skipped: 0
> 
> head commit bfdaa3821c71f9fa3a3ff85f56154995d98b18b5
> on osx:
> Results :
> Failed tests:
>   BlobCacheSuccessTest.testBlobCache:108 Could not connect to BlobServer at 
> address 0.0.0.0/0.0.0.0:63065
>   BlobLibraryCacheManagerTest.testLibraryCacheManagerCleanup:114 Could not 
> connect to BlobServer at address 0.0.0.0/0.0.0.0:63143
>   ConnectionUtilsTest.testReturnLocalHostAddressUsingHeuristics:68 Can't 
> assign requested address
> Tests in error:
>   CancelPartitionRequestTest.testCancelPartitionRequest » Bind Can't assign 
> requ...
>   CancelPartitionRequestTest.testDuplicateCancel » Bind Can't assign 
> requested a...
>   ClientTransportErrorHandlingTest.testExceptionOnRemoteClose » Bind Can't 
> assig...
>   ClientTransportErrorHandlingTest.testExceptionOnWrite » Bind Can't assign 
> requ...
>   NettyConnectionManagerTest.testManualConfiguration » Bind Can't assign 
> request...
>   NettyConnectionManagerTest.testMatchingNumberOfArenasAndThreadsAsDefault » 
> Bind
>   NettyServerLowAndHighWatermarkTest.testLowAndHighWatermarks » Bind Can't 
> assig...
>   ServerTransportErrorHandlingTest.testRemoteClose » Bind Can't assign 
> requested...
>   QueryableStateClientTest.testIntegrationWithKvStateServer » Bind Can't 
> assign ...
>   KvStateClientTest.testClientServerIntegration » Bind Can't assign requested 
> ad...
>   KvStateClientTest.testConcurrentQueries » Bind Can't assign requested 
> address
>   KvStateClientTest.testFailureClosesChannel » Bind Can't assign requested 
> addre...
>   KvStateClientTest.testServerClosesChannel » Bind Can't assign requested 
> addres...
>   KvStateClientTest.testSimpleRequests » Bind Can't assign requested address
>   KvStateServerTest.testSimpleRequest » Bind Can't assign requested address
> Tests run: 1266, Failures: 3, Errors: 15, Skipped: 3





[jira] [Created] (FLINK-5399) Add more information to checkpoint result of TriggerSavepointSuccess

2016-12-28 Thread shijinkui (JIRA)
shijinkui created FLINK-5399:


 Summary: Add more information to checkpoint result of 
TriggerSavepointSuccess
 Key: FLINK-5399
 URL: https://issues.apache.org/jira/browse/FLINK-5399
 Project: Flink
  Issue Type: Improvement
  Components: State Backends, Checkpointing
Reporter: shijinkui


Add checkpointId and triggerTime to TriggerSavepointSuccess.

This would let us record the history of triggered checkpoints outside the Flink system.
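
A hypothetical sketch of the enriched success message proposed above. Flink's real TriggerSavepointSuccess is a Scala case class, and the field types here are stand-ins; the two extra fields are the ones the issue asks for:

```java
public class TriggerSavepointSuccessSketch {
    static final class TriggerSavepointSuccess {
        final long jobId;           // stand-in for Flink's JobID type
        final String savepointPath;
        final long checkpointId;    // proposed addition
        final long triggerTime;     // proposed addition (epoch millis)

        TriggerSavepointSuccess(long jobId, String savepointPath,
                                long checkpointId, long triggerTime) {
            this.jobId = jobId;
            this.savepointPath = savepointPath;
            this.checkpointId = checkpointId;
            this.triggerTime = triggerTime;
        }
    }

    public static void main(String[] args) {
        TriggerSavepointSuccess msg = new TriggerSavepointSuccess(
                42L, "hdfs:///savepoints/sp-1", 7L, System.currentTimeMillis());
        // With checkpointId and triggerTime carried in the message, an external
        // system can log checkpoint history without querying Flink again.
        System.out.println(msg.checkpointId + " @ " + msg.savepointPath);
    }
}
```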







[jira] [Updated] (FLINK-5395) support locally build distribution by script create_release_files.sh

2016-12-27 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5395:
-
Description: 
create_release_files.sh only builds official Flink releases. It is hard to build 
a custom local Flink release distribution.

Let create_release_files.sh support:
1. a custom git repo url
2. building a specific scala and hadoop version
3. adding `tools/flink` to .gitignore
4. a usage message


  was:
create_release_files.sh is build flink release only. It's hard to build custom 
local Flink release distribution.

Let create_release_files.sh support:
1. custom git repo url
2. custom build special scala and hadoop version
3. fix flink-dist opt.xml have no replace the scala version by 
change-scala-version.sh


> support locally build distribution by script create_release_files.sh
> 
>
> Key: FLINK-5395
> URL: https://issues.apache.org/jira/browse/FLINK-5395
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: shijinkui
>
> create_release_files.sh only builds official Flink releases. It is hard to 
> build a custom local Flink release distribution.
> Let create_release_files.sh support:
> 1. a custom git repo url
> 2. building a specific scala and hadoop version
> 3. adding `tools/flink` to .gitignore
> 4. a usage message





[jira] [Updated] (FLINK-5396) flink-dist replace scala version in opt.xml by change-scala-version.sh

2016-12-27 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5396:
-
Component/s: Build System

> flink-dist replace scala version in opt.xml by change-scala-version.sh
> --
>
> Key: FLINK-5396
> URL: https://issues.apache.org/jira/browse/FLINK-5396
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: shijinkui
>
> flink-dist is configured to replace the Scala version in bin.xml, but not in opt.xml





[jira] [Created] (FLINK-5396) flink-dist replace scala version in opt.xml by change-scala-version.sh

2016-12-27 Thread shijinkui (JIRA)
shijinkui created FLINK-5396:


 Summary: flink-dist replace scala version in opt.xml by 
change-scala-version.sh
 Key: FLINK-5396
 URL: https://issues.apache.org/jira/browse/FLINK-5396
 Project: Flink
  Issue Type: Improvement
Reporter: shijinkui


flink-dist is configured to replace the Scala version in bin.xml, but not in opt.xml





[jira] [Created] (FLINK-5395) support locally build distribution by script create_release_files.sh

2016-12-27 Thread shijinkui (JIRA)
shijinkui created FLINK-5395:


 Summary: support locally build distribution by script 
create_release_files.sh
 Key: FLINK-5395
 URL: https://issues.apache.org/jira/browse/FLINK-5395
 Project: Flink
  Issue Type: Improvement
  Components: Build System
Reporter: shijinkui


create_release_files.sh only builds official Flink releases. It is hard to build 
a custom local Flink release distribution.

Let create_release_files.sh support:
1. a custom git repo url
2. building a specific scala and hadoop version
3. a fix for flink-dist opt.xml, which does not get its scala version replaced 
by change-scala-version.sh





[jira] [Closed] (FLINK-5175) StreamExecutionEnvironment's set function return `this` instead of void

2016-12-21 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui closed FLINK-5175.

Resolution: Won't Fix

> StreamExecutionEnvironment's set function return `this` instead of void
> ---
>
> Key: FLINK-5175
> URL: https://issues.apache.org/jira/browse/FLINK-5175
> Project: Flink
>  Issue Type: Sub-task
>  Components: DataStream API
>Reporter: shijinkui
> Fix For: 2.0.0
>
>
> from FLINK-5167.
> for example :
> public void setNumberOfExecutionRetries(int numberOfExecutionRetries)
> { config.setNumberOfExecutionRetries(numberOfExecutionRetries); }
> change to:
> public StreamExecutionEnvironment setNumberOfExecutionRetries(int 
> numberOfExecutionRetries)
> { config.setNumberOfExecutionRetries(numberOfExecutionRetries); return this; }





[jira] [Updated] (FLINK-5370) build failure for unit test of FileInputFormatTest and GlobFilePathFilterTest

2016-12-21 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5370:
-
Description: 
mvn clean package
head commit 4a27d2105dd08f323c0be26e79a55986aa97e7bd
on windows:

Results :

Failed tests:
  FileInputFormatTest.testExcludeFiles:336 Illegal char <:> at index 2: 
/C:/Users/sjk/AppData/Local/Temp/junit2200257114857246164/anot
her_file.bin
  FileInputFormatTest.testReadMultiplePatterns:369 Illegal char <:> at index 2: 
/C:/Users/sjk/AppData/Local/Temp/junit1476821885889426
068/another_file.bin
Tests in error:
  GlobFilePathFilterTest.excludeFilenameWithStart:115 ? InvalidPath Illegal 
char...

Tests run: 2084, Failures: 2, Errors: 1, Skipped: 0



head commit bfdaa3821c71f9fa3a3ff85f56154995d98b18b5
on osx:

Results :

Failed tests:
  BlobCacheSuccessTest.testBlobCache:108 Could not connect to BlobServer at 
address 0.0.0.0/0.0.0.0:63065
  BlobLibraryCacheManagerTest.testLibraryCacheManagerCleanup:114 Could not 
connect to BlobServer at address 0.0.0.0/0.0.0.0:63143
  ConnectionUtilsTest.testReturnLocalHostAddressUsingHeuristics:68 Can't assign 
requested address
Tests in error:
  CancelPartitionRequestTest.testCancelPartitionRequest » Bind Can't assign 
requ...
  CancelPartitionRequestTest.testDuplicateCancel » Bind Can't assign requested 
a...
  ClientTransportErrorHandlingTest.testExceptionOnRemoteClose » Bind Can't 
assig...
  ClientTransportErrorHandlingTest.testExceptionOnWrite » Bind Can't assign 
requ...
  NettyConnectionManagerTest.testManualConfiguration » Bind Can't assign 
request...
  NettyConnectionManagerTest.testMatchingNumberOfArenasAndThreadsAsDefault » 
Bind
  NettyServerLowAndHighWatermarkTest.testLowAndHighWatermarks » Bind Can't 
assig...
  ServerTransportErrorHandlingTest.testRemoteClose » Bind Can't assign 
requested...
  QueryableStateClientTest.testIntegrationWithKvStateServer » Bind Can't assign 
...
  KvStateClientTest.testClientServerIntegration » Bind Can't assign requested 
ad...
  KvStateClientTest.testConcurrentQueries » Bind Can't assign requested address
  KvStateClientTest.testFailureClosesChannel » Bind Can't assign requested 
addre...
  KvStateClientTest.testServerClosesChannel » Bind Can't assign requested 
addres...
  KvStateClientTest.testSimpleRequests » Bind Can't assign requested address
  KvStateServerTest.testSimpleRequest » Bind Can't assign requested address

Tests run: 1266, Failures: 3, Errors: 15, Skipped: 3


  was:
mvn clean package

head commit 4a27d2105dd08f323c0be26e79a55986aa97e7bd


Results :

Failed tests:
  FileInputFormatTest.testExcludeFiles:336 Illegal char <:> at index 2: 
/C:/Users/sjk/AppData/Local/Temp/junit2200257114857246164/anot
her_file.bin
  FileInputFormatTest.testReadMultiplePatterns:369 Illegal char <:> at index 2: 
/C:/Users/sjk/AppData/Local/Temp/junit1476821885889426
068/another_file.bin
Tests in error:
  GlobFilePathFilterTest.excludeFilenameWithStart:115 ? InvalidPath Illegal 
char...

Tests run: 2084, Failures: 2, Errors: 1, Skipped: 0


> build failure for unit test of FileInputFormatTest and GlobFilePathFilterTest
> -
>
> Key: FLINK-5370
> URL: https://issues.apache.org/jira/browse/FLINK-5370
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: shijinkui
>
> mvn clean package
> head commit 4a27d2105dd08f323c0be26e79a55986aa97e7bd
> on windows:
> Results :
> Failed tests:
>   FileInputFormatTest.testExcludeFiles:336 Illegal char <:> at index 2: 
> /C:/Users/sjk/AppData/Local/Temp/junit2200257114857246164/anot
> her_file.bin
>   FileInputFormatTest.testReadMultiplePatterns:369 Illegal char <:> at index 
> 2: /C:/Users/sjk/AppData/Local/Temp/junit1476821885889426
> 068/another_file.bin
> Tests in error:
>   GlobFilePathFilterTest.excludeFilenameWithStart:115 ? InvalidPath Illegal 
> char...
> Tests run: 2084, Failures: 2, Errors: 1, Skipped: 0
> 
> head commit bfdaa3821c71f9fa3a3ff85f56154995d98b18b5
> on osx:
> Results :
> Failed tests:
>   BlobCacheSuccessTest.testBlobCache:108 Could not connect to BlobServer at 
> address 0.0.0.0/0.0.0.0:63065
>   BlobLibraryCacheManagerTest.testLibraryCacheManagerCleanup:114 Could not 
> connect to BlobServer at address 0.0.0.0/0.0.0.0:63143
>   ConnectionUtilsTest.testReturnLocalHostAddressUsingHeuristics:68 Can't 
> assign requested address
> Tests in error:
>   CancelPartitionRequestTest.testCancelPartitionRequest » Bind Can't assign 
> requ...
>   CancelPartitionRequestTest.testDuplicateCancel » Bind Can't assign 
> requested a...
>   ClientTransportErrorHandlingTest.testExceptionOnRemoteClose » Bind Can't 
> assig...
>   ClientTransportErrorHandlingTest.testExceptionOnWrite » Bind Can't assign 
> requ...
>   

[jira] [Comment Edited] (FLINK-5370) build failure for unit test of FileInputFormatTest and GlobFilePathFilterTest

2016-12-20 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15766023#comment-15766023
 ] 

shijinkui edited comment on FLINK-5370 at 12/21/16 3:52 AM:


That run was on Windows. This afternoon I'll test on OSX again.


was (Author: shijinkui):
window

> build failure for unit test of FileInputFormatTest and GlobFilePathFilterTest
> -
>
> Key: FLINK-5370
> URL: https://issues.apache.org/jira/browse/FLINK-5370
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: shijinkui
>
> mvn clean package
> head commit 4a27d2105dd08f323c0be26e79a55986aa97e7bd
> Results :
> Failed tests:
>   FileInputFormatTest.testExcludeFiles:336 Illegal char <:> at index 2: 
> /C:/Users/sjk/AppData/Local/Temp/junit2200257114857246164/anot
> her_file.bin
>   FileInputFormatTest.testReadMultiplePatterns:369 Illegal char <:> at index 
> 2: /C:/Users/sjk/AppData/Local/Temp/junit1476821885889426
> 068/another_file.bin
> Tests in error:
>   GlobFilePathFilterTest.excludeFilenameWithStart:115 ? InvalidPath Illegal 
> char...
> Tests run: 2084, Failures: 2, Errors: 1, Skipped: 0





[jira] [Commented] (FLINK-5370) build failure for unit test of FileInputFormatTest and GlobFilePathFilterTest

2016-12-20 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15766023#comment-15766023
 ] 

shijinkui commented on FLINK-5370:
--

window

> build failure for unit test of FileInputFormatTest and GlobFilePathFilterTest
> -
>
> Key: FLINK-5370
> URL: https://issues.apache.org/jira/browse/FLINK-5370
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: shijinkui
>
> mvn clean package
> head commit 4a27d2105dd08f323c0be26e79a55986aa97e7bd
> Results :
> Failed tests:
>   FileInputFormatTest.testExcludeFiles:336 Illegal char <:> at index 2: 
> /C:/Users/sjk/AppData/Local/Temp/junit2200257114857246164/anot
> her_file.bin
>   FileInputFormatTest.testReadMultiplePatterns:369 Illegal char <:> at index 
> 2: /C:/Users/sjk/AppData/Local/Temp/junit1476821885889426
> 068/another_file.bin
> Tests in error:
>   GlobFilePathFilterTest.excludeFilenameWithStart:115 ? InvalidPath Illegal 
> char...
> Tests run: 2084, Failures: 2, Errors: 1, Skipped: 0





[jira] [Updated] (FLINK-5370) build failure for unit test of FileInputFormatTest and GlobFilePathFilterTest

2016-12-19 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5370:
-
Description: 
mvn clean package

head commit 4a27d2105dd08f323c0be26e79a55986aa97e7bd


Results :

Failed tests:
  FileInputFormatTest.testExcludeFiles:336 Illegal char <:> at index 2: 
/C:/Users/sjk/AppData/Local/Temp/junit2200257114857246164/anot
her_file.bin
  FileInputFormatTest.testReadMultiplePatterns:369 Illegal char <:> at index 2: 
/C:/Users/sjk/AppData/Local/Temp/junit1476821885889426
068/another_file.bin
Tests in error:
  GlobFilePathFilterTest.excludeFilenameWithStart:115 ? InvalidPath Illegal 
char...

Tests run: 2084, Failures: 2, Errors: 1, Skipped: 0

  was:
mvn clean package

head commit 4a27d2105dd08f323c0be26e79a55986aa97e7bd


Results :

Failed tests:
  FileInputFormatTest.testExcludeFiles:336 Illegal char <:> at index 2: 
/C:/Users/S00383~1/AppData/Local/Temp/junit2200257114857246164/anot
her_file.bin
  FileInputFormatTest.testReadMultiplePatterns:369 Illegal char <:> at index 2: 
/C:/Users/S00383~1/AppData/Local/Temp/junit1476821885889426
068/another_file.bin
Tests in error:
  GlobFilePathFilterTest.excludeFilenameWithStart:115 ? InvalidPath Illegal 
char...

Tests run: 2084, Failures: 2, Errors: 1, Skipped: 0


> build failure for unit test of FileInputFormatTest and GlobFilePathFilterTest
> -
>
> Key: FLINK-5370
> URL: https://issues.apache.org/jira/browse/FLINK-5370
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: shijinkui
>
> mvn clean package
> head commit 4a27d2105dd08f323c0be26e79a55986aa97e7bd
> Results :
> Failed tests:
>   FileInputFormatTest.testExcludeFiles:336 Illegal char <:> at index 2: 
> /C:/Users/sjk/AppData/Local/Temp/junit2200257114857246164/anot
> her_file.bin
>   FileInputFormatTest.testReadMultiplePatterns:369 Illegal char <:> at index 
> 2: /C:/Users/sjk/AppData/Local/Temp/junit1476821885889426
> 068/another_file.bin
> Tests in error:
>   GlobFilePathFilterTest.excludeFilenameWithStart:115 ? InvalidPath Illegal 
> char...
> Tests run: 2084, Failures: 2, Errors: 1, Skipped: 0





[jira] [Created] (FLINK-5370) build failure for unit test of FileInputFormatTest and GlobFilePathFilterTest

2016-12-19 Thread shijinkui (JIRA)
shijinkui created FLINK-5370:


 Summary: build failure for unit test of FileInputFormatTest and 
GlobFilePathFilterTest
 Key: FLINK-5370
 URL: https://issues.apache.org/jira/browse/FLINK-5370
 Project: Flink
  Issue Type: Test
  Components: Tests
Reporter: shijinkui


mvn clean package

head commit 4a27d2105dd08f323c0be26e79a55986aa97e7bd


Results :

Failed tests:
  FileInputFormatTest.testExcludeFiles:336 Illegal char <:> at index 2: 
/C:/Users/S00383~1/AppData/Local/Temp/junit2200257114857246164/anot
her_file.bin
  FileInputFormatTest.testReadMultiplePatterns:369 Illegal char <:> at index 2: 
/C:/Users/S00383~1/AppData/Local/Temp/junit1476821885889426
068/another_file.bin
Tests in error:
  GlobFilePathFilterTest.excludeFilenameWithStart:115 ? InvalidPath Illegal 
char...

Tests run: 2084, Failures: 2, Errors: 1, Skipped: 0
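
The "Illegal char <:>" failures above are a classic Windows pitfall: a file URI's raw path component keeps a leading slash ("/C:/Users/..."), and feeding that string to java.nio path parsing on Windows throws InvalidPathException at the ":". A hypothetical sketch of the difference (paths and class names are illustrative, not Flink's code); the portable fix is to hand the whole URI to the path API rather than its raw path string:

```java
import java.net.URI;
import java.nio.file.Path;
import java.nio.file.Paths;

public class WindowsPathDemo {
    // The raw path component of a file URI keeps a leading slash; on Windows,
    // Paths.get("/C:/...") rejects the ":" ("Illegal char <:> at index 2").
    static String rawPath(String fileUri) {
        return URI.create(fileUri).getPath();
    }

    // Portable alternative: let the file-system provider interpret the URI
    // itself (it strips the leading slash on Windows, keeps it on POSIX).
    static Path portablePath(String fileUri) {
        return Paths.get(URI.create(fileUri));
    }

    public static void main(String[] args) {
        String uri = "file:/C:/Users/sjk/AppData/Local/Temp/another_file.bin";
        System.out.println(rawPath(uri)); // /C:/Users/sjk/AppData/Local/Temp/another_file.bin
    }
}
```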





[jira] [Commented] (FLINK-5221) Checkpointed workless in Window Operator

2016-12-05 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15724302#comment-15724302
 ] 

shijinkui commented on FLINK-5221:
--

The root cause is that the low-level Java API receives a class from the 
high-level Scala API, even though the Java API has no dependency on the Scala 
API. Both just happen to be on the same classpath; otherwise this would fail 
with a ClassNotFoundException at runtime.

We can add a new interface method `getUserFunction` that returns the wrapped 
userFunction object.

More generally, which of the Flink Java and Scala APIs is the better fit as the 
low-level API? I think a Scala trait is better than a Java interface here.
[~till.rohrmann], do we have a plan for a unified API in the future, for 
example in 2.0?
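
The wrapper problem described in the comment above can be sketched in a few lines of plain Java. All class names here are stand-ins mirroring the comment (ScalaWindowFunctionWrapper, getUserFunction); this is an illustration of the mechanism, not Flink's actual code:

```java
public class WrapperDemo {
    interface Checkpointed { }

    // The user's window function really does implement Checkpointed.
    static class UserWindowFunction implements Checkpointed { }

    // The wrapper holds the user function but does not itself implement
    // Checkpointed, so an instanceof check on the wrapper comes back false.
    static class ScalaWindowFunctionWrapper {
        final Object func;
        ScalaWindowFunctionWrapper(Object func) { this.func = func; }
        // The proposed accessor: let operators reach the real user function.
        Object getUserFunction() { return func; }
    }

    public static void main(String[] args) {
        ScalaWindowFunctionWrapper wrapped =
            new ScalaWindowFunctionWrapper(new UserWindowFunction());
        System.out.println(wrapped instanceof Checkpointed);                  // false: snapshot skipped
        System.out.println(wrapped.getUserFunction() instanceof Checkpointed); // true: unwrapping finds it
    }
}
```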

> Checkpointed workless in Window Operator
> 
>
> Key: FLINK-5221
> URL: https://issues.apache.org/jira/browse/FLINK-5221
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Affects Versions: 1.2.0, 1.1.3
> Environment: SUSE
>Reporter: Syinchwun Leo
>  Labels: windows
> Fix For: 1.2.0
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> When window OPERATOR making checkpoint like this:
> class WindowStatistic extends WindowFunction[Event, Int, Tuple, TimeWindow] 
> with Checkpointed[Option[List[Event]]] {
> override def apply()
> override def snapshotState()...
> override def restoreState()
> }
> The window operator never invokes the user-defined "snapshotState()". In 
> debug mode, line 123 in AbstractUdfStreamOperator.java returns false when 
> checking whether the window function is a Checkpointed instance, so the 
> user-defined state is never snapshotted. I think the problem is the 
> userFunction variable: it is a ScalaWindowFunctionWrapper object, which does 
> not reveal that the user-defined window function implements the Checkpointed 
> interface. The actual user-defined window function is kept in the "func" 
> field of userFunction. 





[jira] [Commented] (FLINK-5217) Deprecated interface Checkpointed make clear suggestion

2016-12-01 Thread shijinkui (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15711460#comment-15711460
 ] 

shijinkui commented on FLINK-5217:
--

Hi [~StephanEwen], can you complete this?

> Deprecated interface Checkpointed make clear suggestion
> ---
>
> Key: FLINK-5217
> URL: https://issues.apache.org/jira/browse/FLINK-5217
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Reporter: shijinkui
>
> package org.apache.flink.streaming.api.checkpoint;
> @Deprecated
> @PublicEvolving
> public interface Checkpointed extends 
> CheckpointedRestoring
> This interface should carry a clear suggestion of the version in which it 
> will be removed, and of the interface to use instead.





[jira] [Created] (FLINK-5217) Deprecated interface Checkpointed make clear suggestion

2016-12-01 Thread shijinkui (JIRA)
shijinkui created FLINK-5217:


 Summary: Deprecated interface Checkpointed make clear suggestion
 Key: FLINK-5217
 URL: https://issues.apache.org/jira/browse/FLINK-5217
 Project: Flink
  Issue Type: Improvement
  Components: DataStream API
Reporter: shijinkui



package org.apache.flink.streaming.api.checkpoint;
@Deprecated
@PublicEvolving
public interface Checkpointed extends 
CheckpointedRestoring

This interface should carry a clear suggestion of the version in which it will 
be removed, and of the interface to use instead.
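
A sketch of what such a clarified deprecation notice could look like. The replacement named here, CheckpointedFunction, is the interface Flink 1.2 introduced for this purpose, but the version numbers and the simplified signatures below are illustrative, not Flink's exact code:

```java
public class DeprecationSketch {
    interface CheckpointedRestoring<T> { void restoreState(T state) throws Exception; }

    interface CheckpointedFunction { /* replacement API, elided */ }

    /**
     * @deprecated Deprecated since Flink 1.2 and planned for removal in a
     *             later major release. Implement {@code CheckpointedFunction}
     *             (or {@code ListCheckpointed}) instead.
     */
    @Deprecated
    interface Checkpointed<T extends java.io.Serializable>
            extends CheckpointedRestoring<T> {
        T snapshotState(long checkpointId, long checkpointTimestamp) throws Exception;
    }

    public static void main(String[] args) {
        // @Deprecated has runtime retention, so tools and code can see it:
        System.out.println(Checkpointed.class.isAnnotationPresent(Deprecated.class)); // true
    }
}
```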





[jira] [Created] (FLINK-5175) StreamExecutionEnvironment's set function return `this` instead of void

2016-11-28 Thread shijinkui (JIRA)
shijinkui created FLINK-5175:


 Summary: StreamExecutionEnvironment's set function return `this` 
instead of void
 Key: FLINK-5175
 URL: https://issues.apache.org/jira/browse/FLINK-5175
 Project: Flink
  Issue Type: Sub-task
  Components: DataStream API
Reporter: shijinkui


for example :
public void setNumberOfExecutionRetries(int numberOfExecutionRetries)
{ config.setNumberOfExecutionRetries(numberOfExecutionRetries); }
change to:
public StreamExecutionEnvironment setNumberOfExecutionRetries(int 
numberOfExecutionRetries)
{ config.setNumberOfExecutionRetries(numberOfExecutionRetries); return this; }
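
The motivation for returning `this` is fluent chaining. A self-contained sketch of the before/after behavior (FluentEnv and its nested Config are stand-ins for StreamExecutionEnvironment and its configuration, not Flink classes):

```java
public class FluentEnv {
    static class Config {
        private int retries;
        void setNumberOfExecutionRetries(int n) { this.retries = n; }
        int getNumberOfExecutionRetries() { return retries; }
    }

    private final Config config = new Config();

    // Returning `this` instead of void lets configuration calls chain.
    public FluentEnv setNumberOfExecutionRetries(int n) {
        config.setNumberOfExecutionRetries(n);
        return this;
    }

    public int getNumberOfExecutionRetries() {
        return config.getNumberOfExecutionRetries();
    }

    public static void main(String[] args) {
        int r = new FluentEnv()
                .setNumberOfExecutionRetries(3)   // chained call, enabled by `return this`
                .getNumberOfExecutionRetries();
        System.out.println(r); // 3
    }
}
```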



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-5175) StreamExecutionEnvironment's set function return `this` instead of void

2016-11-28 Thread shijinkui (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shijinkui updated FLINK-5175:
-
Description: 
from FLINK-5167.
for example :
public void setNumberOfExecutionRetries(int numberOfExecutionRetries)
{ config.setNumberOfExecutionRetries(numberOfExecutionRetries); }
change to:
public StreamExecutionEnvironment setNumberOfExecutionRetries(int 
numberOfExecutionRetries)
{ config.setNumberOfExecutionRetries(numberOfExecutionRetries); return this; }

  was:
for example :
public void setNumberOfExecutionRetries(int numberOfExecutionRetries)
{ config.setNumberOfExecutionRetries(numberOfExecutionRetries); }
change to:
public StreamExecutionEnvironment setNumberOfExecutionRetries(int 
numberOfExecutionRetries)
{ config.setNumberOfExecutionRetries(numberOfExecutionRetries); return this; }


> StreamExecutionEnvironment's set function return `this` instead of void
> ---
>
> Key: FLINK-5175
> URL: https://issues.apache.org/jira/browse/FLINK-5175
> Project: Flink
>  Issue Type: Sub-task
>  Components: DataStream API
>Reporter: shijinkui
> Fix For: 2.0.0
>
>
> from FLINK-5167.
> for example :
> public void setNumberOfExecutionRetries(int numberOfExecutionRetries)
> { config.setNumberOfExecutionRetries(numberOfExecutionRetries); }
> change to:
> public StreamExecutionEnvironment setNumberOfExecutionRetries(int 
> numberOfExecutionRetries)
> { config.setNumberOfExecutionRetries(numberOfExecutionRetries); return this; }




